The two of us

The-other-Wendy-Grossman-who-is-a-journalist came to my attention in the 1990s by writing a story about something Internettish while a student at Duke University. Eventually, I got email for her (which I duly forwarded) and, once, a transatlantic phone call from a very excited but misinformed PR person. She got married, changed her name, and faded out of my view.

By contrast, Naomi Klein’s problem has only inflated over time. The “doppelganger” in her new book, Doppelganger: A Trip into the Mirror World, is “Other Naomi” – that is, the American author Naomi Wolf, whose career launched in 1990 with The Beauty Myth. “Other Naomi” has spiraled into conspiracy theories, anti-government paranoia, and wild unscientific claims. Klein is Canadian; her books include No Logo (1999) and The Shock Doctrine (2007). There is, as Klein acknowledges, a lot of *seeming* overlap in that a keyword search might surface both.

I had them confused myself until Wolf’s 2019 appearance on BBC radio, when a historian dished out a live-on-air teardown of the basis of her latest book. This author’s nightmare is the inciting incident Klein believes turned Wolf from liberal feminist author into a right-wing media star. The publisher withdrew and pulped the book, and Wolf herself was globally mocked. What does a high-profile liberal who’s lost her platform do now?

When the covid pandemic came, Wolf embraced every available mad theory, and her liberal past made her a darling of the extremist right-wing media. Increasingly obsessed with following Wolf’s exploits, which often popped up in her online mentions, Klein discovered that social media algorithms were exacerbating the confusion. She began to silence herself, fearing that any response she made would increase the algorithms’ tendency to conflate the two Naomis. She also abandoned an article deploring Bill Gates’s stance protecting corporate patents instead of spreading vaccines as widely as possible. (The Gates Foundation later changed its position.)

Klein tells this story honestly, admitting to becoming addictively obsessed, promising to stop, then “relapsing” the first time she was alone in her car.

The appearance of overlap through keyword similarities is not limited to the two Naomis, as Klein finds on further investigation. YouTube stars like Steve Bannon, who founded Breitbart and served as Donald Trump’s chief strategist during his first months in the White House, wrote this playbook: seize on under-acknowledged legitimate grievances, turn them into right-wing talking points, and recruit the previously-ignored victims as allies and supporters. The lab leak hypothesis, the advice being given by scientific authorities, why shopping malls were open when schools were closed, the profiteering (she correctly calls out the UK), the behavior of corporate pharma – all of these were and are valid topics for investigation, discussion, and debate. Their twisted adoption as right-wing causes made many on the side of public health harden their stance to avoid sounding like “one of them”. The result: words lost their meaning and their power.

These are problems no amount of content moderation or online safety can solve. And even if it could, is it right to ask underpaid workers in what Klein terms the “Shadowlands” to clean up our society’s nasty side so we don’t have to see it?

Klein begins with a single doppelganger, then expands into psychology, movies, TV, and other fiction, and ends by navigating expanding circles; the extreme right-wing media’s “Mirror World” is our society’s Mr Hyde. As she warns, those who live in what a friend termed “my blue bubble” may never hear about the media and commentators she investigates. After Wolf’s disgrace on the BBC, she “disappeared”, in reality going on to develop a much bigger platform in the Mirror World. But “they” know and watch us, and use our blind spots to expand their reach and recruit new and unexpected sectors of the population. Klein writes that she encounters many people who’ve “lost” a family member to the Mirror World.

This was the ground explored in 2015 by the filmmaker Jen Senko, who found the same thing when researching her documentary The Brainwashing of My Dad. Senko’s exploration leads from the 1960s John Birch Society through to Rush Limbaugh and Roger Ailes’s intentional formation of Fox News. Klein here is telling the next stage of that same story. The Mirror World is not an accident of technology; it was a plan, and then technology came along and helped build it further in new directions.

As Klein searches for an explanation for what she calls “diagonalism” – the phenomenon that sees a former Obama voter now vote for Trump, or a former liberal feminist shrug at the Dobbs decision – she finds it possible to admire the Mirror World’s inhabitants for one characteristic: “they still believe in the idea of changing reality”.

This is the heart of much of the alienation I see in some friends: those who want structural change say today’s centrist left wing favors the status quo, while those who are more profoundly disaffected dismiss the Bidens and Clintons as almost as corrupt as Trump. The pandemic increased their discontent; it did not take long for early optimistic hopes of “build back better” to fade into “I want my normal”.

Klein ends with hope. As both the US and UK wind toward the next presidential/general election, it’s in scarce supply.

Illustrations: Charlie Chaplin as one of his doppelgangers in The Great Dictator (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Surveillance machines on wheels

After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: Sorry, Sorry, Sorry

Sorry, Sorry, Sorry: The Case for Good Apologies
By Marjorie Ingall and Susan McCarthy
Gallery Books
ISBN: 978-1-9821-6349-5

Years ago, a friend of mine deplored apologies: “People just apologize because they want you to like them,” he observed.

That’s certainly true at least some of the time, but as Marjorie Ingall and Susan McCarthy argue at length in their book Sorry, Sorry, Sorry, well-constructed and presented apologies can make the world a better place. For the recipient, they can remove the sting of old wrongs; for the giver, they can ease the burden of old shames.

What you shouldn’t do, when apologizing, is what self-help groups sometimes describe as “plan the outcome”. That is, you present your apology and you take your chances. Follow Ingall and McCarthy’s six steps to construct your apology, then hope for, but do not demand, forgiveness, and don’t mess the whole thing up by concluding with, “So, we’re good?”

Their six steps to a good apology:
1. Say you’re sorry.
2. For what you did.
3. Show you understand why it was bad.
4. Only explain if you need to; don’t make excuses.
5. Say why it won’t happen again.
6. Offer to make up for it.
Six and a half. Listen.

It’s certainly true that many apologies don’t have the desired effect. Often, it’s because the apology itself is terrible. Through their SorryWatch blog, Ingall and McCarthy have been collecting and analyzing bad public apologies for years (obDisclosure: I send in tips on apologies in tennis and British politics). Many of these appear in the book, organized into chapters on apologies from doctors and medical establishments, large corporations, and governments and nation-states. Alongside these are chapters on the psychology of apologies, teaching children to apologize, and the practical realities relating to gender, race, and other disparities. Women, for example, are more likely to apologize well, but take greater risk when they do – and are less likely to be forgiven.

Some templates for *bad* apologies when you’ve done something hurtful (do not try this at home!): “I’m sorry if…”, “I’m sorry that you felt…”, “I regret…”, and, of course, the often-used classic, “This is not who we are.”

These latter are, in Ingall and McCarthy’s parlance, “apology-shaped objects”, but not actually apologies. They explain this in detail with plenty of wit – and no fewer than five Bad Apology bingo cards.

Even for readers of the blog, there’s new information. I was particularly interested to learn that malpractice lawyers are likely wrong in telling doctors not to apologize because admitting fault invites a lawsuit. A 2006 Harvard hospital system report found little evidence for this contention – as long as the apologies are good ones. It’s the failure to communicate and the refusal to take responsibility that are much more anger-provoking. In other words, the problem there, as everywhere else, is *bad* apologies.

A lot of this ought to be common sense. But as Ingalls and McCarthy make plain, it may be sense but it’s not as common as any of us would like.

Power cuts

In the latest example of corporate destruction, the Guardian reports on the disturbing trend in which streaming services like Disney and Warner Bros Discovery are deleting finished, even popular, shows for financial reasons. It’s like Douglas Adams’ rock star Hotblack Desiato spending a year dead for tax reasons.

Given that consumers’ budgets are stretched so thin that many are reevaluating the streaming services they’re paying for, you would think this would be the worst possible time to delete popular entertainments. Instead, the industry seems to be possessed by a death wish in which it’s making its offerings *less* attractive. Even worse, the promise it appeared to offer showrunners was creative freedom and broad and permanent access to their work. The news that Disney+ is even canceling finished shows (Nautilus) shortly before their scheduled release in order to pay less *tax* should send a chill down every creator’s spine. No one wants to spend years of their life – for almost *any* amount of money – making things that wind up in the corporate equivalent of the warehouse at the end of Raiders of the Lost Ark.

It’s time, as the Masked Scheduler suggested recently on Mastodon, for the emergence of modern equivalents of creator-founded studios United Artists and Desilu.

***

Many of us were skeptical about Meta’s Oversight Board; it was easy to predict that Facebook would use it to avoid dealing with the PR fallout from controversial cases, but never relinquish control. And so it is proving.

This week, Meta overruled the Board’s recommendation of a six-month suspension of the Facebook account belonging to former Cambodian prime minister Hun Sen. At issue was a video of one of Hun Sen’s speeches, which everyone agreed incited violence against his opposition. Meta has kept the video up on the grounds of “newsworthiness”; Meta also declined to follow the Board’s recommendation to clarify its rules for public figures in “contexts in which citizens are under continuing threat of retaliatory violence from their governments”.

In the Platformer newsletter Casey Newton argues that the Board’s deliberative process is too slow to matter – it took eight months to decide this case, too late to save the election at stake or deter the political violence that has followed. Newton also concludes from the list of decisions that the Board is only “nibbling round the edges” of Meta’s policies.

A company with shareholders, a business model, and a king is never going to let an independent group make decisions that will profoundly shape its future. From Kate Klonick’s examination, we know the Board members are serious people prepared to think deeply about content moderation and its discontents. But they were always in a losing position. Now, even they must know that.

***

It should go without saying that anything that requires an Internet connection should be designed for connection failures, especially when the connected devices are required to operate in the physical world. The downside was made clear by a 2017 incident, when lost signal meant a Tesla-owning venture capitalist couldn’t restart his car. Or the one in 2021, when a bunch of Tesla owners found their phone app couldn’t unlock their car doors. Tesla’s solution both times was to tell car owners to make sure they always had their physical car keys. Which, fine, but then why have an app at all?

Last week, Bambu 3D printers began printing unexpectedly when they got disconnected from the cloud. The software managing the queue of printer jobs lost the ability to monitor them, causing some to be restarted multiple times. Given the heat and extruded material 3D printers generate, this is dangerous for both themselves and their surroundings.

At TechRadar, Bambu’s PR acknowledges this: “It is difficult to have a cloud service 100% reliable all the time, but we should at least have designed the system more carefully to avoid such embarrassing consequences.” As TechRadar notes, if only embarrassment were the worst risk.

So, new rule: before installation, test every new “smart” device by blocking its Internet connection to see how it behaves. Of course, companies should do this themselves, but as we’ve seen, you can’t rely on that either.

***

Finally, in “be careful what you legislate for”, Canada is discovering the downside of C-18, which became law in June and requires the biggest platforms to pay for the Canadian news content they host. Google and Meta warned all along that they would stop hosting Canadian news rather than pay for it. Experts like law professor Michael Geist predicted that the bill would merely serve to dramatically cut traffic to news sites.

On August 1, Meta began adding blocks for news links on Facebook and Instagram. A coalition of Canadian news outlets quickly asked the Competition Bureau to mount an inquiry into Meta’s actions. At TechDirt Mike Masnick notes the irony: first legacy media said Meta’s linking to news was anticompetitive; now they say not linking is anticompetitive.

However, there are worse consequences. Prime minister Justin Trudeau complains that Meta’s news block is endangering Canadians, who can’t access or share local up-to-date information about the ongoing wildfires.

In a sensible world, people wouldn’t rely on Facebook for their news, politicians would write legislation with greater understanding, and companies like Meta would wield their power responsibly. In *this* world, we have a perfect storm.

Illustrations: XKCD’s Dependency.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: Should You Believe Wikipedia?

Should You Believe Wikipedia? Online Communities and the Construction of Knowledge
By Amy S. Bruckman
Publisher: Cambridge
Print publication year: 2022
ISBN: 978-1-108-78070-4

Every Internet era has had its new-thing obsession. For a time in the mid-1990s, it was “community”. Every business, some industry thinkers insisted, would need to build a community of customers, suppliers, and partners. Many tried, and the next decade saw the proliferation of blogs, web boards, and, soon, multi-player online games. We learned that every such venture of any size attracts abuse that requires human moderators to solve. We learned that community does not scale. Then came Facebook and other modern social media, fueled by mobile phones, and the business model became data collection to support advertising.

Back at the beginning, Amy S. Bruckman, now a professor at Georgia Tech but then a student at MIT, set up the education-oriented MOOSE Crossing, in which children could collaborate on building objects as a way of learning to code. For 20 years, she has taught a course on designing communities. In Should You Believe Wikipedia?, Bruckman distills the lessons she’s learned over all that time, combining years of practical online experience with readable theoretical analysis based on sociology, psychology, and epistemology. Whether or not to trust Wikipedia is just one chapter in her study of online communities and the issues they pose.

Like pubs, cafes, and town squares, online communities are third places – that is, neutral ground where people can meet on equal terms. Many popular blogs, which tend to be personal or promotional, are clearly not neutral; nor is the X formerly known as Twitter. Third places also need to be enclosed but inviting, visible from surrounding areas, and offering affordances for activity. In that sense, two of the most successful online communities are Wikipedia and OpenStreetMap, both of which pursue a common enterprise that contributors can feel is of global value. Facebook is home to probably hundreds of thousands of communities – families, activists, support groups, and so on – but itself is too big, too diffuse, and too lacking in shared purpose to be a community. Bruckman also cites as examples of productive communities open source software projects and citizen science.

Bruckman’s book has arrived at a moment that we may someday see as a watershed. Numerous factors – Elon Musk’s takeover and remaking of Twitter, debates about regulation and antitrust, increased privacy awareness – are making many people reevaluate what they want from online social spaces. It is a moment when new experiments might thrive.

Something like that is needed, Bruckman concludes: people are not being well served by the free market’s profit motives and current business models. She would like to see more of the Internet populated by non-profits, but elides the key hard question: what are the sustainable models for supporting such endeavors? Mozilla, one of the open source software-building communities she praises, is sustained by payments from Google, making it still vulnerable to the dictates of shareholders, albeit at one remove. It remains an open question whether the Fediverse, currently chiefly represented by Mastodon, can grow and prosper in the long term under its present structure of volunteer administrators running their own servers and relying on users’ donations to pay expenses. Other established commercial community hosts, such as Reddit, where Bruckman is a moderator, have also long struggled to find financial sustainability.

Bruckman never quite answers the question in the title. It reflects the skepticism at Wikipedia’s founding that an encyclopedia edited by anyone who wanted to participate could be any good. As she explains, however, the fact that every page has its Talk page that details disputes and exposes prior versions provides transparency the search engines don’t offer. It may not be clear if we *should* believe Wikipedia, whose quality varies depending on the subject, but she does make clear why we *can* when we do.

Five seconds

Careful observers posted to Hacker News this week – and the Washington Post reported – that the X formerly known as Twitter (XFKAT?) appeared to be deliberately introducing a delay in loading links to sites the owner is known to dislike or views as competitors. These would be things like the New York Times and selected other news organizations, and rival social media and publishing services like Facebook, Instagram, Bluesky, and Substack.

The 4.8 seconds users clocked doesn’t sound like much until you remember, as the Post does, that a 2016 Google study found that 53% of mobile users will abandon a website that takes longer than three seconds to load. Not sure whether desktop users are more or less patient, but it’s generally agreed that delay is the enemy.

The mechanism by which XFKAT was able to do this is its built-in link shortener, t.co, through which it routes all the links users post. You can see this for yourself if you right-click on a posted link and copy the results. You can only find the original link by letting the t.co links resolve and copying the real link out of the browser address bar after the page has loaded.
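Resolving a shortened link programmatically works the same way the browser does: follow the redirect chain hop by hop until a response arrives with no further Location header. A minimal Python sketch (the `resolve` function and its injectable `head` callable are illustrative names, not any particular tool’s API):

```python
import urllib.error
import urllib.request


def resolve(url, max_hops=10, head=None):
    """Follow HTTP redirects to recover the real destination of a shortened link.

    `head(url)` returns the next Location header, or None when `url` is the
    final destination. The default issues real HEAD requests, suppressing
    urllib's automatic redirect-following so each hop is visible.
    """
    if head is None:
        def head(u):
            class NoRedirect(urllib.request.HTTPRedirectHandler):
                def redirect_request(self, req, fp, code, msg, headers, newurl):
                    return None  # don't auto-follow; surface the 3xx instead
            try:
                urllib.request.build_opener(NoRedirect()).open(
                    urllib.request.Request(u, method="HEAD"))
                return None  # 2xx response: no further redirect
            except urllib.error.HTTPError as e:
                return e.headers.get("Location")  # 3xx: next hop
    for _ in range(max_hops):
        nxt = head(url)
        if nxt is None:
            return url
        url = nxt
    raise RuntimeError("too many redirects")
```

For testing (or offline use) the redirect chain can be supplied as a plain dictionary, e.g. `resolve("https://t.co/abc", head=chain.get)`, which is also a reminder of the privacy point: whoever operates the hop in the middle sees every click.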

Whether or not the company was deliberately delaying these connections, the fact is that it *can* – as can Meta’s platforms and many others. This in itself is a problem; essentially it’s a failure of network neutrality. This is the principle that a telecoms company should treat all traffic equally, and it is the basis of the egalitarian nature of the Internet. Regulatory insistence on network neutrality is why you can run a voice over Internet Protocol connection over broadband supplied by a telco or telco-owned ISP even though the services are competitors. Social media platforms are not subject to these rules, but the delaying links story suggests maybe they should be once they reach a certain size.

Link shorteners have faded into the landscape these days, but they were controversial for years after the first such service – TinyURL – was launched in 2002 (per Wikipedia). Critics cited three main issues: privacy, persistence, and obscurity. The last refers to users’ inability to know where their clicks are taking them; I feel strongly about this myself. The privacy issue is that the link shorteners-in-the-middle are in a position to collect traffic data and exploit it (bad actors could also divert links from their intended destination). The ability to collect that data and chart “impact” is, of course, one reason shorteners were widely adopted by media sites of all types. The persistence issue is that intermediating links in this way creates one or more central points of failure. When the link shortener’s server goes down for any reason – failed Internet connection, technical fault, bankrupt owner company – the URL the shortener encodes becomes unreachable, even if the page itself is available as normal. You can’t go directly to the page, or even locate a cached copy at the Internet Archive, without the original URL.

Nonetheless, shortened links are still widely used, for the same reasons why they were invented. Many URLs are very long and complicated. In print publications, they are visually overwhelming, and unwieldy to copy into a web address bar; they are near-impossible to proofread in footnotes and citations. They’re even worse to read out on broadcast media. Shortened links solve all that. No longer germane is the 140-character limit Twitter had in its early years; because the URL counted toward that maximum, short was crucial. Since then, the character count has gotten bigger, and URLs aren’t included in the count any more.

If you do online research of any kind you have probably long since internalized the routine of loading the linked content and saving the actual URL rather than the shortened version. This turns out to be one of the benefits of moving to Mastodon: the link you get is the link you see.

So to network neutrality. Logically, its equivalent for social media services ought to include the principle that users can post whatever content or links they choose (law and regulation permitting), whether that’s reposted TikTok videos, a list of my IDs on other systems, or a link to a blog advocating that all social media companies be forced to become public utilities. Most have in fact operated that way until now, infected just enough with the early Internet ethos of openness. Changing that unwritten social contract is very bad news even though no one believed XFKAT’s CEO when he insisted he was a champion of free speech and called the now-his site the “town square”.

If that’s what we want social media platforms to be, someone’s going to have to force them, especially if they begin shrinking and their owners start to feel the chill wind of an existential threat. You could even – though no one is, to the best of my knowledge – make the argument that swapping in a site-created shortened URL is a violation of the spirit of data protection legislation. After all, no one posts links on a social media site with the view that their tastes in content should be collected, analyzed, and used to target ads. Librarians have long been stalwarts in resisting pressure to disclose what their patrons read and access. In the move online in general, and to corporate social media in particular, we have utterly lost sight of the principle of the right to our own thoughts.

Illustrations: The New York City public library in 2006.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

The data grab

It’s been a good week for those who like mocking flawed technology.

Numerous outlets have reported, for example, that “AI is getting dumber at math”. The source is a study conducted by researchers at Stanford and the University of California Berkeley comparing GPT-3.5’s and GPT-4’s output in March and June 2023. The researchers found that, among other things, GPT-4’s success rate at identifying prime numbers dropped from 84% to 51%. In other words, in June 2023 ChatGPT-4 did little better than chance at identifying prime numbers. That’s psychic level.
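Part of what makes the finding so striking is that primality is trivially decidable by ordinary code; the study’s questions have exact answers that a few lines of deterministic trial division settle with 100% accuracy. A minimal sketch (my own illustration, not the researchers’ test harness):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality test.

    Divides by 2, then by odd numbers up to sqrt(n); any divisor found
    means n is composite. Exact for every integer, no model required.
    """
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True
```

Against a baseline like this, a language model scoring around 51% is indistinguishable from guessing, which is the researchers’ point: the model pattern-matches text about primes rather than computing anything.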

The researchers blame “drift”, the problem that improving one part of a model may have unhelpful knock-on effects in other parts of the model. At Ars Technica, Benj Edwards is less sure, citing qualified critics who question the study’s methodology. It’s equally possible, he suggests, that as the novelty fades, people’s attempts to do real work surface problems that were there all along. With no access to the algorithm itself and limited knowledge of the training data, we can only conduct such studies by controlling inputs and observing the outputs, much like diagnosing allergies by giving a child a series of foods in turn and waiting to see which ones make them sick. Edwards advocates greater openness on the part of the companies, especially as software developers begin building products on top of their generative engines.

Unrelated, the New Zealand discount supermarket chain Pak’nSave offered an “AI” meal planner that, set loose, promptly began turning out recipes for “poison bread sandwiches”, “Oreo vegetable stir-fry”, and “aromatic water mix” – which turned out to be a recipe for highly dangerous chlorine gas.

The reason is human-computer interaction: humans, told to provide a list of available ingredients, predictably became creative. As for the computer…anyone who’s read Janelle Shane’s 2019 book, You Look Like a Thing and I Love You, or her Twitter reports on AI-generated recipes could have predicted this outcome. Computers have no real-world experience against which to judge their output!

Meanwhile, the San Francisco Chronicle reports, Waymo and Cruise driverless taxis are making trouble at an accelerating rate. The cars have gotten stuck in low-hanging wires after thunderstorms, driven through caution tape, blocked emergency vehicles and emergency responders, and behaved erratically enough to endanger cyclists, pedestrians, and other vehicles. If they were driven by humans they’d have lost their licenses by now.

In an interesting side note that hints at the cars’ potential as a surveillance network, Axios reports that in a ten-day study in May, Waymo’s driverless cars found that human drivers in San Francisco speed 33% of the time. A similar exercise in Phoenix, Arizona observed human drivers speeding 47% of the time on roads with a 35mph speed limit. These statistics of course bolster the company’s main argument for adoption: improving road safety.

The study should – but probably won’t – be taken as a warning of the potential for the cars’ data collection to become embedded in both law enforcement and their owners’ business models. The frenzy surrounding ChatGPT-* is fueling an industry-wide data grab as everyone tries to beef up their products with “AI” (see also previous such exercises with “meta”, “nano”, and “e”), consequences to be determined.

Among the newly-discovered data grabbers is Intel, whose graphics processing unit (GPU) drivers are collecting telemetry data, including how you use your computer, the kinds of websites you visit, and other data points. You can opt out, assuming you a) realize what’s happening and b) are paying attention at the right moment during installation.

Google announced recently that it would scrape everything people post online to use as training data. Again, an opt-out can be had if you have the knowledge and access to follow the 30-year-old robots.txt protocol. In practical terms, I can configure my own site, pelicancrossing.net, to block Google’s data grabber, but I can’t stop it from scraping comments I leave on other people’s blogs or anything I post on social media sites or that’s professionally published (though those sites may block Google themselves). This data repurposing feels like it ought to be illegal under data protection and copyright law.
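For site owners, that opt-out amounts to a few lines in a file at the site root. A minimal sketch – `Google-Extended` is the token Google has documented for its AI-training crawler, but treat the exact tokens and paths here as illustrative:

```
# robots.txt at the site root, e.g. https://pelicancrossing.net/robots.txt
# Block Google's AI-training crawler...
User-agent: Google-Extended
Disallow: /

# ...while leaving ordinary search indexing alone.
User-agent: Googlebot
Allow: /
```

As the column notes, this only covers pages you control; comments and posts on other people’s sites are governed by *their* robots.txt, and compliance is voluntary on the crawler’s part.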

In Australia, Gizmodo reports that the company has asked the Australian government to relax copyright laws to facilitate AI training.

Soon after Google’s announcement, the law firm Clarkson filed a class action lawsuit against Google to join its action against OpenAI. The suit accuses Google of “stealing” copyrighted works and personal data.

“Google does not own the Internet,” Clarkson wrote in its press release. Will you tell it, or shall I?

Whatever has been going on until now with data slurping in the interests of bombarding us with microtargeted ads is small stuff compared to the accelerating acquisition for the purpose of feeding AI models. Arguably, AI could be a public good in the long term as it improves, and therefore allowing these companies to access all available data for training is in the public interest. But if that’s true, then the *public* should own the models, not the companies. Why should we consent to the use of our data so they can sell it back to us and keep the proceeds for their shareholders?

It’s all yet another example of why we should pay attention to the harms that are clear and present, not the theoretical harm that someday AI will be general enough to pose an existential threat.

Illustrations: IBM Watson, Jeopardy champion.

Wendy M. Grossman is the 2013 winner of the Enigma Award and contributing editor for the Plutopia News Network podcast. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.

Review: Making a Metaverse That Matters

Making a Metaverse That Matters: From Snow Crash and Second Life to A Virtual World Worth Fighting For
By Wagner James Au
Publisher: Wiley
ISBN: 978-1-394-15581-1

A couple of years ago, when “the metaverse” was the hype-of-the-month, I kept wondering why people didn’t just join 20-year-old Second Life, or a game world. Even then the idea wasn’t new: the first graphical virtual world, Habitat, launched in 1988. And even *that* was preceded by text-based MUDs that despite their limitations afforded their users the chance to explore a virtual world and experiment with personal identity.

I never really took to Second Life. The initial steps – download the software, install it, choose a user name and password, and then an avatar – aren’t difficult. The trouble begins after that: what do I do now? Fly to an island, and then…what?

I *did*, once, have a commission to interview a technology company executive, who dressed his avatar in a suit and tie to give a lecture in a virtual auditorium and then joined me in the now-empty auditorium to talk, now changed into jeans, T-shirt, and baseball cap.

In his new book, Making a Metaverse That Matters, the freelance journalist Wagner James Au argues that this sort of image consciousness derives from allowing humanoid avatars; they lead us to bring the constraints of our human societies into the virtual world, where instead we could free our selves. Humanoid form leads people to observe the personal space common in their culture, apply existing prejudices, and so on. Au favors blocking markers such as gender and skin color that are the subject of prejudice offline. I’m not convinced this will make much difference; even on text-based systems with numbers instead of names, disguising your real-life physical characteristics takes work.

Au spent Second Life’s heyday as its embedded reporter; his news and cultural reports eventually became his 2008 book, The Making of Second Life: Notes from the New World. Part of his new book reassesses that work and reports regrets. He wishes he had been a stronger critic back then instead of being swayed by his own love for the service. Second Life’s biggest mistake, he thinks, was persistently refusing to call itself a game or add game features. The result was a dedicated user base that stubbornly failed to grow beyond about 600,000 as most people joined and reacted the way I did: what now? But some of those 600,000 benefited handsomely, as Au documents: some remade their lives, and a few continue to operate million-dollar businesses built inside the service.

Au returns repeatedly to Snow Crash author Neal Stephenson‘s original conception of the metaverse, a single pervasive platform. The metaverse of Au’s dreams has community as its core value, is accessible to all, is a game (because non-game virtual worlds have generally failed), and is collaborative for creators. In other words, pretty much the opposite of anything Meta is likely to build.

The safe place

For a long time, there have been fears that technical decisions – new domain names, cooption of open standards or software, laws mandating data localization – would splinter the Internet. “Balkanize” was heard a lot.

A panel at the UK Internet Governance Forum a couple of weeks ago focused on this exact topic, and was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter some site I can’t reach: email goes into someone’s spam folder, the site or its content is off-limits because it’s been geofenced to conform with copyright or data protection laws, or the site mysteriously doesn’t load, with no explanation. The most likely explanation for the latter is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using, but they don’t actually *say* that.

The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.

Here in the UK, this sense of fragmentation may be about to get a lot worse, as the long-heralded Online Safety bill – written and expanded until it’s become a “Frankenstein bill”, as Mark Scott and Annabelle Dickson report at Politico – hurtles toward passage. This week saw fruitless debates on amendments in the House of Lords, and it will presumably be back in the Commons shortly thereafter, where it could be passed into law by this fall.

A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; so often such threats are idle. But in this case, I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that including it in the requirement to age-verify its users will force it to withdraw rather than violate its principles about collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.

Wikipedia, as Ian McRae, the director of market intelligence for prospective online safety regulator Ofcom, showed in a presentation at UKIGF, would be just one of the estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.

In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In fact: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.

Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.

And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.

The UK is not the only one pursuing this path. Age verification in particular is catching on. The US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it; Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of the UK bill’s provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.

This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.

This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.

The horns of a dilemma

It has always been possible to conceive a future for Mastodon and the Fediverse that goes like this: incomers join the biggest servers (“instances”). The growth of those instances, if they can afford it, accelerates. When the sysadmins of smaller instances burn out and withdraw, their users also move to the largest instances. Eventually, the Fediverse landscape is dominated by a handful of very large instances (who enshittify in the traditional way) with a long tail of small and smaller ones. The very large ones begin setting rules – mostly for good reasons like combating abuse, improving security, and offering new features – that the very small ones struggle to keep up with. Eventually, it becomes too hard for most small instances to function.

This is the history of email. In 2003, when I set up my own email server at home, almost every techie had one. By this year, when I decommissioned it in favor of hosted email, almost everyone had long since moved to Gmail or Hotmail. It’s still possible to run an independent server, but the world is increasingly hostile to them.

Another possible Fediverse future: the cultural norms that Mastodon and other users have painstakingly developed over time become swamped by a sudden influx of huge numbers of newcomers when a very large instance joins the federation. The newcomers, who know nothing of the communities they’re joining, overwhelm their history and culture. The newcomers are despised and mocked – but meanwhile, much of the previous organically grown culture is lost, and people wanting intelligent conversation leave to find it elsewhere.

This is the history of Usenet, which in 1994 struggled to absorb 1 million AOLers arriving via a new gateway and software whose design reflected AOL’s internal design rather than Usenet’s history and culture. The result was to greatly exacerbate Usenet’s existing problems of abuse.

A third possible Fediverse future: someone figures out how to make money out of it. Large and small instances continue to exist, but many become commercial enterprises, and small instances increasingly rely on large instances to provide services the small instances need to stay functional. While both profit from that division of labor, the difficulty of discovery means small servers stay small, and the large servers become increasingly monopolistic, exploitative, and unpleasant to use. This is the history of the web, with a few notable exceptions such as Wikipedia and the Internet Archive.

A fourth possible future: the Fediverse remains outside the mainstream, and admins continue to depend on donations to maintain their servers. Over time, the landscape of servers will shift as some burn out or run out of money and are replaced. This is roughly the history of IRC, which continues to serve its niche. Many current Mastodonians would be happy with this; as long as there’s no corporate owner, no one can force anyone out of business for being insufficiently profitable.

These forking futures are suddenly topical as Mastodon administrators consider how to respond to the news that Facebook will launch a new app that will interoperate with Mastodon and any other network that uses the ActivityPub protocol. Early screenshots suggest a clone of Twitter, Meta’s stated target, and reports say that Facebook is talking to celebrities like Oprah Winfrey and the Dalai Lama as potential users. The plan is reportedly that users will access the new service via their Instagram IDs and passwords. Top-down and celebrity-driven is the opposite of the Fediverse.
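Interoperating “via ActivityPub” concretely means servers exchanging JSON activities with each other. Below is a minimal sketch of the sort of `Create`/`Note` pair the W3C ActivityStreams vocabulary defines; the actor and object URLs are invented for illustration, not real endpoints.

```python
import json

# A minimal ActivityStreams "Create" activity wrapping a "Note" -- roughly
# the shape of object Mastodon servers (and, prospectively, Meta's app)
# would exchange when they federate. All URLs here are illustrative.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/users/alice/statuses/1/activity",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "id": "https://example.social/users/alice/statuses/1",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, Fediverse!",
    },
}

# In practice a server delivers this by POSTing the JSON to each
# recipient server's inbox endpoint.
print(json.dumps(activity, indent=2))
```

The openness of this format is precisely why federation is a policy decision rather than a technical one: any server that speaks the protocol, Meta included, can participate unless admins refuse to peer with it.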

It should not be much comfort to anyone that the competitor the company wants to kill with this initiative is Twitter, not Mastodon, because either way Meta doesn’t care about Mastodon and its culture. Mastodon is a rounding error even for Instagram alone. Twitter is also comparatively small (and, like Reddit, too text-based to grow much further), but Meta sees in it the opportunity to capture its influencers and build profits around them.

The Fediverse is a democracy in the sense that email and Usenet were; admins get to decide their server’s policy, and users can only accept or reject by moving their account (which generally loses their history). For admins, how to handle Meta is not an easy choice. Meta has approached the admins of some of the larger Mastodon instances for discussions, for which they must either sign an NDA or give up the chance to influence developments. That decision is for the largest few; but potentially every Mastodon instance operator will have to decide the bigger question: do they federate with Meta or not? Refusal means their users can’t access Meta’s wider world, which will inevitably include many of their friends; acceptance means change and loss of control. As I’ve said here before, something that is “open” only to your concept of “good people” isn’t open at all; it’s closed.

At Chronicles of the Instantly Curious, Carey Lening deplores calls to shun Meta as elitist; the AOL comparison draws itself. Even so, the more imminent bad future for Mastodon is this fork that could split the Fediverse into two factions. Of course the point of being decentralized is to allow more choice over who you socially network with. But until now, none of those choices took on the religious overtones associated with the most heated cyberworld disputes. Fasten your seatbelts…

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.