Government identification as a service

This week, the clock started ticking on the UK’s Online Safety Act. Ofcom, the regulator charged with enforcing it, published its codes of practice and guidance, which come into force on March 17, 2025. At that point, websites that fall into scope – by Ofcom’s 2023 estimate, 150,000 of them – must comply with requirements to conduct risk assessments, preemptively block child sexual abuse material, register a responsible person (who faces legal and financial liability), and much more.

Almost immediately, the first casualty made itself known: Dee Kitchen announced the closure of her site, which supports hundreds of interest-based forums. Under Ofcom’s risk assessment guidance (PDF), the personal liability would be overwhelming even if the forums produced enough in donations to cover the costs of compliance.

Russ Garrett has a summary for small sites. UK-linked blogs – even those with barely any readers – could certainly fit the definition per Ofcom’s checker tool, if users can comment on each other’s posts. Common sense says that’s ridiculous in many cases…but as Kitchen says, all it takes to ruin a blogger’s life is a malicious complainant wielding the OSA as their weapon.

Kitchen will certainly not be alone in concluding the requirements are prohibitively risky for web forums and bulletin boards that are run by volunteers and have minimal funding. Yet such sites are the Internet’s healthy social ecology, free of the algorithms and business models that do most to create the harms the Act is meant to address. Promising Trouble and Power to Change are collaborating on a community of practice, and have asked Ofcom for a briefing on compliance for volunteers and small sites.

Garrett’s summary also points out that Ofcom’s rules leave it wide open for sites to censor *more* than is required, and many will do exactly that to minimize their risk. A side effect, as Garrett writes, will be to further centralize the Net, as moving communities to larger providers such as Discord will shift the liability onto *them*. This is what happens when rules controlling speech are written from the single lens of preventing harm rather than starting from a base of human rights.

More guidance to come from Ofcom next month. We haven’t even started on implementing age verification yet.

***

On Monday, I learned a new term I wish I hadn’t: “government identity as a service”. GIAAS?

The speaker was human rights campaigner Edward Hasbrouck, in a talk on identification on Dave Farber’s and Dan Gillmor’s weekly CCRC/IP-Asia Zoom call.

Most people trace the accelerating rise of demands for identification in countries like the US and UK to 9/11. Based on that, there are now people old enough to drink in a US state who are not aware it was ever possible to just walk up and pay to fly, get a hotel room, or enter an office building. As Hasbrouck writes in a US election day posting, the rise in government demands for ID has been powered by the simultaneous rise of corporate tracking for commercial purposes. He calls it a “malign convergence of interest”.

It has long been obvious that anything companies collect can be subpoenaed by governments. Hasbrouck’s point, however, is that identification enables control as well as surveillance; it brings watchlists, blocklists, and automated bars to freedom of action – it makes us decision subjects as Gavin Freeguard said at the recent Foundation for Information Policy Research event.

Hasbrouck pinpoints three components that each present a vulnerability to control: identification, logging, and decision making. As an example, consider the UK’s in-progress eVisa system, in which the government confirms an individual’s visa status online in real time with no option for physical documentation. This gives the government enormous power to stop individuals from doing vital but mundane things like renting a home, boarding an aircraft, or getting a job. At its heart is identification – and a law that delegates border enforcement to myriad civil intermediaries and normalizes these checks.

Many in the UK were outraged by proposals to give the Department for Work and Pensions the power to examine people’s bank accounts. In the US, Hasbrouck points to a recent report from the House Judiciary Committee on the Weaponization of the Federal Government that documents the Treasury Department’s Financial Crimes Enforcement Network’s collaboration with the FBI to push banks to submit reports of suspicious activity while it trawled for possible suspects after the January 6 insurrection. Yes, the insurrectionists should be caught and punished; but any weapon turned against people we don’t like can also be turned against us. Did anyone vote to let the FBI conduct financial surveillance by the million?

Now imagine that companies outsource ID checks to the government and offload the risk of running their own. That is how the no-fly list works. That’s how airlines operate *now*. GIAAS.

Then add the passive identification that systems like facial recognition are spreading. You can no longer reliably know whether you have been identified and logged, who gets that information, or what hidden decision they may make based on it. Few of us are sure of our rights in any situation, and few of us even ask why. In his slides (PDF), Hasbrouck offers a list of ways to fight back. He has hope.

Illustrations: Edward Hasbrouck at CPDP in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Loose ends

Privacy technologies typically fail for one of two reasons: 1) they’re too complicated and/or expensive to find widespread adoption among users; 2) sites and services ignore, undermine, or bypass them in order to preserve their business models. In the first category are numerous encryption-related attempts to secure communications; the resulting products were usually too technically difficult for most users, and after repeated failures in the marketplace they never found mass adoption. In the end, encrypted messaging didn’t really take off until WhatsApp built it into its service.

This week saw a category two failure: Mozilla announced it is removing the Do Not Track option from Firefox’s privacy settings. DNT is simple enough to implement if you can stand to check and change settings, but it falls on the wrong side of modern business models and, other than in California, the US has no supporting legislation to make it enforceable. Granted, Firefox is a minority browser now, but the moment feels significant for this 13-year-old technology.

As Kevin Purdy explains at Ars Technica, DNT began as an FTC proposal, based on work by Christopher Soghoian and Sid Stamm, that aimed to create a mechanism for the web similar to the “Do Not Call” list for telephone networks.

The world in which DNT seemed a hopeful possibility seems almost quaint now: then, one could still imagine that websites might voluntarily respect the signal web browsers sent indicating users’ preferences. Do Not Call, by contrast, was established by US federal legislation. Despite various efforts, the US failed to pass legislation underpinning DNT, and it never became a web standard. The closest it has come to the latter is Section 2.12 of the W3C’s Ethical Web Principles, which says, “People must be able to change web pages according to their needs.” Can I say I *need* to not be tracked?
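For the record, the mechanism itself could hardly be simpler. A minimal sketch, using Python’s standard library purely for illustration (the URL is a placeholder), of the single header a DNT-enabled browser attaches to every request:

```python
import urllib.request

# The entire Do Not Track mechanism: one header, "DNT: 1", sent with
# each request. Whether the receiving site honors it is voluntary,
# which is exactly why the standard failed without legislation.
req = urllib.request.Request(
    "https://example.com/",       # placeholder URL
    headers={"DNT": "1"},         # "1" = user declines tracking
)

# urllib stores header names in capitalized form internally.
assert req.get_header("Dnt") == "1"
```

Its successor, Global Privacy Control, works the same way – a `Sec-GPC: 1` header – but, in California at least, has legislation behind it.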

Even at the time it seemed doubtful that web companies would comply. But DNT also suffered from unfortunate timing: it arrived just as the twin onslaught of smartphones and social media was changing the ethos that built the open web. Since then, as Cory Doctorow wrote earlier this year, the incentives have aligned to push web browsers to become faithless user agents, and conventions mean less and less.

Ultimately, DNT only ever worked insofar as users could trust websites to honor their preference. As it’s become clear they can’t, ad blockers have proliferated, depriving sites of ad revenue they need to survive. Had DNT been successful, perhaps we’d have all been better off.

***

Also on the way out this week are Cruise’s San Francisco robotaxis. My last visit to San Francisco, about a year ago, was the first time I saw these in person. Most of the ones I saw were empty Waymos, perhaps in transit to a passenger, perhaps just pointlessly clogging the streets. Around then, a Cruise robotaxi ran over a pedestrian who’d been hit by another car and then dragged her 20 feet. California promptly suspended Cruise’s license. Technology critic Paris Marx thought the incident would likely be Cruise’s “death knell”. And so it’s proving. The announcement from GM, which acquired Cruise in 2016 for $1 billion, leaves just Waymo standing in the US self-driving taxi business, with Tesla saying it will enter the market late next year.

I always associate robotaxis with Vernor Vinge’s 2006 novel Rainbows End. In it, Vinge imagined a future in which robotaxis arrived within minutes of being hailed and replaced both public transport and private car ownership. By 2012 or so, his fictional imagining had become real-life projection, and many were predicting that our streets would imminently be filled with self-driving cars, taxis or not. In 2017, the conversation was all about what ethics to program into them and reclaiming urban space. Now, that imagined future seems to be receding, as skeptics predicted it would.

***

American journalism has long operated under the presumption that the stories it produces should be “neutral”. Now, at the LA Times, CEO Patrick Soon-Shiong thinks he can enforce this neutrality by running an AI-based “bias meter” over the paper’s stories. If you remember, in the late stages of the US presidential election, Soon-Shiong blocked the paper from endorsing Kamala Harris. Reports say that the bias meter, due out next month, is meant to identify any bias the story’s source has and then deliver “both sides” of that story.

This is absurd. Few news stories have just two competing sides. A biased source can’t be countered by rewriting the story unless you include more sources and points of view, which means additional research. Most important, AI can’t think.

But readers can. And so what this story says is that Soon-Shiong doesn’t trust either the journalists who work for him or the paper’s readers to draw the conclusions he wants. If he knew more about journalism, he’d know that readers generally don’t adopt opinions just because someone tells them to. The far greater power, I recall reading years ago, lies in determining what readers *think about* by deciding what topics are important enough to cover. There’s bias there, too, but Soon-Shiong’s meter won’t show it.

Illustrations: Dominic Wilcox’s concept driverless sleeper car, 2014.


Playing monopoly

If you were going to carve up today’s technology giants to create a more competitive landscape, how would you do it? This time the game’s for real. In August, US District Judge Amit Mehta ruled that, “Google is a monopolist and has acted as one to maintain its monopoly.” A few weeks ago, the Department of Justice filed preliminary proposals (PDF) for remedies. These may change before the parties reassemble in court next April.

Antitrust law traditionally aimed to ensure competition in order both to create a healthy business ecosystem and to better serve consumers. “Free” – that is, pay-with-data – online services have been resistant to antitrust analysis shaped by decades of judging success by lowered prices.

It’s always tempting to think of breaking monopolists up into business units. For example, a key moment in Meta’s march to huge was its purchase of Instagram (2012) and WhatsApp (2014), turning baby competitors into giant subsidiaries. In the EU, that permission was based on a promise, which Meta later broke, not to merge the three companies’ databases. Separating them back out again to create three giant privacy-invading behemoths in place of one is more like the sorcerer’s apprentice than a win.

In the late 1990s case against Microsoft, which ended in settlement, many speculated about breaking it up into Baby Bills. The key question: create clones, or separate Windows from the office software?

In 2013, at ComputerWorld, Gregg Keizer asked experts to imagine the post-Microsoft-breakup world. Maybe the office software company would have ported its products onto the iPad. Maybe the clones would eventually have diverged and one would have dominated search. Keizer’s experts generally agreed, though, that the antitrust suit itself had its effects, slowing the company’s forward progress by making it fear provoking further suits, like IBM before it.

In Google’s case, the key turning point was likely the 2007-2008 acquisition of online advertising pioneer DoubleClick. Google was then ten years old and had been a public company for almost four years. At its IPO Wall Street pundits were dismissive, saying it had no customer lock-in and no business model.

Reading Google’s 2008 annual report is an exercise in nostalgia. Amid an explanation of contextual advertising, Google says it has never spent much on marketing because the quality of its products generated word of mouth momentum worldwide. This was all true – then.

At the time, privacy advocates opposed the DoubleClick merger. Both FTC and EU regulators raised concerns, but let it go ahead to become the heart of the advertising business Susan Wojcicki and Sheryl Sandberg built for Google. Despite growing revenues from its cloud services business, most of Google’s revenues still come from advertising.

Since then, Mehta ruled, Google cemented its dominance by paying companies like Apple, Samsung, and Verizon to make its search engine the default on the devices they make and/or sell. Further, Google’s dominance – 90% of search – allows it to charge premium rates for search ads, which in turn enhances its financial advantage. OK, one of those complaining competitors is Microsoft, but others are relative minnows like 15-year-old DuckDuckGo, which competes on privacy, buys TV ads, and hasn’t cracked 1% of the search market. Even Microsoft’s Bing, at number two, has less than 4%. Google can insist that it’s just that good, but complaints that its search results are degrading are everywhere.

Three aspects of the DoJ’s proposals seized the most attention: first, forcing Google to divest itself of the Chrome browser; second, if that’s not enough, divesting the Android mobile operating system; and third, blocking Google from paying other companies to make its search engine the default. The latter risks crippling Mozilla and Firefox, and would dent Apple’s revenues, but not really harm Google. Saving $26.3 billion (2021 number) can’t be *all* bad.

At The Verge, Lauren Feiner summarizes the DoJ’s proposals. At the Guardian, Dan Milmo notes that the DoJ also wants Google to be barred from buying or investing in search rivals, query-based AI, or adtech – no more DoubleClicks.

At Google’s blog, chief legal officer Kent Walker calls the proposals “a radical interventionist agenda”. He adds that it would chill Google’s investment in AI like this is a bad thing, when – hello! – a goal is ensuring a competitive market in future technologies. (It could even be a good thing generally.)

Finally, Walker claims divesting Chrome and/or Android would endanger users’ security and privacy and frets that it would expose Americans’ personal search queries to “unknown foreign and domestic companies”. Adapting a line from the 1980 movie Hopscotch, “You mean, Google’s methods of tracking are more humane than the others?” While relaying DuckDuckGo’s senior vice-president’s similar reaction, Ars Technica’s Ashley Belanger dubs the proposals “Google’s nightmare”.

At Techdirt, Mike Masnick favors DuckDuckGo’s idea of forcing Google to provide access to its search results via an API so competitors can build services on top, as his company does with Bing. Masnick wants users to become custodians and exploiters of their own search histories. Finally, at Pluralistic, Cory Doctorow likes spinning out – not selling – Chrome. End adtech surveillance, he writes, don’t democratize it.

It’s too early to know what the DoJ will finally recommend. If nothing is done, however, Google will be too rich to fear future lawsuits.

Illustrations: Mickey Mouse as the sorcerer’s apprentice in Fantasia (1940).


Return of the Four Horsemen

The themes at this week’s Scrambling for Safety, hosted by the Foundation for Information Policy Research, are topical but hardly new since the original 1997 event: chat control, the Online Safety Act, and AI in government decision making.

The EU’s chat control proposal would require platforms served with a detection order to scan people’s phones for both new and previously known child sexual abuse material, using client-side scanning. Robin Wilton prefers to call this “preemptive monitoring” to clarify that it’s an attack.

Yet it’s not fit even for its stated purpose, as Claudia Peersman showed, based on research conducted at REPHRAIN. They set out to develop a human-centric evaluation framework for the AI tools needed at the scale chat control would require. Their main conclusion: AI tools are not ready to be deployed on end-to-end-encrypted private communications. This was also Ross Anderson’s argument in his 2022 paper on chat control (PDF) showing why it won’t meet the stated goals. Peersman also noted an important oversight: none of the stakeholder groups consulted in developing these tools include the children they’re supposed to protect.

This led Jen Persson to ask: “What are we doing to young people?” Children may not understand encryption, she said, but they do know what privacy means to them, as numerous researchers have found. If violating children’s right to privacy by dismantling encryption means ignoring the UN Convention on the Rights of the Child, “What world are we leaving for them? How do we deal with a lack of privacy in trusted relationships?”

All this led Wilton to comment that if the technology doesn’t work, that’s hard evidence that it is neither “necessary” nor “proportionate”, as human rights law demands. Yet, Persson pointed out, legislators keep passing laws that technologists insist are unworkable. Studies in both France and Australia have found that there is no viable privacy-preserving age verification technology – but the UK’s Online Safety Act (2023) still requires it.

In both examples – and in introducing AI into government decision making – a key element is false positives, which swamp human adjudicators in any large-scale automated system. In outlining the practicalities of the Online Safety Act, Graham Smith cited the recent case of Marieha Hussein, who carried a placard at a pro-Palestinian protest that depicted former prime minister Rishi Sunak and former home secretary Suella Braverman as coconuts. After two days of evidence, the judge concluded the placard was (allowed) political satire rather than (criminal) racial abuse. What automated system can understand that the same image means different things in different contexts? What human moderator has two days? Platforms will simply remove content that would never have led to a conviction in court.

Or, Monica Horten asked, how does a platform identify the new offense of coercive control?

Lisa Sugiura, who campaigns to end violence against women and girls, had already noted that the same apps parents install so they can monitor their children (and are reluctant to give up later) are openly advertised with slogans like “Use this to check up on your cheating wife”. (See also Cindy Southworth, 2010, on stalker apps.) The dots connect into reports Persson heard at last week’s Safer Internet Forum that young women find it hard to refuse when potential partners want parental-style monitoring rights and then find it even harder to extricate themselves from abusive situations.

Design teams don’t count the cost of this sort of collateral damage, just as their companies have little liability for the human cost of false positives, and the narrow lens of child safety also ignores these wider costs. Yet they can be staggering: the 1990s US law requiring ISPs to facilitate wiretapping, CALEA, created the vulnerability that enabled widescale Chinese spying in 2024.

Wilton called laws that essentially treat all of us as suspects “a rule to make good people behave well, instead of preventing bad people from behaving badly”. Big organized crime cases like Silk Road, Encrochat, and Sky ECC relied on infiltration, not breaking encryption. Once upon a time, veterans know, there were four horsemen always cited by proponents of such laws: organized crime, drug dealers, terrorists, and child abusers. We hear little about the first three these days.

All of this will take new forms as the new government adopts AI in decision making with the same old hopes: increased efficiency, lowered costs. Government is not learning from the previous waves of technoutopianism, which brought us things like the Post Office Horizon scandal, said Gavin Freeguard. Under data protection law we were “data subjects”; now we are becoming “decision subjects” whose voices are not being heard.

There is some hope: Swee Leng Harris sees improvements in the reissued data bill, though she stresses that it’s important to remind people that the “cloud” is really material data centers that consume energy (and use water) at staggering rates (see also Kate Crawford’s book, Atlas of AI). It’s no help that UK ministers and civil servants move on to other jobs at pace, ensuring there is no accountability. As Sam Smith said, computers have made it possible to do things faster – but also to go wrong faster at a much larger scale.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, the first children and online pornography scare, based on a fraudulent study.


Blue

The inxodus onto Bluesky noted here last week continues apace: the site’s added a million users a day for more than a week, gradually slowing down from 12 new users a second, per the live counter.

These are not lurkers. Suddenly, the site feels like Twitter circa 2009/2010, when your algorithm-free feed was filled with interesting people sharing ideas, there were no ads, and abuse was in its infancy. People missing in action for the last year or two are popping up; others I’ve wished would move off exTwitter so I could stop following them there have suddenly joined. Mastodon is also seeing an uptick, and (I hear) Threads continues to add users without, for me, adding interest to match…. I doubt this diaspora is all “liberals”, as some media have it – or if they are, it won’t be long before politicians and celebrities note the action is elsewhere and rush to stay relevant.

It takes a long time for a social medium to die if it isn’t killed by a corporation. Even after this week’s bonanza, Bluesky’s entire user base fits inside 5% of exTwitter, which still has around 500 million users as of September, about half of them active daily. What matters most are *posters*, who are about 10% or less of any social site’s user base. When they leave, engagement plummets, as shown in a 2017 paper in Nature.

An example in action: at Statnews, Katie Palmer reports that the science and medical community is adopting Bluesky.

I have to admit to some frustration over this: why not Mastodon? As retro-fun as this week on Bluesky has been, the problem noted here a few weeks ago of Bluesky’s venture capital funding remains. Yes, the company is incorporated as a public benefit company – but venture capitalists want exit strategies and return on investment. That tension looms.

Mastodon is a loose collection of servers that all run the same software, which in turn is written to the open protocol Activity Pub. Gergely Orosz has deep-dive looks at Bluesky’s development and culture; the goal was to write a new open protocol, AT, that would allow Bluesky, similarly, to federate with others. There is already a third-party bit of software, Bridgy, that provides interoperability among Bluesky, any system based on Activity Pub (“the Fediverse”, of which Mastodon is a subset), and the open web (such as blogs). For the moment, though, Bluesky remains the only site running its AT protocol, so the more users Bluesky adds, the more it feels like a platform rather than a protocol. And platforms can change according to the whims of their owners – which is exactly what those leaving exTwitter are escaping. So: why not Mastodon, which doesn’t have that problem?
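For the curious, the plumbing behind those cross-server identities is simple enough to sketch. A Mastodon-style handle such as user@server is resolved through the standard WebFinger protocol (RFC 7033) to the account’s profile and ActivityPub endpoints. A hypothetical Python sketch of constructing that lookup (the handle is invented for illustration):

```python
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a Mastodon-style handle.

    A handle looks like user@server; per WebFinger (RFC 7033), the
    server answers at /.well-known/webfinger with a JSON document
    linking to the account's profile and ActivityPub endpoints.
    """
    user, _, server = handle.strip("@").partition("@")
    resource = quote(f"acct:{user}@{server}")
    return f"https://{server}/.well-known/webfinger?resource={resource}"

# An invented handle, for illustration only:
print(webfinger_url("@alice@example.social"))
# → https://example.social/.well-known/webfinger?resource=acct%3Aalice%40example.social
```

Fetching that URL is how one server discovers another’s users; Bluesky’s AT protocol tackles the same identity problem differently, with portable decentralized identifiers, which is part of what the platform-versus-protocol debate is about.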

In an exchange on Bluesky, Palmer said that those who mentioned it said they found Mastodon “too difficult to figure out”.

It can’t be the thing itself; typing and sending varies little. The problem has to be the initial uncertainty about choosing a server. What you really want is for institutions to set up their own, and then you sign up there. For most people that’s far too much heavy lifting. Still, this is what the BBC and the German government have done, and it has a significant advantage in that posting from an address on that server automatically verifies the poster as an authentic staffer. NPR simply found a server and opened an account, like I did when I joined Mastodon in 2019.

All that said, how Mastodon administrators will cope with increasing usage and resulting costs also remains an open question as discussed here last year.

So: some advice as you settle into your new online home:

– Plan for the site’s eventual demise. “On the Internet your home will always leave you” (I have lost the source of this quote). Every site, no matter how big and fast-growing it is now, or how much you all love it…assume that at some point in the future it will either die of outmoded business model (AOL forums); get bought and closed down (Television without Pity, CompuServe, Geocities); become intolerable because of cultural change (exTwitter); or be abandoned because the owner loses interest (countless blogs and comment boards). Plan for that day. Collect alternative means of contacting the people you come to know and value. Build multiple connections.

– Watch the data you’re giving the site. No one in 2007, when I joined Twitter, imagined their thousands of tweets would become fodder for a large language model to benefit one of the world’s richest multi-billionaires.

– If you are (re)building an online community for an organization, own that community. Use social media, by all means, but use it to encourage people to visit the organization’s website, or join its fully-controlled mailing list or web board. Otherwise, one day, when things change, you will have to start over from scratch, and may not even know who your members are or how to reach them.

– Don’t worry too much about the “filter bubble”, as John Elledge writes. Studies generally agree social media users encounter more, and more varied, sources of news than others. As he says, only journalists have to read widely among people whose views they find intolerable (see also the late, great Molly Ivins).

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).


What’s next

“It’s like your manifesto promises,” Bernard Woolley (Derek Fowlds) tells eponymous minister Jim Hacker (Paul Eddington) in Antony Jay’s and Jonathan Lynn’s Yes, Minister. “People *understand*.” In other words, people know your election promises aren’t real.

The current US president-elect is impulsive and chaotic, and there will be resistance. So it’s reasonable to assume that at least some of his pre-election rhetoric will remain words and not deeds. There is, however, no telling which parts. And: the chaos is the point.

At Ars Technica, Ashley Belanger considers the likely impact of the threatened 60% tariffs on Chinese goods and 20% on goods from everywhere else: laptop prices could double, games consoles could go up 40%, and smartphones could rise 26%. Friends want to stockpile coffee, tea, and chocolate.

Also at Ars Technica, Benj Edwards predicts that the new administration will quickly reverse Joe Biden’s executive order regulating AI development.

At his BIG Substack, Matt Stoller predicts a wave of mergers following three years of restrictions. At TechDirt, Karl Bode agrees, with special emphasis on media companies and an order of enshittification on the side. At Hollywood Reporter, similarly, Alex Weprin reports that large broadcast station owners are eagerly eying up local stations, and David Zaslav, CEO of merger monster Warner Brothers Discovery, tells Georg Szalai that more consolidation would provide “real positive impact”. (As if.)

Many predict that current Federal Communications Commissioner Brendan Carr will be promoted to FCC chair. Carr set out his agenda in his chapter of Project 2025, as the Benton Institute for Broadband and Society reports. His policies, Jon Brodkin writes at Ars Technica, include reforming Section 230 of the Communications Decency Act and dropping consumer protection initiatives. John Hendel warned in October at Politico that the new FCC chair could also channel millions of dollars to Elon Musk for his Starlink satellite Internet service, a possibility the FCC turned down in 2023.

Also on Carr’s list is punishing critical news organizations. Donald Trump’s lawyers began before the election with a series of complaints, as Lachlan Cartwright writes at Columbia Journalism Review. The targets: CBS News for 60 Minutes, the New York Times, Penguin Random House, Saturday Night Live, the Washington Post, and the Daily Beast.

Those of us outside the US will be relying on the EU to stand up to parts of this through the AI Act, Digital Markets Act, Digital Services Act, and GDPR. Enforcement will be crucial. The US administration may well resist those efforts. The UK will have to pick a side.

***

It’s now two years since Elon Musk was forced to honor his whim of buying Twitter, and much of what he and others said would happen…hasn’t. Many predicted system collapse or a major hack. Instead, despite mass departures for other sites, the hollowed-out site has survived technically while degrading in every other way that matters.

Other than rebranding to “X”, Musk has failed to deliver many of the things he was eagerly talking about when he took over. A helpful site chronicles these: a payments system, a content moderation council, a billion more users. X was going to be the “everything app”. Nope.

This week, the aftermath of the US election and new terms of service making user data fodder for AI training have sparked a new flood of departures. This time round there’s consensus: they’re going to Bluesky.

It’s less clear what’s happening with the advertisers who supply the platform’s revenues, which the now-private company no longer has to disclose. Since Musk’s takeover, reports have consistently said advertisers are leaving. Now, the Financial Times reports (unpaywalled, Ars Technica) they are plotting their return, seeking to curry favor given Musk’s influence within the new US administration – and perhaps escaping the lawsuit he filed against them in August. Even so, it will take a lot to rebuild. The platform’s valuation is currently estimated at $10 billion, down from the $44 billion Musk paid.

This slash-and-burn approach is the one Musk wants to take to the Department of Government Efficiency (DOGE, as in Dogecoin; groan). Musk’s list of desired qualities for DOGE volunteers – no pay, long hours, “super” high IQ – reminds me of Dominic Cummings in January 2020, when he was Boris Johnson’s most-favored adviser and sought super-talented weirdos to remake the UK government. Cummings was gone by November.

***

It says something about the madness of the week that the sanest development appears to be that The Onion has bought Infowars, the conspiracy theory media operation Alex Jones used to promote, alongside vitamins and supplements, many conspiracy theories, including the utterly false claim that the Sandy Hook school shootings were a hoax. The sale was part of a bankruptcy auction held to raise the funds Jones owes to the families of the slaughtered Sandy Hook children after losing a $1.4 billion defamation case to them in court. Per the New York Times, the purchase was sanctioned by the Sandy Hook families. The Onion will relaunch the site in its own style with funding from Everytown for Gun Safety. There may not be a god, but there is an onion.

Illustrations: The front page of The Onion, showing the news about its InfoWars purchase.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Digital distrust

On Tuesday, at the UK Internet Governance Forum, a questioner asked this: “Why should I trust any technology the government deploys?”

She had come armed with a personal but generalizable anecdote. Since renewing her passport in 2017, at every UK airport the electronic gates routinely send her for rechecking to the human-staffed desk, even though the same passport works perfectly well in electronic gates at airports in other countries. A New Scientist article by Adam Vaughan that I can’t locate eventually explained: the Home Office had deployed the system knowing it wouldn’t work for “people with my skin type”. That is, as you’ve probably already guessed, dark.

She directed her question to Katherine Yesilirmak, director of strategy in the Responsible Tech Adoption Unit, formerly the Centre for Data Ethics and Innovation, a subsidiary of the Department for Skills, Innovation, and Technology.

Yesilirmak did her best, mentioning the problem of bias in training data, the variability of end users, fairness, governmental responsibility for understanding the technology it procures (since it builds very little itself these days), and so on. She is clearly up to date, even referring to the latest study finding that AIs used by human resources consistently prefer résumés with white and male-presenting names over non-white and female-presenting names. But Yesilirmak didn’t really answer the questioner’s fundamental conundrum. Why *should* she trust government systems when they are knowingly commissioned with flaws that exclude her? Well, why?

Pause to remember that 20 years ago, Jim Wayman, a pioneer in biometric identification, told me, “People never have what you think they’re going to have where you think they’re going to have it.” Biometrics systems must be built to accommodate outliers – and it’s hard. For more, see Wayman’s potted history of third-party testing of modern biometric systems in the US (PDF).

Yesilirmak, whose LinkedIn profile indicates she’s been at the unit for a little under three years, noted that the government builds very little of its own technology these days. However, her group is partnering with analogues in other countries and international bodies to build tools and standards that she believes will help.

This panel was nominally about AI governance, but the connection that needed to be made was from what the questioner was describing – technology that makes some people second-class citizens – to digital exclusion, siloed in a different panel. Most people describe the “digital divide” as a binary statistical matter: 1.7 million households are not online, and 40% of households don’t meet the digital living standard, per the Liberal Democrat peer Timothy Clement-Jones, who ruefully noted the “serious gap in knowledge in Parliament” regarding digital inclusion.

Clement-Jones, who is the co-chair of the All Party Parliamentary Group on Artificial Intelligence, cited the House of Lords Communications and Digital Committee’s January 2024 report. Another statistic came from Helen Milner: 23% of people with long-term illness or disabilities are digitally excluded.

The report cites the consumer digital index Lloyds Bank releases each year; the latest found that Internet use is dropping among the over-60s, and that for the first time the percentage of people offline in the previous three months had increased, to 4%. Fifteen percent of those offline are under 50, and overall about 4.7 million people can’t connect to wifi. Ofcom’s 2023 report found that 7% of households (disproportionately poor and/or elderly) have no Internet access, 20% of them because of cost.

“We should make sure the government always provides an analog alternative, especially as we move to digital IDs,” Clement-Jones said. In 2010, when Martha Lane Fox was campaigning to get the last 10% online, one could push back: why should they have to be? Today, paying a parking meter requires an app and, as Royal Holloway professor Lizzie Coles-Kemp noted, smartphones aren’t enough for some services.

Milner finds that a third of those offline already find it difficult to engage with the NHS, creating “two-tier public services”. Clement-Jones added another example: people in temporary housing have to reapply weekly online – but there is no Internet provision in temporary housing.

Worse, however, is thinking technology will magically fix intractable problems. In Coles-Kemp’s example, if someone can’t do their prescribed rehabilitation exercises at home because they lack space, support, or confidence, no app will fix it. In her work on inclusive security technologies, she has long pushed for systems to be less hostile to users in the name of preventing fraud: “We need to do more work on the difference between scammers and people who are desperate to get something done.”

In addition, Milner said, tackling digital exclusion has to be widely embraced – by the Department of Work and Pensions, for example – not just handed off to DSIT. Much comes down to designers who are unlike the people on whom their systems will be imposed and whose direct customers are administrators. “The emphasis needs to shift to the creators of these technologies – policy makers, programmers. How do algorithms make decisions? What is the impact on others of liking a piece of content?”

Concern about the “digital divide” has been with us since the beginning of the Internet. It seems to have been gradually forgotten as online has become mainstream. It shouldn’t be: digital exclusion makes all the other kinds of exclusion worse and adds anger and frustration to an already morbidly divided society.

Illustrations: Martha Lane Fox in 2011 (via Wikimedia).


The master switch

In his 2010 book, The Master Switch, Columbia law professor Tim Wu quotes the television news pioneer Fred W. Friendly, who wrote in a 1970 article for Saturday Review that before any question of the First Amendment and free speech comes the question of “who has exclusive control of the master switch”. In his 1967 memoir, Due to Circumstances Beyond Our Control, Friendly tells numerous stories that illustrate the point, beginning with his resignation as president of CBS News after the network insisted on showing a rerun of I Love Lucy rather than carrying live the first Senate hearings on US involvement in Vietnam.

This is the switch that Amazon founder Jeff Bezos flipped this week when he blocked the editorial board of the Washington Post, which he owns, from endorsing Kamala Harris and Tim Walz in the US presidential election. At that point, every fear people had in 2013, when Bezos paid $250 million to save the struggling 76-time Pulitzer Prize-winning paper famed for breaking Watergate, came true. Bezos, like William Randolph Hearst, Rupert Murdoch, and others before him, exerted his ownership control. (See also the late, great film critic Roger Ebert on the day Rupert Murdoch took over the Chicago Sun-Times.)

If you think of the Washington Post as just a business, as opposed to a public service institution, you can see why Bezos preferred to hedge his bets. But, as former Post journalist Dan Froomkin observed in February 2023, ten years post-sale the newspaper had reverted to its immediately pre-Bezos state, laying off staff and losing money. Then, Froomkin warned that Bezos’ newly-installed “lickspittle” publisher, editor, and editorials editor lacked vision, and suggested Bezos turn the paper into a non-profit, give it an endowment, and leave it alone.

By October 2023, Froomkin was arguing that the Post had blown it by failing to cover the decade’s most important story, the threat to the US’s democratic system posed by “the increasingly demented and authoritarian Republican Party”. As of yesterday, more than 250,000 subscribers had canceled, literally decimating its subscriber base – though, as Jason Koebler writes at 404 Media, the lost revenue is barely a rounding error in Bezos’ wealth.

Almost simultaneously, a similar story was playing out 3,000 miles across the country at the LA Times. There, owner Patrick Soon-Shiong overrode the paper’s editorial board’s intention to endorse Harris/Walz. Several board members have since resigned, along with editorials editor Mariel Garza.

At Columbia Journalism Review, Jeff Jarvis uses Timothy Snyder’s term, “anticipatory obedience,” to describe these situations.

On his Mea Culpa podcast, former Trump legal fixer Michael Cohen has frequently issued a hard-to-believe warning that if Trump is elected he will assemble the country’s billionaires and take full control of their assets, Putin-style. As unAmerican as that sounds, Cohen has been improbably right before; in 2019 Congressional testimony he famously predicted that Trump would never allow a peaceful transition of power. If Trump wins and proves Cohen correct, anticipatory obedience won’t save Bezos or any other billionaire.

The Internet was supposed to provide an escape from this sort of control (in the 1990s, pundits feared The Drudge Report!). Into this context, several bits of social media news also dropped. Bluesky announced $15 million in venture capital funding and a user base of 13 million. Reddit announced its first-ever profit, apparently solely due to the deals the 19-year-old service signed to give Google and OpenAI access to user postings and to use AI to translate users’ posts into multiple languages. Finally, the owner of the Mastodon server botsin.space, which allows users to run bots on Mastodon, is shutting down, ending new account signups and shifting to read-only by December. The owner blames unsustainably increasing costs as the user base and postings continue to grow.

Even though Bluesky is incorporated as a public benefit LLC, the acceptance of venture capital gives pause: venture capital always looks for a lucrative exit rather than value for users. Reddit served tens of millions of users for 19 years without ever making any money; it’s only profitable now because AI developers want its data.

Bluesky’s board includes Techdirt’s Mike Masnick, the notable free speech advocate who this week blasted the Washington Post’s decision in scathing terms. Masnick’s paper proposing to promote free speech by developing protocols rather than platforms serves as a sort of founding document. Platforms centralize user data and share it back out again; protocols are standards anyone can use to write compliant software to enable new connections. Think proprietary (Apple) versus open source (Linux, email, the web).

The point is this: platforms either start with or create billionaire owners; protocols allow participation by both large and small owners. That still leaves the long-term problem of how to make such services sustainable. Koebler writes of the hard work of going independent, but notes that the combination of new technology and the elimination of layers of management and corporate executives makes it vastly cheaper than before. Bluesky so far has no advertising, but plans to offer higher-level features by subscription, still implying a centralized structure. Mastodon instances survive on user donations and volunteer administrators. Its developers should target making it much easier and more efficient to run their instances: democratize the master switch.

Illustrations: Charles Foster Kane (Orson Welles) in his newsroom in the 1941 film Citizen Kane, (via Wikimedia).


Follow the business models

In a market that enabled the rational actions of economists’ fantasies, consumers would be able to communicate their preferences for “smart” or “dumb” objects by exercising purchasing power. Instead, everything from TVs and vacuum cleaners to cars is sprouting Internet connections and rampant data collection.

I would love to believe we will grow out of this phase as the risks of this approach continue to become clearer, but I doubt it, because business models will increasingly insist on the post-sale money, which never existed in the analog market. Subscriptions to specialized features and embedded ads seem likely to take over everything. Essentially, software can change the business model governing any object’s manufacture into Gillette’s famous gambit: sell the razors cheap, and make the real money selling razor blades. See also, in particular, printer cartridges. It’s going to be everywhere, and we’re all going to hate it.

***

My consciousness of the old ways is heightened at the moment because I spent last weekend participating in a couple of folk music concerts around my old home town, Ithaca, NY. Everyone played acoustic instruments and sang old songs to celebrate 58 years of the longest-running folk music radio show in North America. Some of us hadn’t really met for nearly 50 years. We all look older, but everyone sounded great.

A couple of friends there operate a “rock shop” outside their house. There’s no website, there’s no mobile app, just a table and some stone wall with bits of rock and other findings for people to take away if they like. It began as an attempt to give away their own small collection, but it seems the clearing space aspect hasn’t worked. Instead, people keep bringing them rocks to give away – in one case, a tray of carefully laid-out arrowheads. I made off with a perfect, peach-colored conch shell. As I left, they were taking down the rock shop to make way for fantastical Halloween decorations to entertain the neighborhood kids.

Except for a brief period in the 1960s, playing folk music has never been lucrative. However, it’s still harder now: teens buy CDs to ensure they can keep their favorite music, and older people buy CDs because they still play their old collections. But you can’t even *give* a 45-year-old a CD, because they have no way to play it. At the concert, Mike Agranoff highlighted musicians’ need for support in an ecosystem that now pays them just $0.014 (his number) for streaming a track.

***

With both Halloween and the US election scarily imminent, the government the UK elected in July finally got down to its legislative program this week.

Data protection reform is back in the form of the Data Use and Access Bill, Lindsay Clark reports at The Register, saying the bill is intended to improve efficiency in the NHS, the police force, and businesses. It will involve making changes to the UK’s implementation of the EU’s General Data Protection Regulation. Care is needed to avoid putting the UK’s adequacy decision at risk. At the Open Rights Group, Mariano delli Santi warns that the bill weakens citizens’ protection against automated decision making. At medConfidential, Sam Smith details the lack of safeguards for patient data.

At Computer Weekly, Bill Goodwin and Sebastian Klovig Skelton outline the main provisions and hopes: improve patient care, free up police time to spend more protecting the public, save money.

‘Twas ever thus. Every computer system is always commissioned to save money and improve efficiency – they say this one will save 140,000 hours a year of NHS staff time! Every new computer system also always brings unexpected costs in time and money, and messy stages of implementation and adaptation during which everything becomes *less* efficient. There are always hidden costs – in this case, likely the difficulties of curating data and remediating historical bias. An easy prediction: these will be non-trivial.

***

Also pending is the draft United Nations Convention Against Cybercrime; the goal is to get it through the General Assembly by the end of this year.

Human Rights Watch writes that 29 civil society organizations have written to the EU and member states asking them to vote against the treaty’s adoption and consider alternative approaches that would safeguard human rights. The EFF is encouraging all states to vote no.

Internet historians will recall that there is already a convention on cybercrime, sometimes called the Budapest Convention. Drawn up in 2001 by the Council of Europe and in force since 2004, it was signed by 70 countries and ratified by 68. The new treaty, which has been drafted by a much broader range of countries, including Russia and China, is meant to be consistent with that older agreement. However, the hope is that it will achieve the global acceptance its predecessor did not, in part because of that broader participation in drafting it.

However, opponents are concerned that the treaty is vague, fails to limit its application to crimes that can only be committed via a computer, and lacks safeguards. It’s understandable that law enforcement, faced with the kinds of complex attacks on computer systems we see today, want their path to international cooperation eased. But, as EFF writes, that eased cooperation should not extend to “serious crimes” whose definition and punishment are left up to individual countries.

Illustrations: Halloween display seen near Mechanicsburg, PA.


Review: The Web We Weave

The Web We Weave
By Jeff Jarvis
Basic Books
ISBN: 9781541604124

Sometime in the very early 1990s, someone came up to me at a conference and told me I should read the work of Robert McChesney. When I followed the instruction, I found a history of how radio and TV started as educational media and wound up commercially controlled. Ever since, this is the lens through which I’ve watched the Internet develop: how do we keep the Internet from following that same path? If all you look at is the last 30 years of web development, you might think we can’t.

A similar mission animates retired CUNY professor Jeff Jarvis in his latest book, The Web We Weave. In it, among other things, he advocates reanimating the open web by reviving the blogs many abandoned when Twitter came along and embracing other forms of citizen media. Phenomena such as disinformation, misinformation, and other harms attributed to social media, he writes, have precursor moral panics: novels, comic books, radio, TV, all were once new media whose evils older generations fretted about. (For my parents, it was comic books, which they completely banned while ignoring the hours of TV I watched.) With that past in mind, much of today’s online harms regulation leaves him skeptical.

As a media professor, Jarvis is interested in the broad sweep of history, setting social media into the context that began with the invention of the printing press. That has its benefits when it comes to later chapters where he’s making policy recommendations on what to regulate and how. Jarvis is emphatically a free-speech advocate.

Among his recommendations are those such advocates typically support: users should be empowered, educated, and taught to take responsibility, and we should develop business models that support good speech. Regulation, he writes, should include the following elements: transparency, accountability, disclosure, redress, and behavior rather than content.

On the other hand, Jarvis is emphatically not a technical or design expert, and therefore has little to say about the impact on user behavior of technical design decisions. Some things we know are constants. For example, the willingness of (fully identified) online communicators to attack each other was noted as long ago as the 1980s, when Sara Kiesler studied the first corporate mailing lists.

Others, however, are not. Those developing Mastodon, for example, deliberately chose not to implement the ability to quote and comment on a post because they believed that feature fostered abuse and pile-ons. Similarly, Lawrence Lessig pointed out in 1999 in Code and Other Laws of Cyberspace (PDF) that you couldn’t foment a revolution using AOL chatrooms because they had a limit of 23 simultaneous users.

Understanding the impact of technical decisions requires experience, experimentation, and, above all, time. If you doubt this, read Mike Masnick’s series at Techdirt on Elon Musk’s takeover and destruction of Twitter. His changes to the verification system alone have undermined the ability to understand who’s posting and decide how trustworthy their information is.

Jarvis goes on to suggest we should rediscover human scale and mutual obligation, both of which proved crucial as the covid pandemic progressed. The money will always favor mass scale. But we don’t have to go that way.