Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler concludes in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes it easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The incident is a reminder that this type of fraud seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee Inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler’s assessment is correct, the incentives are misaligned, and until or unless advertisers rebel, change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4Chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transactions Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that it is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, on X Byrne is advocating that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one. The draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July in a report the House of Representatives Judiciary Committee called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “least common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at the ACLU.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Simplification

We were warned this was coming at this year’s Computers, Privacy, and Data Protection, and now it’s really here. The data protection NGO Noyb reports that a leaked internal draft (PDF) of the European Commission’s Digital Omnibus threatens to undermine the architecture the EU has been building around data protection, AI, cybersecurity, and privacy generally. At The Register, Connor Jones summarizes the changes; Noyb has detail.

The EU’s workings are, as always, somewhat inscrutable to outsiders. Noyb explains that the omnibus tool is intended to allow multiple laws to be updated simultaneously to “improve the quality of the law and streamline paperwork obligations”. In this case, Noyb argues that the European Commission is abusing this option to fast-track far more substantial and contentious changes that should be subject to impact assessments and feedback from other EU institutions, as well as legal services.

If the move succeeds – the final draft will be presented on November 19 – Noyb believes it could remove fundamental rights to privacy and data protection that Europeans have been building for more than 30 years. Noyb, European Digital Rights, and the Irish Council for Civil Liberties have sent an open letter of objection to the Commission. The basic argument: this isn’t “simplification” but deregulation. The package would still have to be accepted by the European Parliament and a majority of EU member states.

As far as I can recall, business has never much liked data protection. In the early 1990s, when the first laws were being written, I remember being told data protection was a “tax on small business”. Privacy advocates instead see data protection as a way of redressing the power imbalance between large organizations and individuals.

By 1998, when data protection law was implemented in all EU member states, US companies were publicly insisting that the US didn’t need a privacy law in order to be in compliance. Companies could use corporate policies and sectoral laws to provide a “layered approach” that would be just as protective. When I wrote about this for Scientific American in 1999, privacy advocates in the UK predicted a trade war over this, calling it a failure to understand that you can’t cut a deal with a fundamental right – like the First Amendment.

In early 2013, it looked entirely possible that the period of negotiations over data protection reform would end with rollback. GDPR was the focus of intense lobbying efforts. There were, literally, 4,000 proposed amendments, so many that I recall being shown software written to manage and understand them all.

And then…Snowden. His revelations of government spying shifted the mood noticeably, and, under his shadow, when GDPR was finally adopted in 2016 and came into force in 2018, it expanded citizens’ rights and increased penalties for non-compliance. Since then, other countries around the world have used GDPR as a model, including China and several US states.

Those few states aside, data protection law has never been popular at the US federal level, and the pile of law the EU has built around it – the Digital Services Act, the Digital Markets Act, and the AI Act – is particularly unwelcome to the current administration, which sees it as a deliberate attack on US technology companies.

In the UK, the Data (Use and Access) Act, which passed in June, also weakened some data protection provisions. It will be implemented over the year to June 2026.

At its blog, the Open Rights Group argues that some aspects of the DUAA rest on the claim that innovation, economic growth, and public security are harmed by data protection law, a dubious premise.

Until this leak, it seemed possible that the DUAA would break Britain’s adequacy decision and remove the UK from the list of countries to which the EU allows data transfers. The rule is that to qualify a country must have legal protections equivalent to those of the EU. It would be the wrong way round if instead of the UK enhancing its law to match the EU, the EU weakened its law to match the UK.

There’s a whole secondary issue here, which is that a law is only useful if it’s enforced. Noyb actively brings legal cases to force enforcement in the EU. In the UK, privacy advocates, like ORG, have long complained that the Information Commissioner’s Office is increasingly quiescent.

Many of the EU’s changes appear to be aimed at making it easier for AI companies to exploit personal data to develop models. It’s hard to know where that will end, given that every company is sprinkling “AI” over itself in order to sound exciting and new (until the next thing comes along). If this thing comes into force, you have to think data protection law will increasingly apply only to small businesses running older technology that can’t be massaged to qualify for an exemption.

I blame this willingness to undermine fundamental rights at least partly on the fantasy of the “AI race”. This is nation-state-level FOMO. What race? What’s the end point? What does it mean to “win”? Why the AI race, and not the net-zero race, the renewables race, or the sustainability race? All of those would produce tangible benefits and solve known problems of long standing and existential impact.

Illustrations: A drunk parrot in a Putney garden (photo by Simon Bisson; used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Undue process

To the best of my knowledge, Imgur is the first mainstream company to quit the UK in response to the Online Safety Act (though many US news sites remain unavailable due to 2018’s General Data Protection Regulation). Widely used to host pictures for reuse on web forums and social media, Imgur shut off UK connections on Tuesday. In a statement on Wednesday, the company said UK users can still exercise their data protection rights. That is, Imgur will reply within the statutory timeframe to requests for copies of their data or for their accounts to be deleted.

In this case, the push came from the Information Commissioner’s Office. In a statement, the ICO explains that on September 10 it notified Imgur’s owner, MediaLab AI, of its provisional findings from its previously announced investigation into “how the company uses children’s information and its approach to age assurance”. The ICO proposed to fine Imgur. Imgur promptly shut down UK access. The ICO’s statement says departure changes nothing: “We have been clear that exiting the UK does not allow an organisation to avoid responsibility for any prior infringement of data protection law, and our investigation remains ongoing.”

The ICO calls Imgur’s departure “a commercial decision taken by the company”. While that’s true, EU and UK residents have dealt for years with unwanted cookie consent banners because companies subject to data protection laws have engaged in malicious compliance intended to spark a rebellion against the law. So: wash.

Many individual users stick to Imgur’s free tier, but it profits from subscriptions and advertising. MediaLab AI bought it in 2021, and uses it as a platform to mount advertising campaigns at scale for companies like Kraft-Heinz and Alienware.

Meanwhile, UK users’ Imgur accounts are effectively hostages. We don’t want lawless companies. We also don’t want bad laws – or laws that are badly drafted and worse implemented. Children’s data should be protected – but so should everyone’s. There remains something fundamentally wrong with having a service many depend upon yanked with no notice.

Companies’ threats to leave the market rather than comply with the law are often laughable – see for example Apple’s threat to leave the EU if it doesn’t repeal the Digital Markets Act. This is the rare occasion when a company has actually done it (although presumably they can turn access back on at any time). If there’s a lesson here, it may be that without EU membership Britain is now too small for foreign companies to bother complying with its laws.

***

Boundary disputes and due process are also the subject of a lawsuit launched in the US against Ofcom. At the end of August, 4chan and Kiwi Farms filed a complaint in a Washington, DC federal court against Ofcom, claiming the regulator is attempting to censor them and using the OSA to “target the free speech rights of Americans”.

We hear less about 4chan these days, but in his book The Other Pandemic, journalist James Ball traces much of the spread of QAnon and other conspiracy theories to the site. In his account, these memes start there, percolate through other social media, and become mainstream and monetized on YouTube. Kiwi Farms is equally notorious for targeted online and offline harassment.

The argument mooted by the plaintiffs’ lawyer Preston Byrne is that their conduct is lawful within the jurisdictions where they’re based and that UK and EU countries seeking to enforce their laws should do so through international treaties and courts. There’s some precedent to the first bit, albeit in a different context. In 2010, the New York State legislature and then the US Congress passed the Libel Tourism Protection Act. Under it, US courts are prevented from enforcing British libel judgments if the rulings would not stand in a US court. The UK went on to modify its libel laws in 2013.

Any country has the sovereignty to demand that companies active within its borders comply with its laws, even laws that are widely opposed, and to punish them if they don’t, which is another thing 4chan’s lawyers are complaining about. The question the Internet has raised since the beginning (see also the Apple case and, before it, the 1996 case United States v. Thomas) is where the boundary is and how it can be enforced. 4chan is trying to argue that the penalties Ofcom provisionally intends to apply are part of a campaign of targeted harassment of US technology companies. Odd to see *4chan* adopting the technique long ago advocated by staid, old IBM: when under attack, wrap yourself in the American flag.

***

Finally, in the consigned-to-history category, AOL shut down dialup on September 30. I recall traveling with a file of all of the dialup numbers that the even earlier service, CompuServe, maintained around the world. It was, in its time, a godsend. (Then AOL bought up the service, its biggest competitor before the web, and shut it down, seemingly out of spite.) For this reason, my sympathies are with the 124,000 US users the US Census Bureau says still rely on dial-up – only a few thousand of them were paying for AOL, per CNBC – and the uncounted others elsewhere. It’s easy to forget when you’re surrounded by wifi and mobile connections that Internet access remains hard for many people.

Elsewhere this week: Childproofing the Internet, at Skeptical Inquirer.

Illustrations: Imgur’s new UK home page.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Remediating monopoly

This week Judge Amit P. Mehta handed down his ruling on remedies in the antitrust case on search. Decided in 2024, this was the first to find that Google acted illegally as a monopolist. Any decision Mehta made was going to displease a lot of people, and so it has. What the US Department of Justice wanted: Google to divest the Chrome browser and perhaps Android, end the agreements by which Apple, Samsung, and others pay Google to make its search engine the default, and share its search index with competitors. What Mehta says: Google can keep Chrome and Android and go on paying people to make its search engine the default, but it cannot make those agreements exclusive for six years. And it must share search data with competitors.

So Apple gets to keep the roughly $20 billion a year (as of 2022) that comes from Google. Mozilla wins, too: the money it gets from Google is 85% of its income. So do small players such as Opera, as Mike Masnick details at TechDirt. Masnick is a rarity in liking Mehta’s ruling, which he calls “elegant”.

It’s good the judge recognizes the importance of not crippling Google’s browser competitors. But it also shows how distorted and filled with dependencies the market has become.

Most commentators think Google got off very lightly considering it was convicted as a monopolist and it will be allowed to continue doing most of the things the court said it did to exploit its position. Even the Wall Street Journal called the ruling “a notable victory” for both Apple and Google. At Big, where you expect to find anger at monopoly power, Matt Stoller is scathing, arguing that Mehta’s remedies will “obviously” fail, most especially at humbling Google’s leadership so that the company changes its behavior. He compares it – correctly, from memory – to the 1995 Microsoft case. Even though that company avoided being broken up, the case left the leadership averse to risking further regulatory actions.

Google’s appeals are still to come. Also pending are remedies in the *other* case that convicted Google of monopoly behavior, this one in adtech. By the time all is settled, as Mehta writes in his ruling, AI could have profoundly changed the market. This belief defies what former FTC chair Lina Khan wrote to kick antitrust enforcement into a new era. In her career-making 2017 paper on Amazon, she argued that the era in which powerful companies were routinely unseated by the two guys in a garage Bill Gates feared in the late 1990s was over. The big technology companies have become so wealthy they can buy up any startup that seems like it might become a threat.

Mehta is comparing the arrival of generative AI chatbots to those earlier disruptions. Recent studies don’t necessarily agree. Tim Bajarin reports at Forbes that a two-year study by One Little Web finds that as of March chatbots accounted for just 2.96% of searches – and among those chatbots, Google’s Gemini is number three, only a little behind DeepSeek – though a *long* way behind leader ChatGPT (1.7 billion queries versus 47.7 billion).

Expecting “pre-monopolized” generative AI to change the market is a gamble, and possibly a bigger one than Mehta thinks. By the time Google exhausts its appeals, generative AI could indeed have overwhelmed the business of general search and shoved Google up its YouTube. But equally, it could have fizzled entirely.

At his blog, Ed Zitron has compiled a list of all the reasons why AI is a bubble getting ready to go volcanic all over everyone. Among his references is the recent MIT study that found that 95% of US companies investing in generative AI derive no benefit. To be fair, the study blames lack of integration and organizational support rather than the quality of large language models or the technology itself. At Forbes, Arafat Kabir suggests that MIT has measured the wrong thing, failing to recognize how many employees and others are using generative AI to automate small, routine tasks. A friend tells me he uses it to start research on new topics by having it compile a list of sources and references to read further, the sort of assignment he might give a junior researcher could he afford one.

But Zitron is not alone. At the LA Times, Michael Hiltzik argues that the AI bubble began losing air on August 7, when OpenAI launched the latest version of ChatGPT to a worldwide response of “meh”. As predicted here two years ago, generative AI may be hitting a wall in terms of improvement. Hiltzik compares what he thinks is coming to the “dot-com mirage”.

Thing is, while there were many mirages connected with the dot-com boom, and there was a bubble that burst, the infrastructure that was built out to support it was no mirage; it went on to support the massive Internet growth that’s happened since.

But perhaps disruption will come from an entirely different direction. This week Mariella Moon reported at The Verge that Switzerland has released Apertus, an open source AI language model that its public-institution creators, the Swiss Federal Institute of Technology Lausanne (EPFL), ETH Zurich, and the Swiss National Supercomputing Centre (CSCS), say was trained solely on publicly available data that conforms to copyright and data protection laws. Maybe the new “two guys in a garage” is a national government.

Illustrations: “The kind of anti-trust legislation that is needed”, by J.S. Pughe (via Library of Congress).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Drought conditions

At 404 Media, Matthew Gault was first to spot a press release from the UK’s National Drought Group offering a list of things we can do to save water. The meeting makes sense: people think of the UK as a rainy country, but a growing number of its regions are experiencing extraordinarily dry weather. This “green and pleasant England” is brown.

Last on the Group’s list of things we can do to save water at home: “Delete old emails and pictures as data centres require vast amounts of water to cool their systems.”

I had to look up the National Drought Group. Says Water Magazine: “The National Drought Group includes the Met[eorology] Office, government, regulators, water companies, farmers, the [Canal and River Trust], angling groups and conservation experts. With further warm, dry weather expected, the NDG will continue to meet regularly to coordinate the national response and safeguard water supplies for people, agriculture, and the environment.”

For those outside the UK: its ten water companies are particularly unpopular just now. Created by privatization during Margaret Thatcher’s decade as prime minister, six are being sued for £500 million for “underreporting sewage spills”. Others are being sued for overcharging 35 million household water customers. As just one example, Thames Water will raise prices by 35% over the next three years (on top of other recent rises), and expects customers to pay £7.5 billion for a new reservoir in Oxfordshire. It already has £17 billion in debt, and this week we learned environment secretary Steve Reed has made contingency plans in case the company goes bust. As George Monbiot writes at the Guardian, money that should have been invested in infrastructure went instead to shareholders. Climate change is a factor, sure, but so is poor water management.

All this being the case, the impact consumers can have by doing even the most effective things is dwarfed by the water companies’ failures. Deleting emails is not one of the most effective things.

At his The Weird Turn Pro Substack, Andy Masley provides some useful comparisons. Basic conclusion: you’d have to delete billions of emails to equal the savings of fixing your leaking toilet (if you have one). The whole thing reminds me of a while back when everyone was being told to save electricity by unplugging everything to extinguish all those standby lights. Last year, Which? pointed out that the savings are really, really small.

The bizarre idea of deleting emails is coming, at least in part, from a government that is proposing a raft of technology-related legislation and wants, in the next five to ten years, to mastermind all sorts of IT projects, from making AI pervasive throughout government to bringing in a digital ID card. Are they thinking about the data centers they’ll need and the impact they’ll have on water management? Maybe instead tell people not to use generative AI or mine cryptocurrencies?

This much is true: data centers are a problem across the world because they require extreme amounts of water for cooling. In recent examples: at the New York Times, Eli Tan visits the US state of Georgia. At Rest of World, last year Ushar Daniele and Khadija Alam predicted upcoming water shortages in Malaysia, and Claudia Urquieta and Daniela Dib found protests in Chile, where 28 new data centers are planned.

Telling people to delete emails and pictures is just embarrassing – and sad, if people actually do it and sacrifice personal history they care about. As Masley writes, “Major governments should really know better than this.”

***

Two weeks ago we noted the arrival of age verification in the UK. Related, on May 8 the Wikimedia Foundation announced it had filed a legal challenge to the categorization provisions of the Online Safety Act (not the Act itself). The basic problem: there is little in the Act to distinguish between Wikipedia, a crowd-edited provider of highly curated information, and Facebook…or X.

The Foundation says nearly 260,000 volunteers worldwide in 300 languages contribute to Wikipedia. I do myself, but verified or not, I’m in no danger. Many are contributing factual information in countries where the facts offend an authoritarian government intent on shutting them up. The Foundation argues that 1) Wikipedia is “one of the world’s most trusted and widely used digital public goods”; 2) it is at risk of being placed in the highest-risk category because of its size and interactive structure; 3) being so categorized would force it to verify the identity of contributors, placing many at risk; 4) doing so could endanger the existence of tools the site uses to combat harmful content; and 5) “criminal anonymous abuse”, which is what the Category 1 duty is supposed to help solve, isn’t a problem Wikipedia has. Instead, identifying volunteers is more likely to expose them to it.

So bad news: on August 11, the High Court of Justice dismissed the case.

The better news is that Justice Jeremy Johnson warned that if Ofcom does place Wikipedia in Category 1, it would have to be justifiable as proportionate. The judge also acknowledged the testimony of a user identified as “BLN”, who provided evidence of the extensive threats editors can face.

No one claims Wikipedia is perfect. But it remains an extraordinary collaborative achievement and a public good. It would be a horrifying consequence if legislation intended to protect children deprived them of it.

Illustrations: Kew Green, August 2025.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Magic math balls

So many ironies, so little time. According to the Financial Times (and syndicated at Ars Technica), the US government, which itself has traditionally demanded law enforcement access to encrypted messages and data, is pushing the UK to drop its demand that Apple weaken its encryption. Normally, you want to say, Look here, countries are entitled to have their own laws whether the US likes it or not. But this is not a law we like!

This all began in February, when the Washington Post reported that the UK’s Home Office had issued Apple with a Technical Capability Notice. Issued under the Investigatory Powers Act (2016) and supposed to be kept secret, the TCN demanded that Apple undermine the end-to-end encryption used for iCloud’s Advanced Data Protection feature. Much protest ensued, followed by two legal cases in front of the Investigatory Powers Tribunal, one brought by Apple, the other by Privacy International and Liberty. WhatsApp has joined Apple’s legal challenge.

Meanwhile, Apple withdrew ADP in the UK. Some people argued this didn’t really matter, as few used it, which I’d call a failure of user experience design rather than an indication that people didn’t care about it. More of us saw it as setting a dangerous precedent for both encryption and the use of secret notices undermining cybersecurity.

The secrecy of TCNs is clearly wrong and presents a moral hazard for governments that may prefer to keep vulnerabilities secret so they can take advantage for surveillance purposes. Hopefully, the Tribunal will eventually agree and force a change in the law. The Foundation for Information Policy Research (obDisclosure: I’m a FIPR board member) has published a statement explaining the issues.

According to the Financial Times, the US government is applying a sufficiently potent threat of tariffs to lead the UK government to mull how to back down. Even without that particular threat, it’s not clear how much the UK can resist. As Angus Hanton documented last year in the book Vassal State, the US has many well-established ways of exerting its influence here. And the vectors are growing; Keir Starmer’s Labour government seems intent on embedding US technology and companies into the heart of government infrastructure despite the obvious and increasing risks of doing so. When I read Hanton’s book earlier this year, I thought remaining in the EU might have provided some protection, but Caroline Donnelly warns at Computer Weekly that they, too, are becoming dangerously dependent on US technology, specifically Microsoft.

It’s tempting to blame everything on the present administration, but the reality is that the US has long used trade policy and treaties to push other countries into adopting laws regardless of their citizens’ preferences.

***

As if things couldn’t get any more surreal, this week the Trump administration *also* issued an executive order banning “woke AI” in the federal government. AI models are in future supposed to be “politically neutral”. So, as Kevin Roose writes at the New York Times, the culture wars are coming for AI.

The US president is accusing chatbots of “Marxist lunacy”, while the rest of the world calls them inaccurate, biased toward repeating and expanding historical prejudices, and inconsistent. We hear plenty about chatbots adopting Nazi tropes; I haven’t heard of one promoting workers’ and migrants’ rights.

If we know one thing about AI models it’s that they’re full of crap all the way down. The big problem is that people are deploying them anyway. At the Canary, Steve Topple reports that the UK’s Department of Work and Pensions admits in a newly-published report that its algorithm for assessing whether benefit claimants might commit fraud is ageist and racist. A helpful executive order would set must-meet standards for *accuracy*. But we do not live in those times.

The Guardian reports that two more Trump EOs expedite building new data centers, promote exports of American AI models, expand the use of AI in the federal government, and intend to solidify US dominance in the field. Oh, and Trump would really like it if people would stop calling it “artificial” and find a new name. Seven years ago, “aspirational intelligence” seemed like a good idea. But that was back when we heard a lot about incorporating ethics. So…”magic math ball”?

These days, development seems to proceed ethics-free. DWP’s report, for example, advocates retraining its flawed algorithm but says continuing to operate it is “reasonable and proportionate”. In 2021, for European Digital Rights Initiative, Agathe Balayn and Seda Gürses found, “Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.” In other words, no matter what you think is “neutral”, training data, model, and algorithms are only as “neutral” as their wider context allows them to be.

Meanwhile, nothing to curb the escalating waste. At 404 Media, Emanuel Maiberg finds that Spotify is publishing AI-generated songs from dead artists without anyone’s permission. On Monday, MSNBC’s Rachel Maddow told viewers that there’s so much “AI slop” about her that they’ve posted Is That Really Rachel? to catalog and debunk it.

As Ed Zitron writes, the opportunity costs are enormous.

In the UK, the US, and many other places, data centers are threatening the water supply.

But sure, let’s make more of that.

Illustrations: Magic 8 ball toy (via frankieleon at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her website has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Cautionary tales

I’ve been online for nearly 34 years, and I’m thinking of becoming a child. Or at least, a child to big user-to-user social media services, which next week will start asking for proof of adulthood. On July 25, the new age verification requirements under the Online Safety Act come into effect in the UK. The regulator, Ofcom, has published a guide.

Plenty of companies aim to join this new market. Some are familiar: credit scorers Experian and TransUnion. Others are new: Yoti, which we saw demonstrated back in 2016, and Sam Altman and Andreessen Horowitz-backed six-year-old startup World, which recently did a promotional tour for the UK launch of its Orb identification system. Summary: many happy privacy words, but still dubious.

Reddit picked Persona; Dearbail Jordan at the BBC says Redditors will need to upload either a selfie for age estimation or a government-issued ID. Reddit says it will not see this data, only storing each user’s verification status along with the birth date they’ve (optionally) provided.

Bluesky has chosen Kids’ Web Services from Epic Games. The announcement says KWS accepts multiple options: payment cards, ID scans, and face scans. Users who decline to supply this information will be denied access to adult content and direct messaging. How much do I care about either? Would I rather just be a child to two-year-old Bluesky?

On older sites my adulthood ought to be obvious: I joined Twitter/X in 2008 and Reddit in 2015. Do the math, guys! I suppose there is a chance I could have created the account, forgotten it, and then revived it for a child (the “older brother problem”), but I’m not sure these third-party verifiers solve that either.

Everyone wants to protect children. But it doesn’t make sense to do it by creating a system that exposes everyone, including children, to new privacy risks. In its report on how to fix the OSA, the Open Rights Group argues that interoperability and portability should be first principles, and that users should be able to choose providers and methods. Today, the social media companies don’t see age verification data; in five years will they be buying up those providers? These first steps matter, as they are setting the template for what is to come.

This is the opening of a floodgate. On June 27 the US Supreme Court ruled in Free Speech Coalition v. Paxton to uphold a law requiring pornographic websites to verify users’ ages through government-issued ID. At TechDirt, Mike Masnick described the ruling as taking a chainsaw to the First Amendment.

It’s easy to predict that there will be scandals surrounding the data age verifiers collect, and others where technological failures let children access the wrong sort of content. We’ll hear less about the frustrations of people who are blocked by age verification from essential information. Meanwhile, child safety folks will continue pushing for new levels of control.

The big question is this: how will we know if it’s working? What does “success” look like?

***

At Platformer, Casey Newton covers Substack’s announcement that it has closed a $100 million series C funding round, valuing the company at $1.1 billion. The eight-year-old company gets to say it’s a unicorn.

Newton tries to understand how Substack is worth that. He predicts – logically – that its only choice to justify its venture capitalists’ investment will be rampant enshittification. These guys don’t put in that kind of money without expecting a full-bore return, which is why Newton is dubious about the founders’ promise to invest most of that newly-raised capital in creators. Recall the stages Cory Doctorow laid out: first they amass as many users as possible; then they abuse those users to amass as many business customers (advertisers) as possible; then they squeeze everyone.

Substack, which announced four months ago that it – or, more correctly, its creators – has more than 5 million paid subscriptions, is different in that its multi-sided market structure is more like Uber or Amazon Marketplace than like a social media site or traditional publisher. It has users (readers and listeners), creators (like Uber’s drivers or Amazon’s sellers), and customers (advertisers). Viewed that way, it’s easy to see Substack’s most likely path: raise prices (users and advertisers), raise thresholds and commissions (creators), and, like Amazon, force sellers (creators) into using fee-based additional services in order to stay afloat. Plus, it must crush the competition. See similar math from Anil Dash.

Less ponderable is the headwind of Substack’s controversial hospitality to extremists, noted in 2023 by Jonathan Katz at The Atlantic. Some creators – like Newton – have opted to leave for competitor Ghost, which is both open source and cheaper. Many friends refuse to pay Substack even when they want to support creators whose work they admire. At the time, Stephen Bush responded at the Financial Times that Substack should admit that it’s not a publisher but a “handy bit of infrastructure for sending newsletters”. Is that worth $1.1 billion?

Like earlier Silicon Valley companies, Substack is planning to reverse its previous disdain for advertising, as Benjamin Mullin and Jessica Testa report at the New York Times. The company is apparently also looking forward to embracing social networking.

So, no really new ideas, then?

Illustrations: Unicorn (by Pearson Scott Foresman via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A thousand small safety acts

“The safest place in the world to be online.”

I think I remember that slogan from Tony Blair’s 1990s government, when it primarily related to ecommerce. It morphed into child safety – for example, in 2010, when the first Digital Economy Act was passed, or 2017, when the Online Safety Act, passed in 2023 and entering into force in March 2025, was but a green paper. Now, Ofcom is charged with making it reality.

As prior net.wars posts attest, the 2017 green paper began with the idea that social media companies could be forced to pay, via a levy, for the harm they cause. The key remaining element of that is a focus on the large, dominant companies. The green paper nodded toward designing proportionately for small businesses and startups. But the large platforms pull the attention: rich, powerful, and huge. The law that’s emerged from these years of debate takes in hundreds of thousands of divergent services.

On Mastodon, I’ve been watching lawyer Neil Brown scrutinize the OSA with a particular eye on its impact on the wide ecosystem of what we might call “the community Internet” – the thousands of web boards, blogs, chat channels, and who-knows-what-else with no business model because they’re not businesses. As Brown keeps finding in his attempts to provide these folks with tools they can use, they are struggling to understand and comply with the act.

First things first: everyone agrees that online harm is bad. “Of course I want people to be safe online,” Brown says. “I’m lucky, in that I’m a white, middle-aged geek. I would love everyone to have the same enriching online experience that I have. I don’t think the act is all bad.” Nonetheless, he sees many problems with both the act itself and how it’s being implemented. In contacts with organizations critiquing the act, he’s been surprised to find how many unexpectedly agree with him about the problems for small services. However, “Very few agreed on which was the worst bit.”

Brown outlines two classes of problem: the act is “too uncertain” for practical application, and the burden of compliance is “too high for insufficient benefit”.

Regarding the uncertainty, his first question is, “What is a user?” Is someone who reads net.wars a user, or just a reader? Do they become a user if they post a comment? Do they start interacting with the site when they read a comment, make a comment, or only when they comment to another user’s comment? In the fediverse, is someone who reads postings he makes via his private Mastodon instance its user? Is someone who replies from a different instance to that posting a user of his instance?

His instance has two UK users – surely insignificant. Parliament didn’t set a threshold for the “significant number of UK users” that brings a service into scope, so Ofcom says it has no answer to that question. But if you go by percentage, 100% of his user base is in Britain. Does that make Britain his “target market”? Does having a domain name in the UK namespace? What is a target market for the many community groups running infrastructure for free software projects? They just want help with planning, or translation; they’re not trying to sign up users.

Regarding the burden, the act requires service providers to perform a risk assessment for every service they run. A free software project will probably have a dozen or so – a wiki, messaging, a documentation server, and so on. Brown, admittedly not your average online participant, estimates that he himself runs 20 services from his home. Among them is a photo-sharing server, for which the law would have him write contractual terms of service for the only other user – his wife.

“It’s irritating,” he says. “No one is any safer for anything that I’ve done.”

So this is the mismatch. The law and Ofcom imagine a business with paid staff signing up users to profit from them. What Brown encounters is more like a stressed-out woman managing a small community for fun after she puts the kids to bed.

Brown thinks a lot could be done to make the act less onerous for the many sites that are clearly not the problem Parliament was trying to solve. Among them, carve out low-risk services. This isn’t just a question of size, since a tiny terrorist cell or a small ring sharing child sexual abuse material can pose acres of risk. But Brown thinks it shouldn’t be too hard to come up with criteria to rule services out of scope such as a limited user base coupled with a service “any reasonable person” would consider low risk.

Meanwhile, he keeps an In Memoriam list of the law’s casualties to date. Some have managed to move or find new owners; others are simply gone. Not on the list are non-UK sites that now simply block UK users. Others, as Brown says, just won’t start up. The result is an impoverished web for all of us.

“If you don’t want a web dominated by large, well-lawyered technology companies,” Brown sums up, “don’t create a web that squeezes out small low-risk services.”

Illustrations: Early 1970s cartoon illustrating IT project management.

Wendy M. Grossman is an award-winning journalist. Her Web site has extensive links to her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Negative externalities

A sheriff’s office in Texas searched a giant nationwide database of license plate numbers captured by automatic cameras to look for a woman they suspected of self-managing an abortion. As Rindala Alajazi writes at EFF, that’s 83,000 cameras in 6,809 networks belonging to Flock Safety, many of them in states where abortion is legal or protected as a fundamental right until viability.

We’ve known something like this was coming ever since 2022, when the US Supreme Court overturned Roe v. Wade and returned the power to regulate abortion to the individual US states. The resulting unevenness made it predictable that the strongest opponents to legal abortion would turn their attention to interstate travel.

The Electronic Frontier Foundation has been warning for some time about Flock’s database of camera-captured license plates. Recently, Jason Koebler reported at 404 Media that US Immigration and Customs Enforcement has been using Flock’s database to find prospects for deportation. Since ICE does not itself have a contract with Flock, it’s been getting local law enforcement to perform searches on its behalf. “Local” refers only to the law enforcement personnel; they have access to camera data that’s shared nationally.

The point is that once the data has been collected it’s very hard to stop mission creep. On its website, Flock says its technology is intended to “solve and eliminate crime” and “protect your community”. That might have worked when we all agreed what was a crime.

***

A new MCTD Cambridge report makes a similar point about menstrual data, when sold at scale. Now, I’m from the generation that managed fertility with a paper calendar, but time has moved on, and fertility tracking apps allow a lot more of the self-quantification that can be helpful in many situations. As Stephanie Felsberger writes in introducing the report, menstrual data is highly revealing of all sorts of sensitive information. Privacy International has studied period-tracking apps, and found that they’ve improved but still pose serious privacy risks.

On the other hand, I’m not so sure about the MCTD report’s third recommendation – that government build a public tracker app within the NHS. The UK doesn’t have anything like the kind of divisive rhetoric around abortion that the US does, but the fact remains that legal abortion is a 1967 carve-out from an 1861 law. In the UK, procuring an abortion is criminal *except* during the first 24 weeks, or if the mother’s life is in danger, or if the fetus has a serious abnormality. And even then, sign-off is required from two doctors.

Investigations and prosecutions of women under that 1861 law have been rising, as Shanti Das reported at the Guardian in January. Pressure in the other direction from US-based anti-choice groups such as the Alliance for Defending Freedom has also been rising. For years it’s seemed like this was a topic no one really wanted to reopen. Now, health care providers are calling for decriminalization, and, as Hannah Al-Oham reported this week, there are two such proposals currently in front of Parliament.

Also relevant: a month ago, Phoebe Davis reported at the Observer that in January the National Police Chiefs’ Council quietly issued guidance advising officers to search homes for drugs that can cause abortions in cases of stillbirths and to seize and examine devices to check Internet searches, messages, and health apps to “establish a woman’s knowledge and intention in relation to the pregnancy”. There was even advice on how to bypass the requirement for a court order to access women’s medical records.

In this context, it’s not clear to me that a publicly owned app is much safer or more private than a commercial one. What’s needed is open source code that can be thoroughly examined that keeps all data on the device itself, encrypted, in a segregated storage space over which the user has control. And even then…you know, paper had a lot of benefits.

***

This week the UK Parliament passed the Data (Use and Access) bill, which now just needs a royal signature to become law. At its site, the Open Rights Group summarizes the worst provisions, mostly a list of ways the bill weakens citizens’ rights over their data.

Brexit was sold to the public on the basis of taking back national sovereignty. But, as then-MEP Felix Reda said the morning after the vote, national sovereignty is a fantasy in a globalized world. Decisions about data privacy can’t be made imagining they are only about *us*.

As ORG notes, the bill has led European Digital Rights to write to the European Commission asking for a review of the UK’s adequacy status. This decision, granted in 2020, was due to expire in June 2025, but the Commission granted a six-month extension to allow the bill’s passage to complete. In 2019, when the UK was at peak Brexit chaos and it seemed possible that the Conservative then-government would allow the UK to leave the EU with no deal in place, net.wars noted the risk to data flows. The current Labour government, with its AI and tech policy ambitions, ought to be more aware of the catastrophe losing adequacy would present. And yet.

Illustrations: Map from the Center for Reproductive Rights showing the current state of abortion rights across the US.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast and a regular guest on the TechGrumps podcast. Follow on Mastodon or Bluesky.

Nephology

For an hour yesterday (June 5, 2025), we were treated to the spectacle of the US House Judiciary Committee, both Republicans and Democrats, listening – really listening, it seemed – to four experts defending strong encryption. The four: technical expert Susan Landau and lawyers Caroline Wilson-Palow, Richard Salgado, and Gregory Nojeim.

The occasion was a hearing on the operation of the Clarifying Lawful Overseas Use of Data Act (2018), better known as the CLOUD Act. It was framed as collecting testimony on “foreign influence on Americans’ data”. More precisely, the inciting incident was a February 2025 Washington Post article revealing that the UK’s Home Office had issued Apple with a secret demand that it provide backdoor law enforcement access to user data stored using the Advanced Data Protection encryption feature it offers for iCloud. This type of demand, issued under S253 of the Investigatory Powers Act (2016), is known as a “technical capability notice”, and disclosing its existence is a crime.

The four were clear, unambiguous, and concise, incorporating the main points made repeatedly over the last 35 years. Backdoors, they all agreed, imperil everyone’s security; there is no such thing as a hole only “good guys” can use. Landau invoked Salt Typhoon and, without ever saying “I warned you at the time”, reminded lawmakers that the holes in the telecommunications infrastructure that they mandated in 1994 became a cybersecurity nightmare in 2024. All four agreed that with so much data being generated by all of us every day, encryption is a matter of both national security and privacy. Referencing the FBI’s frequent claim that its investigations are going dark because of encryption, Nojeim dissented: “This is the golden age of surveillance.”

The lawyers jointly warned that other countries such as Canada and Australia have similar provisions in national legislation that they could similarly invoke. They made sensible suggestions for updating the CLOUD Act to set higher standards for nations signing up to data sharing: set criteria for laws and practices that they must meet; set criteria for what orders can and cannot do; and specify additional elements countries must include. The Act could be amended to include protecting encryption, on which it is currently silent.

The lawmakers reserved particular outrage for the UK’s audacity in demanding that Apple provide that backdoor access for *all* users worldwide. In other words, *Americans*.

Within the UK, a lot has happened since that February article. Privacy advocates and other civil liberties campaigners spoke up in defense of encryption. Apple soon withdrew ADP in the UK. In early March, the UK government and security services removed advice to use Apple encryption from their websites – a responsible move, but indicative of the risks Apple was being told to impose on its users. A closed-to-the-public hearing was scheduled for March 14. Shortly before it, Privacy International, Liberty, and two individual claimants filed a complaint with the Investigatory Powers Tribunal seeking for the hearing to be held in public, and disputing the lawfulness, necessity, and secrecy of TCNs in general. Separately, Apple appealed against the TCN.

On April 7, the IPT released a public judgment summarizing the more detailed ruling it provided only to the UK government and Apple. Short version: it rejected the government’s claim that disclosing the basic details of the case will harm the public interest. Both this case and Apple’s appeal continue.

As far as the US is concerned, however, that’s all background noise. The UK’s claim to be able to compel the company to provide backdoor access worldwide seems to have taken Congress by surprise, but a day like this has been on its way ever since 2014, when the UK included extraterritorial power in the Data Retention and Investigatory Powers Act (2014). At the time, no one could imagine how they would enforce this novel claim, but it was clearly something other governments were going to want, too.

This Judiciary Committee hearing was therefore a festival of ironies. For one thing, the US’s own current administration is hatching plans to merge government departments’ carefully separated databases into one giant profiling machine for US citizens. Second, the US has always regarded foreigners as less deserving of human rights than its own citizens; the notion that another country similarly privileges itself went down hard.

More germane, subsidiaries of US companies remain subject to the PATRIOT Act, under which, as the late Caspar Bowden pointed out long ago, the US claims the right to compel them to hand over foreign users’ data. The CLOUD Act itself was passed in response to Microsoft’s refusal to violate Irish data protection law by fulfilling a New York district judge’s warrant for data relating to an Irish user. US intelligence access to European users’ data under the PATRIOT Act has been the big sticking point that activist lawyer Max Schrems has used to scuttle a succession of US-EU data sharing arrangements under GDPR. Another may follow soon: in January, the incoming Trump administration fired most of the Privacy and Civil Liberties Oversight board tasked to protect Europeans’ rights under the latest such deal.

But, no mind. Feast, for a moment, on the thought of US lawmakers hearing, and possibly willing to believe, that encryption is a necessity that needs protection.

Illustrations: Gregory Nojeim, Richard Salgado, Caroline Wilson-Palow, and Susan Landau facing the Judiciary Committee on June 5, 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.