Simplification

We were warned this was coming at this year’s Computers, Privacy, and Data Protection, and now it’s really here. The data protection NGO Noyb reports that a leaked internal draft (PDF) of the European Commission’s Digital Omnibus threatens to undermine the architecture the EU has been building around data protection, AI, cybersecurity, and privacy generally. At The Register, Connor Jones summarizes the changes; Noyb has the detail.

The EU’s workings are, as always, somewhat inscrutable to outsiders. Noyb explains that the omnibus tool is intended to allow multiple laws to be updated simultaneously to “improve the quality of the law and streamline paperwork obligations”. In this case, Noyb argues that the European Commission is abusing this option to fast-track far more substantial and contentious changes that should be subject to impact assessments and feedback from other EU institutions, as well as legal services.

If the move succeeds – the final draft will be presented on November 19 – Noyb believes it could remove fundamental rights to privacy and data protection that Europeans have been building for more than 30 years. Noyb, European Digital Rights, and the Irish Council for Civil Liberties have sent an open letter of objection to the Commission. The basic argument: this isn’t “simplification” but deregulation. The package would still have to be accepted by the European Parliament and a majority of EU member states.

As far as I can recall, business has never much liked data protection. In the early 1990s, when the first laws were being written, I remember being told data protection was a “tax on small business”. Privacy advocates instead see data protection as a way of redressing the power imbalance between large organizations and individuals.

By 1998, when data protection law was implemented in all EU member states, US companies were publicly insisting that the US didn’t need a privacy law in order to be in compliance. Companies could use corporate policies and sectoral laws to provide a “layered approach” that would be just as protective. When I wrote about this for Scientific American in 1999, privacy advocates in the UK predicted a trade war over this, calling it a failure to understand that you can’t cut a deal with a fundamental right – like the First Amendment.

In early 2013, it looked entirely possible that the period of negotiations over data protection reform would end with rollback. GDPR was the focus of intense lobbying efforts. There were, literally, 4,000 proposed amendments, so many that I recall being shown software written to manage and understand them all.

And then…Snowden. His revelations of government spying shifted the mood noticeably, and, under his shadow, when GDPR was finally adopted in 2016 and came into force in 2018, it expanded citizens’ rights and increased penalties for non-compliance. Since then, jurisdictions around the world have used GDPR as a model, including China and several US states.

Those few states aside, at the US federal level data protection law has never been popular, and the pile of EU law growing up around GDPR – the Digital Services Act, the Digital Markets Act, and the AI Act – is particularly unwelcome to the current administration, which sees it as a deliberate attack on US technology companies.

In the UK, the Data (Use and Access) Act, which passed in June, also weakened some data protection provisions. It is being implemented in stages over the year to June 2026.

At its blog, the Open Rights Group argues that some aspects of the DUAA rest on the claim that innovation, economic growth, and public security are harmed by data protection law, a dubious premise.

Until this leak, it seemed possible that the DUAA would break Britain’s adequacy decision and remove the UK from the list of countries to which the EU allows data transfers; to qualify, a country must have legal protections essentially equivalent to the EU’s. It would be exactly the wrong way round if, instead of the UK enhancing its law to match the EU’s, the EU weakened its law to match the UK’s.

There’s a whole secondary issue here, which is that a law is only useful if it’s enforced. Noyb actively brings legal cases to force enforcement in the EU. In the UK, privacy advocates, like ORG, have long complained that the Information Commissioner’s Office is increasingly quiescent.

Many of the EU’s changes appear to be aimed at making it easier for AI companies to exploit personal data to develop models. It’s hard to know where that will end, given that every company is sprinkling “AI” over itself in order to sound exciting and new (until the next thing comes along). If this package comes into force, you have to think data protection law will increasingly apply only to small businesses running older technology that can’t be massaged to qualify for an exemption.

I blame this willingness to undermine fundamental rights at least partly on the fantasy of the “AI race”. This is nation-state-level FOMO. What race? What’s the end point? What does it mean to “win”? Why the AI race, and not the net-zero race, the renewables race, or the sustainability race? All of those would produce tangible benefits and solve known problems of long standing and existential impact.

Illustrations: A drunk parrot in a Putney garden (photo by Simon Bisson; used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The absurdity card

Fifteen years ago, a new incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 9/11 attacks in 2001: identity cards. That incoming government was led by David Cameron’s Conservatives, in tandem with Nick Clegg’s Liberal Democrats. The outgoing government was Labour, which had pushed the policy since Tony Blair’s premiership. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since they were scrapped at the end of World War II, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions as to how a card could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ed journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995, it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. And although the cost of data storage has continued to plummet, it’s worth paying attention to the chapter on costs: the report estimated the scheme would cost roughly £11 billion.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that real-time online biometric checking would soon be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – a country of 1.3 million that regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well, but they always come up against the fact that people support the *goals* yet never like the thing itself once they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Post Office Horizon disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of far cheaper and less invasive solutions for that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying it will offer citizens “countless benefits” in streamlining access to key services – echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads: immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Drought conditions

At 404 Media, Matthew Gault was first to spot a press release from the UK’s National Drought Group offering a list of things we can do to save water. The group’s concern makes sense: people think of the UK as a rainy country, but more and more of it is experiencing extraordinarily dry weather. This “green and pleasant England” is brown.

Last on the Group’s list of things we can do to save water at home: “Delete old emails and pictures as data centres require vast amounts of water to cool their systems.”

I had to look up the National Drought Group. Says Water Magazine: “The National Drought Group includes the Met[eorological] Office, government, regulators, water companies, farmers, the [Canal and River Trust], angling groups and conservation experts. With further warm, dry weather expected, the NDG will continue to meet regularly to coordinate the national response and safeguard water supplies for people, agriculture, and the environment.”

For those outside the UK: its ten water companies are particularly unpopular just now. Created by privatization during Margaret Thatcher’s decade as prime minister, six are being sued for £500 million for “underreporting sewage spills”. Others are being sued for overcharging 35 million household water customers. As just one example, Thames Water will raise prices by 35% over the next three years (on top of other recent rises), and expects customers to pay £7.5 billion for a new reservoir in Oxfordshire. It already has £17 billion in debt, and this week we learned environment secretary Steve Reed has made contingency plans in case the company goes bust. As George Monbiot writes at the Guardian, money that should have been invested in infrastructure went instead to shareholders. Climate change is a factor, sure, but so is poor water management.

All this being the case, the impact consumers can have by doing even the most effective things is dwarfed by the water companies’ failures. Deleting emails is not one of the most effective things.

At his Substack, The Weird Turn Pro, Andy Masley provides some useful comparisons. Basic conclusion: you’d have to delete billions of emails to equal the savings of fixing your leaking toilet (if you have one). The whole thing reminds me of a while back, when everyone was being told to save electricity by unplugging everything to extinguish all those standby lights. Last year, Which? pointed out that the savings are really, really small.
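Masley’s point is arithmetical, so here’s a rough back-of-envelope sketch of the comparison in Python. Every number in it is an illustrative assumption, not Masley’s figure – though the leak rate is in the range UK water companies cite for a badly leaking toilet.

```python
# Back-of-envelope: water "saved" by deleting stored email vs. fixing a leak.
# ALL figures are illustrative assumptions for the shape of the calculation.

LEAK_LITRES_PER_DAY = 300        # assumed leaking toilet; often cited at 200-400 L/day
LITRES_PER_EMAIL_PER_YEAR = 2e-4 # assumed annual data-centre cooling share per stored email

leak_per_year = LEAK_LITRES_PER_DAY * 365          # ~110,000 litres/year
emails_equivalent = leak_per_year / LITRES_PER_EMAIL_PER_YEAR

print(f"Leaking toilet: ~{leak_per_year:,.0f} litres/year")
print(f"Stored emails with the same water footprint: ~{emails_equivalent:,.0f}")
# With these assumptions: roughly half a billion emails to match one leaky toilet.
```

However you vary the per-email assumption, the orders of magnitude stay absurdly far apart, which is Masley’s point.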

The bizarre idea of deleting emails is coming, at least in part, from a government that is proposing a raft of technology-related legislation and wants, in the next five to ten years, to mastermind all sorts of IT projects, from making AI pervasive throughout government to bringing in a digital ID card. Are they thinking about the data centers they’ll need and the impact they’ll have on water management? Maybe instead tell people not to use generative AI or mine cryptocurrencies?

This much is true: data centers are a problem across the world because they require extreme amounts of water for cooling. In recent examples: at the New York Times, Eli Tan visits the US state of Georgia. At Rest of World, last year Ushar Daniele and Khadija Alam predicted upcoming water shortages in Malaysia, and Claudia Urquieta and Daniela Dib found protests in Chile, where 28 new data centers are planned.

Telling people to delete emails and pictures is just embarrassing – and sad, if people actually do it and sacrifice personal history they care about. As Masley writes, “Major governments should really know better than this.”

***

Two weeks ago we noted the arrival of age verification in the UK. Relatedly, on May 8 the Wikimedia Foundation announced it had filed a legal challenge to the categorization provisions of the Online Safety Act (not the Act itself). The basic problem: there is little in the Act to distinguish between Wikipedia, a crowd-edited provider of highly curated information, and Facebook…or X.

The Foundation says nearly 260,000 volunteers worldwide in 300 languages contribute to Wikipedia. I do myself, but verified or not, I’m in no danger. Many are contributing factual information in countries where the facts offend an authoritarian government intent on shutting them up. The Foundation argues that 1) Wikipedia is “one of the world’s most trusted and widely used digital public goods”; 2) it is at risk of being placed in the highest-risk category because of its size and interactive structure; 3) being so categorized would force it to verify the identity of contributors, placing many at risk; 4) categorization could endanger the existence of tools the site uses to combat harmful content; and 5) “criminal anonymous abuse”, which is what the Category 1 duty is supposed to help solve, isn’t a problem Wikipedia has. Instead, identifying volunteers is more likely to expose them to it.

So bad news: on August 11, the High Court of Justice dismissed the case.

The better news is that Justice Jeremy Johnson warned that if Ofcom does place Wikipedia in Category 1, it would have to be justifiable as proportionate. The judge also acknowledged the testimony of a user identified as “BLN”, who provided evidence of the extensive threats editors can face.

No one claims Wikipedia is perfect. But it remains an extraordinary collaborative achievement and a public good. It would be a horrifying consequence if legislation intended to protect children deprived them of it.

Illustrations: Kew Green, August 2025.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Conundrum

It took me six hours of listening to people with differing points of view discuss AI and copyright at a workshop, organized by the Sussex Centre for Law and Technology at the Sussex Humanities Lab (SHL), to come up with a question that seemed to me significant: what is all this talk about who “wins the AI race”? The US won the “space race” in 1969, and then for 50 years nothing happened.

Fretting about the “AI race”, an argument at least one participant used to oppose restrictions on using copyrighted data for training AI models, is buying into several ideas that are convenient for Big Tech.

One: there is a verifiable endpoint everyone’s trying to reach. That isn’t anything like today’s “AI”, which is a pile of math and statistics predicting the most likely answers to prompts. Instead, they mean artificial general intelligence, which would be as much like generative AI as I am like a mushroom.

Two: it’s a worthy goal. But is it? Why don’t we talk about the renewables race, the zero carbon race, or the sustainability race? All of those could be achievable. Why just this well-lobbied fantasy scenario?

Three: we should formulate public policy to eliminate “barriers” that might stop us from winning it. *This* is where we run up against copyright, a subject only a tiny minority used to care about, but that now affects everyone. And, accordingly, everyone has had time to formulate an opinion since the Internet first challenged the historical operation of intellectual property.

The law as it stands is clear: making a copy is the exclusive right of the rightsholder. This is the basis of AI-related lawsuits. For training data to escape that law, it would have to be granted an exemption: be ruled fair use (as in the Anthropic and Meta cases), be covered by a new exception for temporary copies, or be shoehorned into existing exceptions such as parody. Even then, copyright law is administered territorially, so the US may call it fair use but the rest of the world doesn’t have to agree. This is why the esteemed legal scholar Pamela Samuelson has said copyright law poses an existential threat to generative AI.

But, as one participant pointed out, although the entertainment industry dominates these discussions, there are many other sectors with different needs. Science, for example, both uses and studies AI, and is built on massive amounts of public funding. Surely that data should be free to access?

I wanted to be at this meeting because what should happen with AI, training data, and copyright is a conundrum. You do not have to work for a technology company to believe that there is value in allowing researchers both within and outwith companies to work on machine learning and build AI tools. When people balk at the impossible scale of securing permission from every copyright holder of every text, image, or sound, they have a point. The only organizations that could afford that are the companies we’re already mad at for being too big, rich, and powerful.

At the same time, why should we allow those big, rich, powerful companies to plunder our cultural domain without compensating anyone and extract even larger fortunes while doing it? To a published author who sees years of work reflected in a chatbot’s split-second answer to a prompt, it’s lost income and readers.

So for months, as Parliament wrangled over the Data bill, the argument narrowed to copyright. Should there be an exception for data mining? Should technology companies have to get permission from creators and rights holders? Or should use of their work be automatically allowed unless they opt out? All the answers seem equally impossible. Technology companies would have to find every copyright holder of every datum to get permission. Licensing by the billion.

If creators must opt out, does that mean one piece at a time? How will they know when they need to opt out and whom they have to notify? That was the point in the meeting at which someone said that the US and China won’t do this, and Britain will fall behind internationally. Does that matter?

And yet, we all seemed to converge on this: copyright is the wrong tool. As one person said, technologies that threaten the entertainment industry always bring demands to tighten or expand copyright. See the last 35 years, in which Internet-fueled copying spawned the Digital Millennium Copyright Act and the EU Copyright Directive, and copyright terms expanded from 28 years, renewable once, to author’s life plus 70.

No one could suggest what the right tool would be. But there are good questions. Such as: how do we grant access to information? With business models breaking, is copyright still the right way to compensate creators? One of us believed strongly in the capabilities of collection societies – but these tend to disproportionately benefit the most popular creators, who will survive anyway.

Another proposed the highly uncontroversial idea of taxing the companies. Or levies on devices such as smartphones. I am dubious on this one: we have been there before.

And again, who gets the money? Very successful artists like Paul McCartney, who has been vocal about this? Or do we have a broader conversation about how to enable people to be artists? (And then, inevitably, who gets to be called an artist.)

I did not find clarity in all this. How to resolve generative AI and copyright remains complex and confusing. But I feel better about not having an answer.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A thousand small safety acts

“The safest place in the world to be online.”

I think I remember that slogan from Tony Blair’s 1990s government, when it primarily related to ecommerce. It morphed into child safety – for example, in 2010, when the first Digital Economy Act was passed, or 2017, when the Online Safety Act, passed in 2023 and entering into force in March 2025, was but a green paper. Now, Ofcom is charged with making it reality.

As prior net.wars posts attest, the 2017 green paper began with the idea that social media companies could be forced to pay, via a levy, for the harm they cause. The key remaining element of that is a focus on the large, dominant companies. The green paper nodded toward designing proportionately for small businesses and startups. But the large platforms pull the attention: rich, powerful, and huge. The law that’s emerged from these years of debate takes in hundreds of thousands of divergent services.

On Mastodon, I’ve been watching lawyer Neil Brown scrutinize the OSA with a particular eye on its impact on the wide ecosystem of what we might call “the community Internet” – the thousands of web boards, blogs, chat channels, and who-knows-what-else with no business model because they’re not businesses. As Brown keeps finding in his attempts to provide these folks with tools they can use, they are struggling to understand and comply with the act.

First things first: everyone agrees that online harm is bad. “Of course I want people to be safe online,” Brown says. “I’m lucky, in that I’m a white, middle-aged geek. I would love everyone to have the same enriching online experience that I have. I don’t think the act is all bad.” Nonetheless, he sees many problems with both the act itself and how it’s being implemented. In his contacts with organizations critiquing the act, he’s been surprised to find how many agree with him about the problems for small services. However, “Very few agreed on which was the worst bit.”

Brown outlines two classes of problem: the act is “too uncertain” for practical application, and the burden of compliance is “too high for insufficient benefit”.

Regarding the uncertainty, his first question is, “What is a user?” Is someone who reads net.wars a user, or just a reader? Do they become a user if they post a comment? Do they start interacting with the site when they read a comment, when they make one, or only when they reply to another user’s comment? In the fediverse, is someone who reads postings he makes via his private Mastodon instance its user? Is someone who replies to that posting from a different instance a user of his instance?

His instance has two UK users – surely insignificant. Parliament didn’t set a threshold for the “significant number of UK users” that brings a service into scope, so Ofcom says it has no answer to that question. But if you go by percentage, 100% of his user base is in Britain. Does that make Britain his “target market”? Does having a domain name in the UK namespace? What is a target market for the many community groups running infrastructure for free software projects? They just want help with planning, or translation; they’re not trying to sign up users.

Regarding the burden, the act requires service providers to perform a risk assessment for every service they run. A free software project will probably have a dozen or so – a wiki, messaging, a documentation server, and so on. Brown, admittedly not your average online participant, estimates that he himself runs 20 services from his home. Among them is a photo-sharing server, for which the law would have him write contractual terms of service for the only other user – his wife.

“It’s irritating,” he says. “No one is any safer for anything that I’ve done.”

So this is the mismatch. The law and Ofcom imagine a business with paid staff signing up users to profit from them. What Brown encounters is more like a stressed-out woman managing a small community for fun after she puts the kids to bed.

Brown thinks a lot could be done to make the act less onerous for the many sites that are clearly not the problem Parliament was trying to solve. Among them: carve out low-risk services. This isn’t just a question of size, since a tiny terrorist cell or a small ring sharing child sexual abuse material can pose acres of risk. But Brown thinks it shouldn’t be too hard to come up with criteria to rule services out of scope, such as a limited user base coupled with a service “any reasonable person” would consider low risk.

Meanwhile, he keeps an In Memoriam list of the law’s casualties to date. Some have managed to move or find new owners; others are simply gone. Not on the list are non-UK sites that now simply block UK users. Others, as Brown says, just won’t start up. The result is an impoverished web for all of us.

“If you don’t want a web dominated by large, well-lawyered technology companies,” Brown sums up, “don’t create a web that squeezes out small low-risk services.”

Illustrations: Early 1970s cartoon illustrating IT project management.

Wendy M. Grossman is an award-winning journalist. Her Web site has extensive links to her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Negative externalities

A sheriff’s office in Texas searched a giant nationwide database of license plate numbers captured by automatic cameras to look for a woman they suspected of self-managing an abortion. As Rindala Alajazi writes at EFF, that’s 83,000 cameras in 6,809 networks belonging to Flock Safety, many of them in states where abortion is legal or protected as a fundamental right until viability.

We’ve known something like this was coming ever since 2022, when the US Supreme Court overturned Roe v. Wade and returned the power to regulate abortion to the individual US states. The resulting unevenness made it predictable that the strongest opponents to legal abortion would turn their attention to interstate travel.

The Electronic Frontier Foundation has been warning for some time about Flock’s database of camera-captured license plates. Recently, Jason Koebler reported at 404 Media that US Immigration and Customs Enforcement has been using Flock’s database to find prospects for deportation. Since ICE does not itself have a contract with Flock, it’s been getting local law enforcement to perform searches on its behalf. “Local” refers only to the law enforcement personnel; they have access to camera data that’s shared nationally.

The point is that once the data has been collected it’s very hard to stop mission creep. On its website, Flock says its technology is intended to “solve and eliminate crime” and “protect your community”. That might have worked when we all agreed what was a crime.

***

A new MCTD Cambridge report makes a similar point about menstrual data, when sold at scale. Now, I’m from the generation that managed fertility with a paper calendar, but time has moved on, and fertility tracking apps allow a lot more of the self-quantification that can be helpful in many situations. As Stephanie Felsberger writes in introducing the report, menstrual data is highly revealing of all sorts of sensitive information. Privacy International has studied period-tracking apps, and found that they’ve improved but still pose serious privacy risks.

On the other hand, I’m not so sure about the MCTD report’s third recommendation – that the government build a public tracker app within the NHS. The UK doesn’t have anything like the kind of divisive rhetoric around abortion that the US does, but the fact remains that legal abortion is a 1967 carve-out from an 1861 law. In the UK, procuring an abortion is criminal *except* during the first 24 weeks, or if the mother’s life is in danger, or if the fetus has a serious abnormality. And even then, sign-off is required from two doctors.

Investigations and prosecutions of women under that 1861 law have been rising, as Shanti Das reported at the Guardian in January. Pressure in the other direction from US-based anti-choice groups such as Alliance Defending Freedom has also been rising. For years it’s seemed like this was a topic no one really wanted to reopen. Now, health care providers are calling for decriminalization, and, as Hannah Al-Othman reported this week, there are two such proposals currently in front of Parliament.

Also relevant: a month ago, Phoebe Davis reported at the Observer that in January the National Police Chiefs’ Council quietly issued guidance advising officers, in cases of stillbirth, to search homes for drugs that can cause abortion, and to seize and examine devices to check Internet searches, messages, and health apps to “establish a woman’s knowledge and intention in relation to the pregnancy”. There was even advice on how to bypass the requirement for a court order to access women’s medical records.

In this context, it’s not clear to me that a publicly owned app is much safer or more private than a commercial one. What’s needed is thoroughly auditable open source code that keeps all data on the device itself, encrypted, in a segregated storage space over which the user has control. And even then…you know, paper had a lot of benefits.
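Since that’s a description of an architecture, here is a minimal sketch of the on-device pattern in Python, using the third-party cryptography package. The file locations are illustrative, and a real app would keep the key in the platform’s secure keystore rather than in a file beside the data; the point is simply that entries are encrypted at rest and nothing ever touches a network.

```python
# Sketch: local-only, encrypted-at-rest storage. Illustrative, not a real app.
import json
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

DATA_DIR = Path.home() / ".tracker"   # hypothetical app-private directory
STORE = DATA_DIR / "entries.enc"      # ciphertext only ever lives here
KEY_FILE = DATA_DIR / "key"           # a real app would use the OS keystore instead

def _key() -> bytes:
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    if not KEY_FILE.exists():
        KEY_FILE.write_bytes(Fernet.generate_key())
    return KEY_FILE.read_bytes()

def load_entries() -> list[dict]:
    if not STORE.exists():
        return []
    return json.loads(Fernet(_key()).decrypt(STORE.read_bytes()))

def save_entries(entries: list[dict]) -> None:
    STORE.write_bytes(Fernet(_key()).encrypt(json.dumps(entries).encode()))

entries = load_entries()
entries.append({"date": "2025-06-01", "note": "example entry"})
save_entries(entries)   # note: no network imports anywhere in this file
```

The property to audit for is a negative one – no network code, no analytics, no sync – and that’s far easier to verify in a small open source codebase than in a closed commercial app.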

***

This week the UK Parliament passed the Data (Use and Access) bill, which now just needs a royal signature to become law. At its site, the Open Rights Group summarizes the worst provisions, mostly a list of ways the bill weakens citizens’ rights over their data.

Brexit was sold to the public on the basis of taking back national sovereignty. But, as then-MEP Felix Reda said the morning after the vote, national sovereignty is a fantasy in a globalized world. Decisions about data privacy can’t be made imagining they are only about *us*.

As ORG notes, the bill has led European Digital Rights to write to the European Commission asking for a review of the UK’s adequacy status. This decision, granted in 2021, was due to expire in June 2025, but the Commission granted a six-month extension to allow the bill’s passage to complete. In 2019, when the UK was at peak Brexit chaos and it seemed possible that the then-Conservative government would allow the UK to leave the EU with no deal in place, net.wars noted the risk to data flows. The current Labour government, with its AI and tech policy ambitions, ought to be more aware of the catastrophe losing adequacy would present. And yet.

Illustrations: Map from the Center for Reproductive Rights showing the current state of abortion rights across the US.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast and a regular guest on the TechGrumps podcast. Follow on Mastodon or Bluesky.

Nephology

For an hour yesterday (June 5, 2025), we were treated to the spectacle of the US House Judiciary Committee, both Republicans and Democrats, listening – really listening, it seemed – to four experts defending strong encryption. The four: technical expert Susan Landau and lawyers Caroline Wilson-Palow, Richard Salgado, and Gregory Nojeim.

The occasion was a hearing on the operation of the Clarifying Lawful Overseas Use of Data Act (2018), better known as the CLOUD Act. It was framed as collecting testimony on “foreign influence on Americans’ data”. More precisely, the inciting incident was a February 2025 Washington Post article revealing that the UK’s Home Office had issued Apple with a secret demand that it provide backdoor law enforcement access to user data stored using the Advanced Data Protection encryption feature it offers for iCloud. This type of demand, issued under S253 of the Investigatory Powers Act (2016), is known as a “technical capability notice”, and disclosing its existence is a crime.

The four were clear, unambiguous, and concise, incorporating the main points made repeatedly over the last 35 years. Backdoors, they all agreed, imperil everyone’s security; there is no such thing as a hole only “good guys” can use. Landau invoked Salt Typhoon and, without ever saying “I warned you at the time”, reminded lawmakers that the holes in the telecommunications infrastructure they mandated in 1994 became a cybersecurity nightmare in 2024. All four agreed that with so much data being generated by all of us every day, encryption is a matter of national security as well as privacy. Referencing the FBI’s frequent claim that its investigations are going dark because of encryption, Nojeim dissented: “This is the golden age of surveillance.”

The lawyers jointly warned that other countries, such as Canada and Australia, have comparable provisions in their national legislation that they could invoke. They made sensible suggestions for updating the CLOUD Act to set higher standards for nations signing up to data sharing: set criteria for laws and practices that they must meet; set criteria for what orders can and cannot do; and specify additional elements countries must include. The Act could also be amended to protect encryption, on which it is currently silent.

The lawmakers reserved particular outrage for the UK’s audacity in demanding that Apple provide that backdoor access for *all* users worldwide. In other words, *Americans*.

Within the UK, a lot has happened since that February article. Privacy advocates and other civil liberties campaigners spoke up in defense of encryption. Apple soon withdrew ADP in the UK. In early March, the UK government and security services removed advice to use Apple encryption from their websites – a responsible move, but indicative of the risks Apple was being told to impose on its users. A closed-to-the-public hearing was scheduled for March 14. Shortly before it, Privacy International, Liberty, and two individual claimants filed a complaint with the Investigatory Powers Tribunal asking for the hearing to be held in public and disputing the lawfulness, necessity, and secrecy of TCNs in general. Separately, Apple appealed against the TCN.

On April 7, the IPT released a public judgment summarizing the more detailed ruling it provided only to the UK government and Apple. Short version: it rejected the government’s claim that disclosing the basic details of the case will harm the public interest. Both this case and Apple’s appeal continue.

As far as the US is concerned, however, that’s all background noise. The UK’s claim to be able to compel the company to provide backdoor access worldwide seems to have taken Congress by surprise, but a day like this has been on its way ever since the UK included extraterritorial powers in the Data Retention and Investigatory Powers Act (2014). At the time, no one could imagine how it would enforce this novel claim, but it was clearly something other governments were going to want, too.

This Judiciary Committee hearing was therefore a festival of ironies. For one thing, the US’s own current administration is hatching plans to merge government departments’ carefully separated databases into one giant profiling machine for US citizens. Second, the US has always regarded foreigners as less deserving of human rights than its own citizens; the notion that another country similarly privileges itself went down hard.

More germane, subsidiaries of US companies remain subject to the PATRIOT Act, under which, as the late Caspar Bowden pointed out long ago, the US claims the right to compel them to hand over foreign users’ data. The CLOUD Act itself was passed in response to Microsoft’s refusal to violate Irish data protection law by fulfilling a New York district judge’s warrant for data relating to an Irish user. US intelligence access to European users’ data under the PATRIOT Act has been the big sticking point that activist lawyer Max Schrems has used to scuttle a succession of US-EU data sharing arrangements under GDPR. Another may follow soon: in January, the incoming Trump administration fired most of the Privacy and Civil Liberties Oversight Board, which is tasked with protecting Europeans’ rights under the latest such deal.

But, never mind. Feast, for a moment, on the thought of US lawmakers hearing, and possibly willing to believe, that encryption is a necessity that needs protection.

Illustrations: Gregory Nojeim, Richard Salgado, Caroline Wilson-Palow, and Susan Landau facing the Judiciary Committee on June 5, 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Sovereign

On May 19, a group of technologists, researchers, economists, and scientists published an open letter calling on British prime minister Keir Starmer to prioritize the development of “sovereign advanced AI capabilities through British startups and industry”. I am one of the many signatories. Britain’s best shot at the kind of private AI research lab under discussion was DeepMind, sold to Google in 2014; the country has nothing now that’s domestically owned.

Those with long memories know that LEO was the first computer used for a business application – running Lyons tea rooms. In the 1980s, Britain led personal computing.

But the bigger point is less about AI specifically and more about information technology generally. At a panel at Computers, Privacy, and Data Protection in 2022, the former MEP Jan Philipp Albrecht, who was the European Parliament’s rapporteur for the General Data Protection Regulation, outlined his work building up cloud providers and local hardware as the Minister for Energy, Agriculture, the Environment, Nature and Digitalization of Schleswig-Holstein. As he explained, the public sector loses a great deal when it takes the seemingly easier path of buying proprietary software and services. Among the lost opportunities: building capacity and sovereignty. While his organization used services from all over the world, it set its own standards, one of which was that everything must be open source.

As the events of recent years are making clear, proprietary software fails if you can’t trust the country it’s made in, since you can’t wholly audit what it does. Even more important, once a company is bedded in, it can be very hard to excise it if you want to change supplier. That “customer lock-in” is, of course, a long-running business strategy, and it doesn’t only apply to IT. If we’re going to spend large sums of money on IT, there’s some logic to investing it in building up local capacity; one of the original goals in setting up the Government Digital Service was shifting to smaller, local suppliers instead of automatically turning to the largest and most expensive international ones.

The letter calls relying on US technology companies and services a “national security risk”. Elsewhere, I have argued that we must find ways to build trusted systems out of untrusted components, but the problem here is more complex because of the sensitivity of government data. Both the US and China have the right to command access to data stored by their companies, and the US in particular does not grant foreigners even the few privacy rights it grants its citizens.

It’s also long past time for countries to stop thinking in terms of “winning the AI race”. AI is an umbrella term that has no single meaning. Instead, it would be better to think in terms of there being many applications of AI, and trying to build things that matter.

***

As predicted here two years ago, AI models are starting to collapse, Steven J. Vaughan-Nichols writes at The Register.

The basic idea is that as the web becomes polluted with synthetically-generated data, the quality of the data used to train the large language models degrades, so the models themselves become less useful. Even without that, the AI-with-everything approach many search engines are taking is poisoning their usefulness. Model collapse just makes it worse.

We would point out to everyone frantically adding “AI” to their services that the historical precedents are not on their side. In the late 1990s, every site felt it had to be a portal, so they all had search, and weather, and news headlines, and all sorts of crap that made it hard to find the search results. The result? Google disrupted all that with a clean, white page with no clutter (those were the days). Users all switched. Yahoo is the most obvious survivor from that period, and I think it’s because it does have some things – notably financial data – that it does extremely well.

It would be more satisfying to be smug about this, but the big issue is that companies keep spraying toxic pollution over the services we all need to be able to use. How bad does it have to get before they stop?

***

At Privacy Law Scholars this week, in a discussion of modern corporate oligarchs and their fantasies of global domination, an attendee asked if any of us had read the terms of service for Starlink. She wanted to draw our attention to the following passage, under “Governing Law”:

For Services provided to, on, or in orbit around the planet Earth or the Moon, this Agreement and any disputes between us arising out of or related to this Agreement, including disputes regarding arbitrability (“Disputes”) will be governed by and construed in accordance with the laws of the State of Texas in the United States. For Services provided on Mars, or in transit to Mars via Starship or other spacecraft, the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities. Accordingly, Disputes will be settled through self-governing principles, established in good faith, at the time of Martian settlement.

Reminder: Starlink has contracts worth billions of dollars to provide Internet infrastructure in more than 100 countries.

So who’s signing this?

Illustrations: The Martian (Ray Walston) in the 1963-1966 TV series My Favorite Martian.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Vassal State

Vassal State: How America Runs Britain
by Angus Hanton
Swift Press
978-1-80075390-7

Tax organizations estimate that a bit under 200,000 expatriate Americans live in the UK. It’s only a tiny percentage of the overall population of 70 million, but of course we’re not evenly distributed. In my bit of southwest London, the (recently abruptly shuttered due to rising costs) butcher has advertised “Thanksgiving turkeys” for more than 30 years.

In Vassal State, however, Angus Hanton shows that US interests permeate and control the UK in ways far more significant than a handful of expatriates. This is not, he stresses, an equal partnership, despite the perennial photos of the British prime minister being welcomed to the White House by the sitting president, as shown satirically in 1986’s Yes, Prime Minister. Hanton cites the 2020 decision to follow the US and ban Huawei as an example, writing that the US pressure at the time “demonstrated the language of partnership coupled with the actions of control”. Obama staffers, he is told, used to joke about the “special relationship”.

Why invade when you can buy and control? Hanton lists a variety of vectors for US influence. Many of Britain’s best technology startups wind up sold to US companies, permanently alienating their profits – see, for example, DeepMind, sold to Google in 2014, and Worldpay, sold to Vantiv, which then took its name. US buyers also target long-established companies, such as 176-year-old Boots, which since 2014 has been part of Walgreens and is now being bought up by the Sycamore Partners private equity fund. To Americans, this may not seem like much, but Boots is a national icon and an important part of delivering NHS services such as vaccinations. No one here voted for Sycamore Partners to benefit from that, nor did they vote for Kraft to buy Cadbury in 2010 and abandon the Bournville headquarters of a company whose roots go back to 1824.

In addition, US companies are burrowed into British infrastructure. Government ministers communicate with each other over WhatsApp. Government infrastructure is supplied by companies like Oracle and IBM, and, lately, Palantir, which are hard to dig out once embedded. A seventh of the workforce are precariously paid by the US-dominated gig economy. The vast majority of cashless transactions pay a slice to Visa or Mastercard. And American companies use the roads, local services, and other infrastructure while paying less in tax than their UK competition. More controversially for digital rights activists, Hanton complains about the burden that US-based streamers like Netflix, Apple, and Amazon place on the telecommunications networks. Among the things he leaves out: the technology platforms in education.

Hanton’s book comes at a critical moment. Previous administrations have perhaps been more polite about demanding US-friendly policies, but now Britain, on its own outside the EU, is facing Donald Trump’s more blatant demands. Among them: that suppliers to the US government comply with its anti-DEI policies. In countries where diversity, equity, and inclusion are fundamental rights, the US is therefore demanding that its law should take precedence.

In a timeline fork in which Britain remained in the EU, it would be in a much better position to push back. In *this* timeline, Hanton’s proposed remedies – reform the tax structure, change policies, build technological independence – are much harder to implement.

Optioned

The UK’s public consultation on creating a copyright exception for AI model training closed on Tuesday, and it was profoundly unsatisfying.

Many, many creators and rights holders (who are usually on opposing sides when it comes to contract negotiations) have opposed the government’s proposals. Every national newspaper ran the same Make It Fair front page opposing them; musicians released a silent album. In the Guardian, the peer and independent filmmaker Beeban Kidron calls the consultation “fixed” in favor of the AI companies. Kidron’s resume includes directing Bridget Jones: The Edge of Reason (2004) and the meticulously researched 2013 study of teens online, InRealLife, and she goes on to call the government’s preferred option a “wholesale transfer of wealth from hugely successful sector that invests hundreds of millions in the UK to a tech industry that extracts profit that is not assured and will accrue largely to the US and indeed China.”

The consultation lists four options: leave the situation as it is; require AI companies to get licenses to use copyrighted work (like everyone else has to); allow AI companies to use copyrighted works however they want; and allow AI companies to use copyrighted works but grant rights holders the right to opt out.

I don’t like any of these options. I do believe that creators will figure out how to use AI tools to produce new and valuable work. I *also* believe that rights holders will go on doing their best to use AI to displace or impoverish creators. That is already happening in journalism and voice acting, and was a factor in the 2023 Hollywood writers’ strike. AI companies have already shown that they won’t necessarily abide by arrangements that lack the force of law. The UK government acknowledged this in its consultation document, saying that “more than 50% of AI companies observe the longstanding Internet convention robots.txt.” So almost half of them *don’t*.
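For anyone who hasn’t met it, robots.txt is a purely voluntary convention: a site publishes rules, and a well-behaved crawler checks them before fetching anything. Nothing enforces compliance, which is exactly what that statistic concedes. A short sketch using Python’s standard library shows the mechanics; the AI-crawler user-agent name here is illustrative.

```python
# How the robots.txt convention works: the crawler must choose to ask.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching; a non-compliant one just fetches.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))   # False
print(rp.can_fetch("SomeSearchBot", "https://example.com/article"))  # True
```

Opting out via robots.txt, in other words, is a request, not a lock – which is why arrangements lacking the force of law keep failing creators.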

At Pluralistic, Cory Doctorow argued in February 2023 that copyright won’t solve the problems facing creators. His logic is simple: after 40 years of expanding copyright terms (from a maximum of 56 years in 1975 to “author’s life plus 70” now), creators are being paid *less* than they were then. Yes, I know Taylor Swift has broken records for tour revenues and famously took back control of her own work, but millions of others need, as Doctorow writes, structural market changes. Doctorow highlights what happened with sampling: the copyright maximalists won, and now musicians are required to sign away sampling rights to their labels, who pocket the resulting royalties.

For this sort of reason, the status quo, which the consultation calls “option 0”, seems likely to open the way to lots more court cases and conflicting decisions, but provide little benefit to anyone. A licensing regime (“option 1”) will likely go the way of sampling. If, like Vladan Joler at last year’s Computers, Privacy, and Data Protection conference, you think of AI companies as inevitably giant “pre-monopolized” outfits, “option 2” looks like simply making them richer and more powerful at the expense of everyone else in the world. But so does “option 3”, since that *also* gives AI companies the ability to use anything they want. Large rights holders will opt out and demand licensing fees, which they will keep, and small ones will struggle to exercise their rights.

As Kidron said, the government’s willingness to take chances with the country’s creators’ rights is odd, since intellectual property is a sector in which Britain really *is* a world leader. On the other hand, as Glyn Moody says (of whom more below), all of it together is an anthill compared to the technology sector.

None of these choices is a win for creators or the public. The government’s preferred option 3 seems unlikely to achieve its twin goals of making Britain a world leader in AI and mainlining AI into the veins of the nation, as the government put it last month.

China and the US both have complete technology stacks *and* gigantic piles of data. The UK is likely better able to matter in AI development than many countries – see for example DeepMind, which was founded here in 2010. On the other hand, also see DeepMind for the probable future: Google bought it in 2014, and now its technology and profits belong to that giant US company.

At Walled Culture, Glyn Moody argued last May that requiring the AI companies to pay copyright industries makes no sense; he regards using creative material for training purposes as “just a matter of analysis” that should not require permission. And, he says correctly, there aren’t enough such materials anyway. Instead, he and Mike Masnick at Techdirt propose that the generative AI companies should pay creators of all types – journalists, musicians, artists, filmmakers, book authors – to provide them with material they can use to train their models, and the material so created should be placed in the public domain. In turn it could become new building blocks the public can use to produce even more new material. As a model for supporting artists, patronage is old.

I like this effort to think differently a lot better than any of the government’s options.

Illustrations: Tuesday’s papers, unprecedentedly united to oppose the government’s copyright plan.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.