The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the telling bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer himself has been in India this week, taking the opportunity to study its biometric ID system, Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. Trying to get it through, Blair moved on to preventing illegal working and curbing identity theft. For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (referring to the Post Office Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01e04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The absurdity card

Fifteen years ago, a new incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 2001 9/11 attacks: identity cards. That incoming government was led by David Cameron’s conservatives, in tandem with Nick Clegg’s liberal democrats. The outgoing government was Tony Blair’s. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since they were scrapped at the end of World War II, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions as to how it could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ED journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995, it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. Although the cost of data storage has continued to plummet since, it’s still worth paying attention to the chapter on costs, which the report estimated at roughly £11 billion.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that soon real time online biometric checking would be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – for a population of 1.3 million who regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well but always come up against the fact that people support the *goals*, but never like the thing when they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Horizon / Post Office disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of a far cheaper and less invasive solution for that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying the card will offer citizens “countless benefits” in streamlining access to key services, echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads. Immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia.)


Email to Ofgem

So, the US has claimed victory against the UK.

Regular readers may recall that in February the UK’s Home Office secretly asked Apple to put a backdoor in the Advanced Data Protection encryption it offers as a feature for iCloud users. In March, Apple challenged the order. The US objected to the requirement that the backdoor should apply to all users worldwide. How dare the Home Office demand the ability to spy on Americans?

On Tuesday, US director of national intelligence Tulsi Gabbard announced the UK is dropping its demand for the backdoor in Apple’s encryption “that would have enabled access to the protected encrypted data of American citizens”. The key here is “American citizens”. The announcement – which the Home Office is refusing to comment on – ignores everyone else and also the requirement for secrecy. It’s safe to say that few other countries would succeed in pressuring the UK in this way.

As Bill Goodwin reports at Computer Weekly, the US deal does nothing to change the situation for people in Britain or elsewhere. The Investigatory Powers Act (2016) is unchanged. As Parmy Olson writes at Bloomberg, the Home Office can go on issuing Technical Capability Notices to Apple and other companies demanding information on their users, and the criminalization of disclosure will keep the companies silent. The Home Office can still order technology companies operating in the UK to weaken their security. And we will not know they’ve done it. Surprisingly, support for this point of view comes from the Federal Trade Commission, which has posted a letter to companies deploring foreign anti-encryption policy (ignoring how often undermining encryption has been US policy, too) and foreign censorship of Americans’ speech. This is far from over, even in the US.

Within the UK, the situation remains as dangerously uncertain as ever. With all countries interconnected, the UK’s policy risks the security of everyone everywhere. And, although US media may have forgotten, the US has long spied on its citizens by getting another country to do it.

Apple has remained silent, but so far has not withdrawn its legal challenge. Also continuing is the case filed by Privacy International, Liberty, and two individuals. In a recent update, PI says both legal cases will be heard over seven days in 2026 as much as possible in the open.

***

For non-UK folk: The Office of Gas and Electricity Markets (Ofgem) is the regulator for Britain’s energy market. Its job is to protect consumers.

To Ofgem:

Today’s Guardian (and many others) carries the news that Tesla EMEA has filed an application to supply British homes and businesses with energy.

Please do not approve this application.

I am a journalist who has covered the Internet and computer industries for 35 years. As we all know, Tesla is owned by Elon Musk. Quite apart from his controversial politics and actions within the US government, Elon Musk has shown himself to be an unstable personality who runs his companies recklessly. Many who have Tesla cars love them – but the cars have higher rates of quality control problems than those from other manufacturers, and Musk’s insistence on marketing the “Full Self Drive” feature has cost lives according to the US National Highway Traffic Safety Administration, which launched yet another investigation into the company just yesterday. In many cases, when individuals have sought data from Tesla to understand why their relatives died in car fires or crashes, the company has refused to help them. During the covid emergency, thousands of Tesla workers got covid because Musk insisted on reopening the Tesla factory. This is not a company people should trust with their homes.

With Starlink, Musk has exercised his considerable global power by turning off communications in Ukraine while it was fighting off Russian attacks. SpaceX launches continue to crash. According to the children’s commissioner’s latest report, far more children encounter pornography online on Musk’s X than on pornography sites, a problem that has gotten far worse since Musk took it over.

More generally, he is an enemy of workers’ rights. Misinformation on X helped fuel the Southport riots, and Musk himself has considered trying to oust Keir Starmer as prime minister.

Many are understandably awed by his technological ideas. But he uses these to garner government subsidies and undermine public infrastructure, which he then is able to wield as a weapon to suit his latest whims.

Musk is already far too powerful in the world. His actions in the White House have shown he is either unable to understand or entirely uninterested in the concerns and challenges that face people living on sums that to him seem negligible. He is even less interested in – and often actively opposes – social justice, fairness, and equity. No amount of separation between him and Tesla EMEA will be sufficient to counter his control of and influence over his company. Tesla’s board, just weeks ago, voted to award him $30 billion in shares to “energise and focus” him.

Please do not grant him a foothold in Britain’s public infrastructure. Whatever his company is planning, it does not have British interests at heart.

Ofgem is accepting public comments on Tesla’s application until close of business on Friday, August 22, 2025.

Illustration: Artist Dominic Wilcox’s Stained Glass Driverless Sleeper Car.


Big bang

In 2008, when the recording industry was successfully lobbying for an extension to the term of copyright to 95 years, I wrote about a spectacular unfairness that was affecting numerous folk and other musicians. Because of my own history and sometimes present with folk music, I am most familiar with this area of music, which aside from a few years in the 1960s has generally operated outside of the world of commercial music.

The unfairness was this: the remnants of a label that had recorded numerous long-serving and excellent musicians in the 1970s were squatting on those recordings and refusing to either rerelease them or return the rights. The result was both artistic frustration and deprivation of a sorely-needed source of revenue.

One of these musicians is the Scottish legend Dick Gaughan, who had a stroke in 2016 and was forced to give up performing. Gaughan, with help from friends, is taking action: a GoFundMe is raising the money to pay “serious lawyers” to get his rights back. Whether one loved his early music or not – and I regularly cite Gaughan as an important influence on what I play – barring him from benefiting from his own past work is just plain morally wrong. I hope he wins through; and I hope the case sets a precedent that frees other musicians’ trapped work. Copyright is supposed to help support creators, not imprison their work in a vault to no one’s benefit.

***

This has been the first week of requiring age verification for access to online content in the UK; the law came into effect on July 25. Reddit and Bluesky, as noted here two weeks ago, were first, but with Ofcom starting enforcement, many are following. Some examples: Spotify; X (exTwitter); Pornhub.

Two classes of problems are rapidly emerging: technical and political. On the technical side, so far it seems like every platform is choosing a different age verification provider. These AVPs are generally unfamiliar companies in a new market, and we are being asked to trust them with passports, driver’s licenses, credit cards, and selfies for age estimation. Anyone who uses multiple services will find themselves having to widely scatter this sensitive information. The security and privacy risks of this should be obvious. Still, Dan Milmo reports at the Guardian that AVPs are already processing five million age checks a day. It’s not clear yet if that’s a temporary burst of one-time token creation or a permanently growing artefact of repetitious added friction, like cookie banners.

X says it will examine users’ email addresses and contact books to help estimate ages. Some systems reportedly send referring page links, opening the way for the receiving AVP to store these and build profiles. Choosing a trustworthy VPN can be tricky, and these intermediaries are in a position to log what you do and exploit the results.

The BBC’s fact-checking service finds that a wide range of public interest content, including news about Ukraine and Gaza and Parliamentary debates, is being blocked on Reddit and X. Sex workers see adults being locked out of legal content.

Meanwhile, many are signing up for VPNs at pace, as predicted. The spike has led to rumors that the government is considering banning them. This seems unrealistic: many businesses rely on VPNs to secure connections for remote workers. But the idea is alarming; its logical extension is the war on general-purpose computation Cory Doctorow foresaw as a consequence of digital rights management in 2011. A terrible and destructive policy can serve multiple masters’ interests and is more likely to happen if it does.

On the political side, there are three camps. One wants the legislation repealed. Another wants to retain aspects many people agree on, such as criminalizing cyberflashing and some other types of online abuse, and fix its flaws. The third thinks the OSA doesn’t go far enough, and they’re already saying they want it expanded to include all services, generative AI, and private messaging.

More than 466,000 people have signed a petition calling on the government to repeal the OSA. The government responded: thanks, but no. It will “work with Ofcom” to ensure enforcement will be “robust but proportionate”.

Concrete proposals for fixing the OSA’s worst flaws are rare, but a report from the Open Rights Group offers some; it advises an interoperable system that gives users choice and control over methods and providers. Age verification proponents often compare age-gating websites to ID checks in bars and shops, but those don’t require you to visit a separate shop the proprietor has chosen and hand over personal information. At Ctrl-Shift, Kirra Pendergast explains some of the risks.

Surrounding all that is noise. A US lawyer wants to sue Ofcom in a US federal court (huh?). Reform leader Nigel Farage has called for the Act’s repeal, which led technology secretary Peter Kyle to accuse him – and then anyone else who criticizes the act – of being on the side of sexual predators. Kyle told Mumsnet he apologizes to the generation of UK kids who were “let down” by being exposed to toxic online content because politicians failed to protect them all this time. “Never again…”

In other news, this government has lowered the voting age to 16.

Illustrations: The back cover of Dick Gaughan’s out-of-print 1972 first album, No More Forever.


Magic math balls

So many ironies, so little time. According to the Financial Times (and syndicated at Ars Technica), the US government, which itself has traditionally demanded law enforcement access to encrypted messages and data, is pushing the UK to drop its demand that Apple weaken its encryption. Normally, you want to say, Look here, countries are entitled to have their own laws whether the US likes it or not. But this is not a law we like!

This all began in February, when the Washington Post reported that the UK’s Home Office had issued Apple with a Technical Capability Notice. Issued under the Investigatory Powers Act (2016) and supposed to be kept secret, the TCN demanded that Apple undermine the end-to-end encryption used for iCloud’s Advanced Data Protection feature. Much protest ensued, followed by two legal cases in front of the Investigatory Powers Tribunal, one brought by Apple, the other by Privacy International and Liberty. WhatsApp has joined Apple’s legal challenge.

Meanwhile, Apple withdrew ADP in the UK. Some people argued this didn’t really matter, as few used it, which I’d call a failure of user experience design rather than an indication that people didn’t care about it. More of us saw it as setting a dangerous precedent for both encryption and the use of secret notices undermining cybersecurity.

The secrecy of TCNs is clearly wrong and presents a moral hazard for governments that may prefer to keep vulnerabilities secret so they can take advantage for surveillance purposes. Hopefully, the Tribunal will eventually agree and force a change in the law. The Foundation for Information Policy Research (obDisclosure: I’m a FIPR board member) has published a statement explaining the issues.

According to the Financial Times, the US government is applying a sufficiently potent threat of tariffs to lead the UK government to mull how to back down. Even without that particular threat, it’s not clear how much the UK can resist. As Angus Hanton documented last year in the book Vassal State, the US has many well-established ways of exerting its influence here. And the vectors are growing; Keir Starmer’s Labour government seems intent on embedding US technology and companies into the heart of government infrastructure despite the obvious and increasing risks of doing so. When I read Hanton’s book earlier this year, I thought remaining in the EU might have provided some protection, but Caroline Donnelly warns at Computer Weekly that they, too, are becoming dangerously dependent on US technology, specifically Microsoft.

It’s tempting to blame everything on the present administration, but the reality is that the US has long used trade policy and treaties to push other countries into adopting laws regardless of their citizens’ preferences.

***

As if things couldn’t get any more surreal, this week the Trump administration *also* issued an executive order banning “woke AI” in the federal government. AI models are in future supposed to be “politically neutral”. So, as Kevin Roose writes at the New York Times, the culture wars are coming for AI.

The US president is accusing chatbots of “Marxist lunacy”, whereas the rest of the world calls them inaccurate, biased toward repeating and expanding historical prejudices, and inconsistent. We hear plenty about chatbots adopting Nazi tropes; I haven’t heard of one promoting workers’ and migrants’ rights.

If we know one thing about AI models it’s that they’re full of crap all the way down. The big problem is that people are deploying them anyway. At the Canary, Steve Topple reports that the UK’s Department for Work and Pensions admits in a newly-published report that its algorithm for assessing whether benefit claimants might commit fraud is ageist and racist. A helpful executive order would set must-meet standards for *accuracy*. But we do not live in those times.

The Guardian reports that two more Trump EOs expedite building new data centers, promote exports of American AI models, expand the use of AI in the federal government, and intend to solidify US dominance in the field. Oh, and Trump would really like it if people would stop calling it “artificial” and find a new name. Seven years ago, “aspirational intelligence” seemed like a good idea. But that was back when we heard a lot about incorporating ethics. So…”magic math ball”?

These days, development seems to proceed ethics-free. DWP’s report, for example, advocates retraining its flawed algorithm but says continuing to operate it is “reasonable and proportionate”. In 2021, for European Digital Rights (EDRi), Agathe Balayn and Seda Gürses found, “Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.” In other words, no matter what you think is “neutral”, training data, model, and algorithms are only as “neutral” as their wider context allows them to be.

Meanwhile, nothing to curb the escalating waste. At 404 Media, Emanuel Maiberg finds that Spotify is publishing AI-generated songs from dead artists without anyone’s permission. On Monday, MSNBC’s Rachel Maddow told viewers that there’s so much “AI slop” about her that they’ve posted Is That Really Rachel? to catalog and debunk them.

As Ed Zitron writes, the opportunity costs are enormous.

In the UK, the US, and many other places, data centers are threatening the water supply.

But sure, let’s make more of that.

Illustrations: Magic 8 ball toy (via frankieleon at Wikimedia).


Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to compete to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “The Brussels Effect”, derived from The California Effect, which sees compliance with regulation voluntarily shift towards the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, that means that companies will comply with the most stringent rules to ensure they are universally compliant, and that they want and need to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well-documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

It was much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016 Bayer agreed to buy Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power based on existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Vassal State

Vassal State: How America Runs Britain
by Angus Hanton
Swift Press
978-1-80075390-7

Tax organizations estimate that a bit under 200,000 expatriate Americans live in the UK. It’s only a tiny percentage of the overall population of 70 million, but of course we’re not evenly distributed. In my bit of southwest London, the butcher (recently and abruptly shuttered due to rising costs) has advertised “Thanksgiving turkeys” for more than 30 years.

In Vassal State, however, Angus Hanton shows that US interests permeate and control the UK in ways far more significant than a handful of expatriates. This is not, he stresses, an equal partnership, despite the perennial photos of the British prime minister being welcomed to the White House by the sitting president, as shown satirically in 1986’s Yes, Prime Minister. Hanton cites the 2020 decision to follow the US and ban Huawei as an example, writing that the US pressure at the time “demonstrated the language of partnership coupled with the actions of control”. Obama staffers, he is told, used to joke about the “special relationship”.

Why invade when you can buy and control? Hanton lists a variety of vectors for US influence. Many of Britain’s best technology startups wind up sold to US companies, permanently alienating their profits – see, for example, DeepMind, sold to Google in 2014, and Worldpay, sold to Vantiv in 2019, which then took its name. US buyers also target long-established companies, such as 176-year-old Boots, which since 2014 has been part of Walgreens and is now being bought up by the Sycamore Partners private equity fund. To Americans, this may not seem like much, but Boots is a national icon and an important part of delivering NHS services such as vaccinations. No one here voted for Sycamore Partners to benefit from that, nor did they vote for Kraft to buy Cadbury in 2010 and abandon the Bournville headquarters of a company founded in 1824.

In addition, US companies are burrowed into British infrastructure. Government ministers communicate with each other over WhatsApp. Government infrastructure is supplied by companies like Oracle and IBM, and, lately, Palantir, which are hard to dig out once embedded. A seventh of the workforce are precariously paid by the US-dominated gig economy. The vast majority of cashless transactions pay a slice to Visa or Mastercard. And American companies use the roads, local services, and other infrastructure while paying less in tax than their UK competition. More controversially for digital rights activists, Hanton complains about the burden that US-based streamers like Netflix, Apple, and Amazon place on the telecommunications networks. Among the things he leaves out: the technology platforms in education.

Hanton’s book comes at a critical moment. Previous administrations have perhaps been more polite about demanding US-friendly policies, but now Britain, on its own outside the EU, is facing Donald Trump’s more blatant demands. Among them: that suppliers to the US government comply with its anti-DEI policies. In countries where diversity, equity, and inclusion are fundamental rights, the US is therefore demanding that its law should take precedence.

In a timeline fork in which Britain remained in the EU, it would be in a much better position to push back. In *this* timeline, Hanton’s proposed remedies – reform the tax structure, change policies, build technological independence – are much harder to implement.

Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently-fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, said remaining member Beth Williams. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review of the EU-US Privacy Framework under Executive Order 14086 (2022) is ongoing. Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating the terms and conditions, they bring Meta profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the energy and commerce committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country” and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.


What we talk about when we talk about computers

The climax of Nathan Englander’s very funny play What We Talk About When We Talk About Anne Frank sees the four main characters play a game – the “Anne Frank game” – that two of them invented as children. The play is on at the Marylebone Theatre until February 15.

The plot: two estranged former best friends in a New York yeshiva have arranged a reunion for themselves and their husbands. Debbie (Caroline Catz) has let her religious attachment lapse in the secular environs of Miami, Florida, where her husband, Phil (Joshua Malina), is an attorney. Their college-age son, Trevor (Gabriel Howell), calls the action.

They host Hasidic Shosh (Dorothea Myer-Bennett) and Yuri (Simon Yadoo), formerly Lauren and Mark, whose lives in Israel and traditional black dress and, in Shosh’s case, hair-covering wig, have left them unprepared for the bare arms and legs of Floridians. Having spent her adult life in a cramped apartment with Yuri and their eight daughters, Shosh is astonished at the size of Debbie’s house.

They talk. They share life stories. They eat. And they fight: what is the right way to be Jewish? Trevor asks: given climate change, does it matter?

So, the Anne Frank game: who among your friends would hide you when the Nazis are coming? The rule that you must tell the truth reveals the characters’ moral and emotional cores.

I couldn’t avoid up-ending this question. There are people I trust and who I *think* would hide me, but it would often be better not to ask them. Some have exceptionally vulnerable families who can’t afford additional risk. Some I’m not sure could stand up to intensive questioning. Most have no functional hiding place. My own home offers nowhere that a searcher for stray humans wouldn’t think to look, and no opportunities to create one. With the best will in the world, I couldn’t make anyone safe, though possibly I could make them temporarily safer.

But practical considerations are not the game. The game is to think about whether you would risk your life for someone else, and why or why not. It’s a thought experiment. Debbie calls it “a game of ultimate truth”.

However, the game is also a cheat, in that the characters have full information about all parts of the story. We know the Nazis coming for the Frank family are unquestionably bent on evil, because we know the Franks’ fates when they were eventually found. It may be hard to tell the truth to your fellow players, but the game is easy to think about because it’s replete with moral clarity.

Things are fuzzier in real life, even for comparatively tiny decisions. In 2012, the late film critic Roger Ebert mulled what he would do if he were a Transportation Security Administration agent suddenly required to give intimate patdowns to airline passengers unwilling to go through the scanner. Ebert considered the conflict between moral and personal distaste and TSA officers’ need to keep their reasonably well-paid jobs with health insurance benefits. He concluded that he hoped he’d quit rather than do the patdowns. Today, such qualms are ancient history; both scanners and patdowns have become normalized.

Moral and practical clarity is exactly what’s missing as the Department of Government Efficiency arrives in US government departments and agencies to demand access to their computer systems. Their motives and plans are unclear, as is their authority for the access they’re demanding. The outcome is unknown.

So, instead of a vulnerable 13-year-old girl and her family, what if the thing under threat is a computer? Not the sentient emotional robot/AI of techie fantasy but an ordinary computer system holding boring old databases. Or putting through boring old payments. Or underpinning the boring old air traffic control system. Do you see a computer or the millions of people whose lives depend on it? How much will you risk to protect it? What are you protecting it from? Hinder, help, quit?

Meanwhile, DOGE is demanding that staff allow its young coders to attach unauthorized servers and take control of websites. In addition: mass firings, and a plan to do some sort of inside-government AI startup.

DOGE itself appears to be thinking ahead; it’s told staff to avoid Slack while awaiting a technology that won’t be subject to FOIA requests.

The more you know about computers, the scarier this all is. Computer systems of the complexity and accuracy of those the US government has built over decades are not easily understood by incoming non-experts who have apparently been visited by the Knowledge Fairy. After so much time and effort spent on security and protecting against shadowy hackers, the biggest attack – as Mike Masnick calls it – on government systems is coming from inside the house, in full view.

Even if “all” DOGE has is read-only access as Treasury claims – though Wired and Talking Points Memo have evidence otherwise – those systems hold comprehensive sensitive information on most of the US population. Being able to read – and copy? – is plenty bad enough. In both fiction (Margaret Atwood’s The Handmaid’s Tale) and fact (IBM), computers have been used to select populations to victimize. Americans are about to find out they trusted their government more than they thought.

Illustration: Changing a tube in the early computer ENIAC (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.

The Gulf of Google

In 1945, the then mayor of New York City, Fiorello La Guardia, signed a bill renaming Sixth Avenue. Eighty years later, even with street signs that include the new name, the vast majority of New Yorkers still say things like, “I’ll meet you at the southwest corner of 51st and Sixth”. You can lead a horse to Avenue of the Americas, but you can’t make him say it.

US president Donald Trump’s order renaming the Gulf of Mexico offers a rarely discussed way to splinter the Internet (at the application layer, anyway; geography matters!), and on Tuesday Google announced it would change the name for US users of its Maps app. As many have noted, this contravenes Google’s 2008 policy on naming bodies of water in Google Earth: “primary local usage”. A day later, reports came that Google had placed the US on its short list of sensitive countries – that is, ones whose rulers dispute the names and ownership of various territories: China, Russia, Israel, Saudi Arabia, Iraq.

Sharpieing a new name on a map is less brutal than invading, but it’s a game anyone can play. Seen on Mastodon: the bay, now labeled “Gulf of Fragile Masculinity”.

***

Ed Zitron has been expecting the generative AI bubble to collapse disastrously. Last week provided an “Is this it?” moment when the Chinese company DeepSeek released reasoning models that outperform the best of the west at a fraction of the cost and computing power. US stock market investors: “Let’s panic!”

The code, though not the training data, is open source, as is the relevant research. In Zitron’s analysis, the biggest loser here is OpenAI, though it didn’t seem like that to investors in other companies, especially Nvidia, whose share price dropped 17% on Tuesday alone. In an entertaining sideshow, OpenAI complains that DeepSeek stole its code – ironic given the history.

On Monday, Jon Stewart quipped that Chinese AI had taken American AI’s job. From there the countdown started until someone invoked national security.

Nvidia’s chips have been the picks and shovels of generative AI, just as they were for cryptocurrency mining. In the latter case, Nvidia’s fortunes waned when cryptocurrency prices crashed, Ethereum, among others, switched to proof of stake, and miners shifted to more efficient, lower-cost application-specific integrated circuits. All of these lowered computational needs. So it’s easy to believe the pattern is repeating with generative AI.

There are several ironies here. The first is that the potential for small language models to outshine large ones has been known since at least 2020, when Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major wrote their stochastic parrots paper. Google soon fired Gebru, who told Bloomberg this week that AI development is being driven by FOMO rather than interesting questions. Second, as an AI researcher friend points out, Hugging Face, which is trying to replicate DeepSeek’s model from scratch, said the same thing two years ago. Imagine if someone had listened.

***

A work commitment forced me to slog through Ross Douthat’s lengthy interview with Marc Andreessen at the New York Times. Tl;dr: Andreessen says Silicon Valley turned right because Democrats broke The Deal under which Silicon Valley supported liberal democracy and the Democrats didn’t regulate them. In his whiny victimhood, Andreessen has no recognition that changes in Silicon Valley’s behavior – and the scale at which it operates – are *why* Democrats’ attitudes changed. If Silicon Valley wants its Deal back, it should stop doing things that are obviously exploitive. Random case in point: Hannah Ziegler reports at the Washington Post that a $1,700 bassinet called a “Snoo” suddenly started demanding $20 per month to keep rocking a baby all night. I mean, for that kind of money I pretty much expect the bassinet to make its own breast milk.

***

Almost exactly eight years ago, Donald Trump celebrated his installation in the US presidency by issuing an executive order that risked up-ending the legal basis for data flows between the EU, which has strict data protection laws, and the US, which doesn’t. This week, he did it again.

In 2017, Executive Order 13768 dominated Computers, Privacy, and Data Protection. The deal in place at the time, Privacy Shield, survived until 2020, when it was struck down in lawyer Max Schrems’s second such case. It was replaced by the Transatlantic Data Privacy Framework, which relies on the five-member Privacy and Civil Liberties Oversight Board to oversee surveillance and, as Politico explains, handle complaints from Europeans about misuse of their data.

This week, Trump rendered the board non-operational by firing its three Democrats, leaving just one Republican member in place.*

At Techdirt, Mike Masnick warns the framework could collapse, costing Facebook, Instagram, WhatsApp, YouTube, exTwitter, and other US-based services (including Truth Social) their European customers. At his NGO, noyb, Schrems himself takes note: “This deal was always built on sand.”

Schrems adds that another Trump Executive Order gives 45 days to review and possibly scrap predecessor Joe Biden’s national security decisions, including some the framework also relies on. Few things ought to scare US – and, in a slew of new complaints, Chinese – businesses more than knowing Schrems is watching.

Illustrations: The Gulf of Mexico (NASA, via Wikimedia).

*Corrected to reflect that the three departing board members are described as Democrats, not Democrat-appointed. In fact, two of them, Ed Felten and Travis LeBlanc, were appointed by Trump in his original term.
