In search of the future Internet

“What kind of Internet do you want [him] to inherit?” a friend asked me. “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But notably many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered); and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven, writing after attending the Open Source Policy Summit, reports that Europe is reassessing its reliance on US IT services, a dependence that leaves open the possibility that a US president could order disconnection. The absence of European billion-dollar technology companies makes it easy to forget the technologies invented here that embraced openness instead: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet build up understand that underneath the social media and smartphones lies the physical layer, the infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP 2024 for ignoring physical infrastructure, is working on a EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. In a blog post this week, here is Anthropic, presumably responding to OpenAI’s plan to add advertising to ChatGPT:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: “We may need the money.” Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week:
At the Plutopia podcast we talk to science fiction writer Ken MacLeod.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

In search of causality

The debates over children’s use of social media, screens, and phones continue, exacerbated in the UK by ongoing Parliamentary scrutiny of the Children’s Wellbeing and Schools bill and continuing disgust over Grok’s sexualized image generation. Robert Booth reports at the Guardian that the Center for Countering Digital Hate estimates that Grok AI generated 3 million sexualized images in under two weeks and that a third of them are still viewable on X. If so, X and Grok are a more general problem than children’s access alone.

We continue to need better evidence establishing causality or its absence. This week, researchers from the Bradford Centre for Health Data Science (led by Dan Lewer) and the University of Cambridge (led by Amy Orben) announced a six-week trial that will attempt to find the actual impact on teens of limiting – not ending – social media access. The BBC reports that the trial will split 4,000 Bradford secondary school pupils into two groups. One will download an app that turns off access to services like TikTok and Snapchat from 9pm to 7am and limits use at other times to a “daily budget”. The restrictions won’t include WhatsApp, which the researchers recognize is central to many family groups. The other group will go on using social media as before.

The researchers will compare the two groups by assessing their levels of anxiety, depression, sleep, bullying, and time spent with friends and family.

In earlier research, Orben developed a framework for data donation, which allows teens to understand their own use of social media. Another forthcoming study, Youth Perspectives on Social Media Harms: A Large-Scale Micro-Narrative Study, collects 901 first-person tales from 18- to 22-year-olds in the UK. From these Orben’s group derive four types of harm: harms from other people’s behavior, personal harmful behavior evoked by social media, harms related to the content they encounter, and harms related to platform features. In the first category they include bullying and scams; in the second, compulsive use and social comparison; in the third, graphic material; and in the fourth, algorithmic manipulation. They also note the study’s limitations. A longer-term or differently-timed study might show different effects – during the study period the 2024 US presidential election took place. The teens’ stories don’t establish causality. Finally, there may be other harms not captured in this study.

The most important element, however, is that they sought the perspective of young people themselves, who are to date rarely heard in these discussions.

As this research begins, at Techdirt Mike Masnick reports on two newly finished papers also covering teens and social media. The first, Social Media Use and Well-Being Across Adolescent Development, published in JAMA Pediatrics, is a three-year study of 100,991 Australian adolescents examining whether well-being was associated with social media use. The researchers, from the University of South Australia, found a U-shaped curve: moderate social media use was associated with the best outcomes, while both the highest users and the non-users showed less well-being. Girls benefited increasingly from moderate social media use from mid-adolescence onwards, while among boys non-use became increasingly problematic, leading to worse outcomes than high use by their late teens.

The second, a study from the University of Manchester published in the Journal of Public Health, followed a group of 25,000 11- to 14-year-olds to find out whether the use of technology such as social media and gaming accurately predicted later mental health issues. The study found no evidence that heavier use of social media or gaming led to increased symptoms of anxiety or depression in the following year.

In his discussion of these two papers, Masnick argues that this research gives weight to his contention that the widespread claim that social media is inherently harmful is wrong.

In the UK and elsewhere, however, politicians are proceeding on the basis that social media *is* inevitably harmful. This week, the government announced a consultation on children’s use of technology. The consultation seems, as Carly Page writes at The Register, geared toward increasing restrictions. Also this week, the House of Lords voted 261 to 150, defeating the government, to add an amendment to the Children’s Wellbeing and Schools bill that would require social media services to add age verification to block under-16s from accessing them within a year. MPs will now have to vote to remove the amendment or it will become law, a backdoor preemption of the House of Commons’ prerogative to legislate.

UK prime minister Keir Starmer has been edging toward a social media ban for under-16s; now he faces added pressure not only from the Lords but also from Conservative Party leader Kemi Badenoch and 61 MPs, who sent an open letter supporting a ban like the one in Australia. Ofcom reports that 22% of children aged eight to 17 have a false user age of over 18 – but also that often it’s with their parents’ help. Would this be different under a national ban?

Starmer reportedly wants to delay deciding until evidence from Australia and, one presumes, from the consultation, is available. A sensible idea we hope is not doomed to failure.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, which raised early alarm about kids online. Based on a fraudulent study, it nonetheless influenced policy-making for some years.

Also this week:
At the Plutopia podcast, we interview Dave Evans on his work to combat misinformation.


The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates how advertising will work – perhaps mid-roll? The obvious answer is to place the ads between the list of nominees and opening the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. In 1998, the peak, 55.2 million, after VCRs, but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may also not be as uniquely important as it was historically. The Academy’s move fits into many other similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Centre for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lived in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. HateAid helped a client file suit against Google in August 2025, and sued X in July for failing to remove criminal antisemitic content. The Center for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).


Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler finds in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes doing so easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

The incident is a reminder that this type of fraud seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee Inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler is correct in his assessment, the incentives are misaligned, and until or unless advertisers rebel, change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4Chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transaction Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that it is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, Byrne is advocating on X that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one. The draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July, in a report, the House of Representatives Judiciary Committee called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “least common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at ACLU.


The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the telling bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer himself has been in India this week, taking the opportunity to study its biometric ID system, Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many people have hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. Trying to get it through, Blair moved on to preventing illegal working and curbing identity theft. For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (referring to the postmaster Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01e04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.


The absurdity card

Fifteen years ago, a new incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 9/11 attacks in 2001: identity cards. That incoming government was led by David Cameron’s Conservatives, in tandem with Nick Clegg’s Liberal Democrats. The policy was Tony Blair’s. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since they were scrapped at the end of World War II, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions as to how a card could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ED journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995 it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. Although the cost of data storage has continued to plummet, it’s also worth paying attention to the chapter on costs, which the report estimated at roughly £11 billion.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that soon real time online biometric checking would be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – for a population of 1.3 million who regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well, but they always come up against the fact that people support the *goals* yet never like the thing itself once they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Horizon/Post Office disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of far cheaper and less invasive solutions for that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying it will offer citizens “countless benefits” in streamlining access to key services – echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads. Immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia.)

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Email to Ofgem

So, the US has claimed victory against the UK.

Regular readers may recall that in February the UK’s Home Office secretly asked Apple to put a backdoor in the Advanced Data Protection encryption it offers as a feature for iCloud users. In March, Apple challenged the order. The US objected to the requirement that the backdoor should apply to all users worldwide. How dare the Home Office demand the ability to spy on Americans?

On Tuesday, US director of national intelligence Tulsi Gabbard announced the UK is dropping its demand for the backdoor in Apple’s encryption “that would have enabled access to the protected encrypted data of American citizens”. The key here is “American citizens”. The announcement – which the Home Office is refusing to comment on – ignores everyone else and also the requirement for secrecy. It’s safe to say that few other countries would succeed in pressuring the UK in this way.

As Bill Goodwin reports at Computer Weekly, the US deal does nothing to change the situation for people in Britain or elsewhere. The Investigatory Powers Act (2016) is unchanged. As Parmy Olson writes at Bloomberg, the Home Office can go on issuing Technical Capability Notices to Apple and other companies demanding information on their users, knowing that the criminalization of disclosure will keep the companies silent. The Home Office can still order technology companies operating in the UK to weaken their security. And we will not know they’ve done it. Surprisingly, support for this point of view comes from the Federal Trade Commission, which has posted a letter to companies deploring foreign anti-encryption policy (ignoring how often undermining encryption has been US policy, too) and foreign censorship of Americans’ speech. This is far from over, even in the US.

Within the UK, the situation remains as dangerously uncertain as ever. With all countries interconnected, the UK’s policy risks the security of everyone everywhere. And, although US media may have forgotten, the US has long spied on its citizens by getting another country to do it.

Apple has remained silent, but so far has not withdrawn its legal challenge. Also continuing is the case filed by Privacy International, Liberty, and two individuals. In a recent update, PI says both legal cases will be heard over seven days in 2026 as much as possible in the open.

***

For non-UK folk: The Office of Gas and Electricity Markets (Ofgem) is the regulator for Britain’s energy market. Its job is to protect consumers.

To Ofgem:

Today’s Guardian (and many others) carries the news that Tesla EMEA has filed an application to supply British homes and businesses with energy.

Please do not approve this application.

I am a journalist who has covered the Internet and computer industries for 35 years. As we all know, Tesla is owned by Elon Musk. Quite apart from his controversial politics and actions within the US government, Elon Musk has shown himself to be an unstable personality who runs his companies recklessly. Many who have Tesla cars love them – but the cars have higher rates of quality control problems than those from other manufacturers, and Musk’s insistence on marketing the “Full Self-Driving” feature has cost lives, according to the US National Highway Traffic Safety Administration, which launched yet another investigation into the company just yesterday. In many cases, when individuals have sought data from Tesla to understand why their relatives died in car fires or crashes, the company has refused to help them. During the covid emergency, thousands of Tesla workers got covid because Musk insisted on reopening the Tesla factory. This is not a company people should trust with their homes.

With Starlink, Musk has exercised his considerable global power by turning off communications in Ukraine while it was fighting off Russian attacks. SpaceX launches continue to crash. According to the children’s commissioner’s latest report, far more children encounter pornography online on Musk’s X than on dedicated pornography sites, a problem that has gotten far worse since Musk took it over.

More generally, he is an enemy of workers’ rights. Misinformation on X helped fuel the Southport riots, and Musk himself has considered trying to oust Keir Starmer as prime minister.

Many are understandably awed by his technological ideas. But he uses these to garner government subsidies and undermine public infrastructure, which he then is able to wield as a weapon to suit his latest whims.

Musk is already far too powerful in the world. His actions in the White House have shown he is either unable to understand or entirely uninterested in the concerns and challenges that face people living on sums that to him seem negligible. He is even less interested in – and often actively opposes – social justice, fairness, and equity. No amount of separation between him and Tesla EMEA will be sufficient to counter his control of and influence over his company. Tesla’s board, just weeks ago, voted to award him $30 billion in shares to “energise and focus” him.

Please do not grant him a foothold in Britain’s public infrastructure. Whatever his company is planning, it does not have British interests at heart.

Ofgem is accepting public comments on Tesla’s application until close of business on Friday, August 22, 2025.

Illustration: Artist Dominic Wilcox’s Stained Glass Driverless Sleeper Car.


Big bang

In 2008, when the recording industry was successfully lobbying for an extension of the term of copyright to 95 years, I wrote about a spectacular unfairness affecting numerous folk and other musicians. Because of my own history – and sometimes present – with folk music, I am most familiar with this area of music, which, aside from a few years in the 1960s, has generally operated outside the world of commercial music.

The unfairness was this: the remnants of a label that had recorded numerous long-serving and excellent musicians in the 1970s were squatting on those recordings and refusing to either rerelease them or return the rights. The result was both artistic frustration and deprivation of a sorely-needed source of revenue.

One of these musicians is the Scottish legend Dick Gaughan, who had a stroke in 2016 and was forced to give up performing. Gaughan, with help from friends, is taking action: a GoFundMe is raising the money to pay “serious lawyers” to get his rights back. Whether one loved his early music or not – and I regularly cite Gaughan as an important influence on what I play – barring him from benefiting from his own past work is just plain morally wrong. I hope he wins through; and I hope the case sets a precedent that frees other musicians’ trapped work. Copyright is supposed to help support creators, not imprison their work in a vault to no one’s benefit.

***

This has been the first week of requiring age verification for access to online content in the UK; the law came into effect on July 25. Reddit and Bluesky, as noted here two weeks ago, were first, but with Ofcom starting enforcement, many are following. Some examples: Spotify; X (exTwitter); Pornhub.

Two classes of problems are rapidly emerging: technical and political. On the technical side, so far it seems like every platform is choosing a different age verification provider. These AVPs are generally unfamiliar companies in a new market, and we are being asked to trust them with passports, driver’s licenses, credit cards, and selfies for age estimation. Anyone who uses multiple services will find themselves having to widely scatter this sensitive information. The security and privacy risks of this should be obvious. Still, Dan Milmo reports at the Guardian that AVPs are already processing five million age checks a day. It’s not clear yet if that’s a temporary burst of one-time token creation or a permanently growing artefact of repetitious added friction, like cookie banners.

X says it will examine users’ email addresses and contact books to help estimate ages. Some systems reportedly send referring page links, opening the way for the receiving AVP to store these and build profiles. Choosing a trustworthy AVP can be tricky, and these intermediaries are in a position to log what you do and exploit the results.

The BBC’s fact-checking service finds that a wide range of public interest content, including news about Ukraine and Gaza and Parliamentary debates, is being blocked on Reddit and X. Sex workers see adults being locked out of legal content.

Meanwhile, many are signing up for VPNs at pace, as predicted. The spike has led to rumors that the government is considering banning them. This seems unrealistic: many businesses rely on VPNs to secure connections for remote workers. But the idea is alarming; its logical extension is the war on general-purpose computation Cory Doctorow foresaw as a consequence of digital rights management in 2011. A terrible and destructive policy can serve multiple masters’ interests and is more likely to happen if it does.

On the political side, there are three camps. One wants the legislation repealed. Another wants to retain the aspects many people agree on, such as criminalizing cyberflashing and some other types of online abuse, and fix the Act’s flaws. The third thinks the OSA doesn’t go far enough, and they’re already saying they want it expanded to include all services, generative AI, and private messaging.

More than 466,000 people have signed a petition calling on the government to repeal the OSA. The government responded: thanks, but no. It will “work with Ofcom” to ensure enforcement will be “robust but proportionate”.

Concrete proposals for fixing the OSA’s worst flaws are rare, but a report from the Open Rights Group offers some; it advises an interoperable system that gives users choice and control over methods and providers. Age verification proponents often compare age-gating websites to ID checks in bars and shops, but those don’t require you to visit a separate shop the proprietor has chosen and hand over personal information. At Ctrl-Shift, Kirra Pendergast explains some of the risks.

Surrounding all that is noise. A US lawyer wants to sue Ofcom in a US federal court (huh?). Reform leader Nigel Farage has called for the Act’s repeal, which led technology secretary Peter Kyle to accuse him – and then anyone else who criticizes the act – of being on the side of sexual predators. Kyle told Mumsnet he apologizes to the generation of UK kids who were “let down” by being exposed to toxic online content because politicians failed to protect them all this time. “Never again…”

In other news, this government has lowered the voting age to 16.

Illustrations: The back cover of Dick Gaughan’s out-of-print 1972 first album, No More Forever.


Magic math balls

So many ironies, so little time. According to the Financial Times (and syndicated at Ars Technica), the US government, which itself has traditionally demanded law enforcement access to encrypted messages and data, is pushing the UK to drop its demand that Apple weaken its encryption. Normally, you want to say, Look here, countries are entitled to have their own laws whether the US likes it or not. But this is not a law we like!

This all began in February, when the Washington Post reported that the UK’s Home Office had issued Apple with a Technical Capability Notice. Issued under the Investigatory Powers Act (2016) and supposed to be kept secret, the TCN demanded that Apple undermine the end-to-end encryption used for iCloud’s Advanced Data Protection feature. Much protest ensued, followed by two legal cases in front of the Investigatory Powers Tribunal, one brought by Apple, the other by Privacy International and Liberty. WhatsApp has joined Apple’s legal challenge.

Meanwhile, Apple withdrew ADP in the UK. Some people argued this didn’t really matter, as few used it, which I’d call a failure of user experience design rather than an indication that people didn’t care about it. More of us saw it as setting a dangerous precedent for both encryption and the use of secret notices undermining cybersecurity.

The secrecy of TCNs is clearly wrong and presents a moral hazard for governments that may prefer to keep vulnerabilities secret so they can take advantage for surveillance purposes. Hopefully, the Tribunal will eventually agree and force a change in the law. The Foundation for Information Policy Research (obDisclosure: I’m a FIPR board member) has published a statement explaining the issues.

According to the Financial Times, the US government is applying a sufficiently potent threat of tariffs to lead the UK government to mull how to back down. Even without that particular threat, it’s not clear how much the UK can resist. As Angus Hanton documented last year in the book Vassal State, the US has many well-established ways of exerting its influence here. And the vectors are growing; Keir Starmer’s Labour government seems intent on embedding US technology and companies into the heart of government infrastructure despite the obvious and increasing risks of doing so. When I read Hanton’s book earlier this year, I thought remaining in the EU might have provided some protection, but Caroline Donnelly warns at Computer Weekly that EU countries, too, are becoming dangerously dependent on US technology, specifically Microsoft.

It’s tempting to blame everything on the present administration, but the reality is that the US has long used trade policy and treaties to push other countries into adopting laws regardless of their citizens’ preferences.

***

As if things couldn’t get any more surreal, this week the Trump administration *also* issued an executive order banning “woke AI” in the federal government. AI models are in future supposed to be “politically neutral”. So, as Kevin Roose writes at the New York Times, the culture wars are coming for AI.

The US president is accusing chatbots of “Marxist lunacy”, where the rest of the world calls them inaccurate, biased toward repeating and expanding historical prejudices, and inconsistent. We hear plenty about chatbots adopting Nazi tropes; I haven’t heard of one promoting workers’ and migrants’ rights.

If we know one thing about AI models, it’s that they’re full of crap all the way down. The big problem is that people are deploying them anyway. At the Canary, Steve Topple reports that the UK’s Department for Work and Pensions admits in a newly-published report that its algorithm for assessing whether benefit claimants might commit fraud is ageist and racist. A helpful executive order would set must-meet standards for *accuracy*. But we do not live in those times.

The Guardian reports that two more Trump EOs expedite the building of new data centers, promote exports of American AI models, expand the use of AI in the federal government, and intend to solidify US dominance in the field. Oh, and Trump would really like it if people would stop calling it “artificial” and find a new name. Seven years ago, “aspirational intelligence” seemed like a good idea. But that was back when we heard a lot about incorporating ethics. So… “magic math ball”?

These days, development seems to proceed ethics-free. DWP’s report, for example, advocates retraining its flawed algorithm but says continuing to operate it is “reasonable and proportionate”. In 2021, writing for European Digital Rights (EDRi), Agathe Balayn and Seda Gürses found, “Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.” In other words, no matter what you think is “neutral”, training data, model, and algorithms are only as “neutral” as their wider context allows them to be.

Meanwhile, nothing to curb the escalating waste. At 404 Media, Emanuel Maiberg finds that Spotify is publishing AI-generated songs attributed to dead artists without anyone’s permission. On Monday, MSNBC’s Rachel Maddow told viewers that there is so much “AI slop” about her that her team has posted Is That Really Rachel? to catalog and debunk the fakes.

As Ed Zitron writes, the opportunity costs are enormous.

In the UK, the US, and many other places, data centers are threatening the water supply.

But sure, let’s make more of that.

Illustrations: Magic 8 ball toy (via frankieleon at Wikimedia).


Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation for which the EU is famous.

Cue discussion of “The Brussels Effect”, derived from The California Effect, in which compliance with regulation voluntarily shifts toward the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, it depends on companies complying with the most stringent rules to ensure they are universally compliant, and on their wanting and needing to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well-documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan, despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

Much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016 Bayer bought Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power on top of existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.