Waste management

This week, Amazon announced it would cripple a bunch of older Kindle models. Users will be able to go on reading the books that are already stored on their devices, but they won’t be able to add anything new, whether it’s borrowed, bought, or downloaded. The switch will be toggled on May 20, and applies to Kindle ereaders and Kindle Fires released in or before 2012. These devices had already been locked out of the store in 2022.

There is no benefit to consumers from doing this, even though Amazon is offering a substantial discount on newer devices for those who want to switch. There are presumably benefits to Amazon – namely, that it can update its store and other software without having to cater to older devices, plus sell an extra bunch of new ones. But overall, it’s a globally hostile move that creates a new pile of electronic waste composed of functioning hardware. One of these days, that’s going to be your car when some auto manufacturer decides that a “legacy” model isn’t worth supporting any more.

One option is to stop buying devices whose manufacturers demand that you surrender ownership in favor of their ongoing control. The other main possibility is to continue spreading right to repair laws, so that a company like Amazon (or John Deere) wishing to shed the responsibility of supporting older devices would be required to open them up to their users and the third-party ecosystem that would doubtless form around them. As the population of Internet of Things devices continues to grow around the world, there will be much more of this – when we need much less. It’s absolutely maddening. Try a Kobo and Project Gutenberg.

***

It came out this week that language added in October 2025 to Microsoft’s terms of use says: “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”

This is like the disclaimers that psychics and mediums in the UK issue to prospective customers to prevent exploitation of the credulous. A software company, though…if it’s going to admit up front that the service it’s providing can’t be relied on, why force it on people in the first place?

Obviously, the point here is to cover the company’s ass in case of lawsuits. It fits right in with companies’ general refusal to accept liability for problems created by the software they make. I’m glad the company is warning people, but the better solution – if it can’t bring itself to discontinue Copilot altogether – would be either to remove it and let people download it if they want it, or to leave it turned off by default, warning people up front instead of in terms of use that hardly anyone reads. Instead, what we have to do is search for instructions to remove it and hope Microsoft hasn’t made following them impossible.

***

In the midst of a lot of science fictional hype-scares about “AI” along come some more real concerns. We’ll start “small”: The New York Times has done a study finding that Google’s AI Overviews are right 90% of the time. That sounds not-so-bad, until you remember the Law of Truly Large Numbers. Apply the remaining 10% to trillions of queries, and the overall result is, as Ryan Whitwam puts it in summarizing the story for Ars Technica, “hundreds of thousands of lies going out every minute of the day”. That’s automation for you. It used to be that you needed an army of bots to disseminate misinformation at any sort of scale, and even then it was less effective because it wasn’t backed by an apparently authoritative name (see also Microsoft, above).
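Whitwam’s arithmetic is easy to sanity-check. A rough back-of-envelope, in which the annual query volume is an assumed figure chosen for illustration rather than anything Google has reported:

```python
# Sanity check of the "hundreds of thousands of lies per minute" claim.
# ACCURACY comes from the NYT study cited above; QUERIES_PER_YEAR is an
# assumption for illustration, not a reported figure.

ACCURACY = 0.90
QUERIES_PER_YEAR = 2_000_000_000_000   # assume ~2 trillion AI Overview answers/year
MINUTES_PER_YEAR = 365 * 24 * 60       # 525,600

errors_per_year = QUERIES_PER_YEAR * (1 - ACCURACY)
errors_per_minute = errors_per_year / MINUTES_PER_YEAR

print(f"{errors_per_minute:,.0f} erroneous answers per minute")
```

Even at this assumed volume, a 10% error rate works out to several hundred thousand bad answers a minute – comfortably in the “hundreds of thousands” range.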

More alarming are the reports surfacing about generative AI’s effectiveness at aiding and amplifying online crime. In November, Google announced that internal researchers had found hackers experimenting with a script that interacts with Gemini’s API to create just-in-time modifications. Polymorphic viruses are nothing new – Hacker Noon covered them just last month – but this appears to be a step up in sophistication and speed.

At the same time, Casey Newton reports at Platformer, Anthropic’s latest model is likely the first of many that can find and exploit vulnerabilities in software in “ways that far outpace the efforts of defenders”. The announcement, which appears to have been inadvertent, was quickly followed by another, formally launching the new model, Claude Mythos, and Project Glasswing. The latter, per Newton, gives more than 40 of the biggest technology companies early access so they can use the model to find and patch vulnerabilities in both their own systems and open source systems that underpin digital infrastructure.

Unlike most AI-related scare stories, this one is backed by people who are usually sensible, such as Alex Stamos, formerly chief security officer at Facebook and Yahoo, and other security practitioners. These warnings are not coming from AI company CEOs with a concept to sell.

So: on the one hand, (hopefully) better software; on the other, (potentially) newer, more dangerous attacks. Ugh. To restore calm, I recommend Terry Godier’s essay The Last Quiet Thing.

Illustrations: Bill Steele, who in 1970 wrote the environmental anthem Garbage!

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The Silicon Valley chronicles

We should have seen this coming. At Platformer, Casey Newton reports that Meta has discussed pulling funding for the Oversight Board after 2028, having already reduced it “significantly” this year. Who needs fripperies like independent governance after reporting billions in losses at its Reality Labs unit, the bit responsible for the seemingly now-abandoned metaverse, the smartglasses, and, of course, AI? Per Newton, there are ongoing discussions about continuing the Oversight Board somehow, perhaps by opening it up to adjudicate for other platforms.

The reality is the Oversight Board’s moment has probably passed. In 2018, creating it was an effective public relations response to a series of scandalous revelations, beginning in 2017 with Carole Cadwalladr’s work exposing Cambridge Analytica’s use of Facebook to collect personal data to target political advertising. Its biggest moment was probably in January 2021, when the Oversight Board backed Facebook’s decision to ban then-“former guy” Donald Trump for two years following the January 6 insurrection. Twitter banned him, too, and there was a brief crackdown on postings calling for violence. Much public criticism followed from whistleblowers such as Frances Haugen and Sarah Wynn-Williams (2025), and the Netflix documentary The Great Hack.

Today, though, the big technology companies seem not to care. Maybe they will again if usage shrinks enough – the BBC reports an Ofcom study showing that UK adults are actively posting 61% less than last year. But defunding the Oversight Board seems consistent with the general decline of content moderation on Facebook and elsewhere. Neither fines, nor spreading age verification laws, nor other constraints can be staved off by funding an Oversight Board that is already rarely mentioned.

Besides, the years since 2018 have seen the “billionaire class” take a hard turn to the libertarian right; they show little inclination to be constrained personally or corporately by national laws or governments.

In a January 2025 interview, Netscape creator and venture capitalist Marc Andreessen provided this explanation: US Democrats “broke the deal”. That is, Silicon Valley supported Democrats as long as they left technology companies free of regulation. (Democrats might reply that they were responding on behalf of the public to changes in Silicon Valley companies’ behavior.)

In addition, Connie Loizos reports at TechCrunch that the billionaires who signed Warren Buffett’s and Bill Gates’s Giving Pledge would now like it forgotten. At Current Affairs, what Nathan J. Robinson fears most is Anduril CEO Palmer Luckey’s enthusiasm for incorporating AI and robotics into more and bigger weapons. Anduril was founded in 2017, the year before Google employees petitioned the company to exit its contract with Project Maven, the Pentagon’s effort to harness machine learning and automatic targeting. By 2021, Tom Simonite was reporting at Wired that Google was bidding on military contracts. A few weeks back, at the Financial Times, Jemima Kelly called Silicon Valley billionaires “enablers, keeping us distracted and dumb”, citing a recent podcast interview in which Andreessen said he never engages in introspection.

Available to link all this together is Jacob Silverman’s new book, Gilded Rage: Elon Musk and the Radicalization of Silicon Valley. Musk is not the sole focus of Silverman’s “guided tour through America’s self-designated innovator class” and its resentment of government and its power to change it. Much of the book, which Silverman began researching in 2023, focuses on other high-dollar funders such as DOGE co-mastermind Vivek Ramaswamy (whom Silverman introduces as the boss who fired him from an early job), as well as members of the “PayPal Mafia” including Peter Thiel and David Sacks. Silverman also includes chapters on Musk’s acquisition of Twitter, Saudi Arabia’s growing connections to Silicon Valley, the cryptocurrency boom, the fight over TikTok’s US presence, a so-far failed plan to take over California’s Solano County, Sam Bankman-Fried’s rise and fall, and the choice of JD Vance as Trump’s running mate. The book ends with the donors’ success – that is, Trump’s election in 2024.

With Gilded Rage, Silverman revives a formerly niche publishing subgenre, which documented the beginnings of this shift. First on the scene in the US, to the best of my knowledge, was northern California native Paulina Borsook, with a 1996 essay for Mother Jones, Cyberselfish. In it, and in the subsequent 2000 book, she laid out Silicon Valley’s refusal to recognize the government assistance and military funding that enabled its wealth and growth. To Borsook, who described herself in the 1998 book Wired Women as the “token hippie feminist writing for Wired”, Silicon Valley’s turn to the right and distaste for government were already visible even then. In November, David Streitfeld profiled Borsook at the New York Times and noted the price she paid for her contrarian view.

In a 1995 essay The Californian Ideology, Richard Barbrook and Andy Cameron pushed Europe to take a different path.

Silverman’s most recent predecessor is Douglas Rushkoff’s 2022 skewering of “The Mindset” in Survival of the Richest. In Rushkoff’s telling, these high-wealth individuals are planning their safety and/or escape during and after “the incident” – that is, whatever catastrophe is going to wipe us all out.

So Silverman is less documenting a shift than he is describing an outcome: a political wave whose emergence into the mainstream only seems sudden.

Illustrations: Political cartoon from 1904, showing Standard Oil’s stranglehold on US industry (via Wikimedia).

Also this week:
At Plutopia, we interview Paulina Borsook.


Power games

UK prime minister Keir Starmer’s desire to bring in a UK digital ID is awake again with the announcement that the government plans to make the IDs available for “a handful of uses” before the next general election, due by 2029. The requisite consultation closes May 5.

At Computer Weekly, Lis Evenstad adds a summary and detail about the consultation. Government by app! What could possibly go wrong?

Among other new information in the consultation: the age for being able to get a digital ID could drop below 18, perhaps even to issuance at birth. There might be a single unique identifier to enable linking data throughout government systems. Darren Jones, the prime minister’s chief secretary, talks up smoother access to government services and the ability to share only the piece of data that’s needed for a specific purpose, but not about the risks of tying everything to a single identifier whose compromise can reverberate throughout your life.

In his Guardian piece, Stacey quotes Jones, who positions the digital ID as a way to improve fairness in access to government services, which he says accrue disproportionately to “pushy” people with time, patience, and energy.

This sounds good until you read Government Digital Service co-founder Tom Loosemore‘s blog, where he notes that creating that sort of stonewalling bureaucracy is often a deliberate strategy to manage demand. Loosemore argues that agentic AI will force an end to this strategy because agents will have unlimited patience and “reduce the cost to citizens of appeals, challenges, and calculations etc to near-zero”. Instead of having to manually dig through financial records, Loosemore finds in an experiment that an AI agent can scan the documents, find the information, and present only the data required to establish the citizen’s entitlement to benefits.

He believes AI agents will also bring a new level of transparency (and perhaps “pushiness”): “AI Agents will always dig out that 93 page PDF of guidance hidden.” Governments, he writes, will be forced to “clarify and tighten policies & processes, with all the painful political trade-offs therein”.

Or: will budget-protecting governments adopt their own agentic AIs to move and re-bury the stuff they don’t want applicants to find and recalibrate their requirements to make them resistant to automation? Seems just as likely, really.

While all that was going on, significant votes took place on the future of access to online content. On Tuesday, Jennifer McKiernan reports at the BBC, MPs rejected the proposed social media ban for under-16s, which would have been added to the Children’s Wellbeing and Schools bill. Instead, the government is continuing to collect information from the consultation it launched on March 2, which closes on May 26.

Some MPs seem to have been persuaded by the argument – mooted by, among others, the National Society for the Prevention of Cruelty to Children – that banning children from social media will simply push them to find darker, less visible unregulated online spaces with less in the way of protection or moderation. The Commons did, however, support a government proposal to give the relevant ministers greater powers to restrict or ban children’s access to social media services and chatbots, limit their use of VPNs, and change the “age of digital consent”. The Children’s Wellbeing and Schools bill now goes back to the Lords for more discussion.

As an unelected body, in recent years the House of Lords has often been a damper on legislation hastily passed in response to political trends, but here they’re leading. The social media ban passed there. This week, as Dev Kundaliya reports at Computing, the Lords voted on two amendments, one to the Crime and Policing bill, the other to the Children’s Wellbeing and Schools bill. At the Online Safety Act Network blog, University of Essex professor Lorna Woods explains these in detail. The first would enable the government to amend the Online Safety Act to “minimize or mitigate the risks of harm to individuals” from illegal AI-generated content. The second would give the government latitude to change the age of consent.

Woods’ main point is the extreme power being given to the government here to bypass Parliament; she calls the clauses Henry VIII powers – that is, giving the government the power to change or repeal an Act of Parliament without consulting Parliament. The government’s official justification is that this gives it greater flexibility to adapt on the fly to new technologies and online harms. Or to bar access to stuff it doesn’t like, presumably.

The Open Rights Group agrees, calling the plan powers to restrict the entire Internet. ORG also cites an open letter from 400 scientists published in March, and calls for a moratorium on age assurance to learn more about the technological hazards and social impact, and for adopting alternative measures in the meantime, such as regulating algorithmic manipulation and providing parents with support.

At DefendDigitalMe, Jen Persson points out that the statute books already contain laws enabling considerable digital control of children; surveillance, she writes, is being “dressed as ‘safety’”. None of this, she writes, is compatible with children’s *rights*, which don’t seem to get much of a look-in.

Illustrations: Henry VIII, as painted by Hans Holbein the Younger (via Wikimedia).

Also this week: TechGrumps 3.38, The Bettification of Everything.


In search of the future Internet

“What kind of Internet do you want [him] to inherit?” a friend asked. “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But notably many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered), and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven writes, after attending the Open Source Policy Summit, that Europe is reassessing its technological reliance on US IT services – a reliance that leaves open the possibility that a US president could order disconnection. The lack of European billion-dollar technology companies leads people to forget the technology invented here that instead embraced openness: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet build up understand that underneath the social media and smartphones lies the physical layer, the infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and of chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP 2024 for ignoring physical infrastructure, is working on a EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. In a blog post this week, Anthropic writes, presumably responding to OpenAI’s plan to add advertising to ChatGPT:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: we may need the money. Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week:
At the Plutopia podcast we talk to science fiction writer Ken MacLeod.


The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates how advertising will work – perhaps mid-roll? The obvious answer is to place the ads between the list of nominees and opening the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. In 1998, the peak, 55.2 million, after VCRs, but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may also not be as uniquely important as it was historically. The Academy’s move fits into many other similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Centre for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lived in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. HateAid helped a client file suit against Google in August 2025, and sued X in July for failing to remove criminal antisemitic content. The Center for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).


Simplification

We were warned this was coming at this year’s Computers, Privacy, and Data Protection conference, and now it’s really here. The data protection NGO Noyb reports that a leaked internal draft (PDF) of the European Commission’s Digital Omnibus threatens to undermine the architecture the EU has been building around data protection, AI, cybersecurity, and privacy generally. At The Register, Connor Jones summarizes the changes; Noyb has detail.

The EU’s workings are, as always, somewhat inscrutable to outsiders. Noyb explains that the omnibus tool is intended to allow multiple laws to be updated simultaneously to “improve the quality of the law and streamline paperwork obligations”. In this case, Noyb argues that the European Commission is abusing this option to fast-track far more substantial and contentious changes that should be subject to impact assessments and feedback from other EU institutions, as well as legal services.

If the move succeeds – the final draft will be presented on November 19 – Noyb believes it could remove fundamental rights to privacy and data protection that Europeans have been building for more than 30 years. Noyb, European Digital Rights, and the Irish Council for Civil Liberties have sent an open letter of objection to the Commission. The basic argument: this isn’t “simplification” but deregulation. The package would still have to be accepted by the European Parliament and a majority of EU member states.

As far as I can recall, business has never much liked data protection. In the early 1990s, when the first laws were being written, I remember being told data protection was a “tax on small business”. Privacy advocates instead see data protection as a way of redressing the power imbalance between large organizations and individuals.

By 1998, when data protection law was implemented in all EU member states, US companies were publicly insisting that the US didn’t need a privacy law in order to be in compliance. Companies could use corporate policies and sectoral laws to provide a “layered approach” that would be just as protective. When I wrote about this for Scientific American in 1999, privacy advocates in the UK predicted a trade war over this, calling it a failure to understand that you can’t cut a deal with a fundamental right – like the First Amendment.

In early 2013, it looked entirely possible that the period of negotiations over data protection reform would end with rollback. GDPR was the focus of intense lobbying efforts. There were, literally, 4,000 proposed amendments, so many that I recall being shown software written to manage and understand them all.

And then…Snowden. His revelations of government spying shifted the mood noticeably, and, under his shadow, when GDPR was finally adopted in 2016 and came into force in 2018, it expanded citizens’ rights and increased penalties for non-compliance. Since then, other countries around the world have used GDPR as a model, including China and several US states.

Those few states aside, at the US federal level data protection law has never been popular, and the pile of law growing around it – the Digital Services Act, the Digital Markets Act, and the AI Act – is particularly unwelcome to the current administration, which sees it as a deliberate attack on US technology companies.

In the UK, the Data (Use and Access) Act, which passed in June, also weakened some data protection provisions. It will be implemented in stages over the year to June 2026.

At its blog, the Open Rights Group argues that some aspects of the DUAA rest on the claim that innovation, economic growth, and public security are harmed by data protection law, a dubious premise.

Until this leak, it seemed possible that the DUAA would break Britain’s adequacy decision and remove the UK from the list of countries to which the EU allows data transfers. The rule is that to qualify a country must have legal protections equivalent to those of the EU. It would be the wrong way round if instead of the UK enhancing its law to match the EU, the EU weakened its law to match the UK.

There’s a whole secondary issue here, which is that a law is only useful if it’s enforced. Noyb actively brings legal cases to force enforcement in the EU. In the UK, privacy advocates, like ORG, have long complained that the Information Commissioner’s Office is increasingly quiescent.

Many of the EU’s changes appear to be aimed at making it easier for AI companies to exploit personal data to develop models. It’s hard to know where that will end, given that every company is sprinkling “AI” over itself in order to sound exciting and new (until the next thing comes along). If this package comes into force, you have to think data protection law will increasingly apply only to small businesses running older technology that can’t be massaged to qualify for exemption.

I blame this willingness to undermine fundamental rights at least partly on the fantasy of the “AI race”. This is nation-state-level FOMO. What race? What’s the end point? What does it mean to “win”? Why the AI race, and not the net-zero race, the renewables race, or the sustainability race? All of those would produce tangible benefits and solve known problems of long standing and existential impact.

Illustrations: A drunk parrot in a Putney garden (photo by Simon Bisson; used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The gated web

What is an AI browser?

Or, in a more accurate representation of my mental reaction, *WTF* is an AI browser?

In wondering about this, I’m clearly behind the times. Tech sites are already doing roundups of their chosen “best” ones. At Mashable, Cecily Mauran compares “top” AI browsers because “The AI browser wars hath begun.”

Is the war that no one wants these things but they’re being forced on us anyway? Because otherwise…it’s just a bunch of heavily financed companies trying to own a market they think will be worth billions.

In Tim Berners-Lee’s original version, the web was meant to simplify sharing information. A key element was giving users control over presentation. Then came designers, who hated that idea. That battle between users’ preferences and browser makers’ interests continues to this day. What most people mean by the browser wars, though, was the late-1990s fight between Microsoft and Netscape, or the later burst of competition around smartphones. A big concern has long been market domination: a monopoly could seek to slowly close down the web by creating proprietary additions to the open standards and lock all others out.

Mauran, citing Casey Newton’s Platformer newsletter, suggests that Google specifically has exploited its browser to increase search use (and therefore ad revenues), partly by merging the address and search bars. I know I’m not typical, but for me search remains a separate activity. Most of the time I’m following a link or scanning familiar sites. Yes, when my browser history fills in a URL, I guess you could say I’m searching the browser history, but to me the better analogy is scanning an array of daily newspapers. Many people *also* use their browser to access cloud-based productivity software and email or play online games, none of which is search.

Nor are chatbots, since they don’t actually *find* information; they apply mathematics and statistics to a load of ingested text and create sentences by predicting the most likely next word. This is why Emily Bender and Alex Hanna call them “synthetic text extruding machines” in their book, The AI Con. I am in the business of trying to make sense of the impact of fast-moving technology, or at least of documenting the conflicts it creates. The only chatbot I’ve found of any value for this – or for personal needs such as a tech issue – is Perplexity, and that’s because it cites (or can be ordered to cite) sources one can check. There is every difference in the world between just wanting an answer and wanting the background from which to derive an answer that may possibly be new.

In any event, Newton’s take is that a company that’s serious about search must build its own browser. Therefore: AI companies are building them. Hence these roundups. Mauron’s pitch: “Imagine a browser that acts as your research assistant, plans trips, sends emails, and schedules meetings. As AI models become more advanced, they’re capable of autonomously handling more complex tasks on your behalf. For tech companies, the browser is the perfect medium for realizing this vision.”

OK, I can see exactly what it does for tech companies. It gives them control over what information you can access, how you use it, and whom you pay and how much for the services its agent selects (plus it gets a commission).

I can also see what it does for employers. My browser agent can call your browser agent and negotiate a meeting plan. Then they attend the meeting on our behalf and send us both summaries, which they ingest and file, later forwarding them to our bosses’ agents to verify we were at work that day. In between, they can summarize emails, and decide which ones we need to see. (As Charles Arthur quipped at The Overspill, “Could they…send fewer emails?”)

Remember when part of the excitement of the Internet was the direct access it gave to people who were formerly inaccessible? Now, we appear to be building systems to ensure that every human is their own gated community.

What part of this is good for users? If you are fortunate enough not to care about the price of anything, maybe it’s great to replace your personal assistant with an agentic web browser. Most of us have struggled along doing things for ourselves and each other. At Cybernews, Mayank Sharma warns that AI browsers’ intentional preemption of efforts to browse for yourself, filtering out anything they deem “irrelevant”, threatens the open web. Newton quantifies the drop in traffic news publishers are already seeing from generative AI. Will we soon be complaining about information underload?

At Pluralistic last year, Cory Doctorow wrote about the importance of faithful agents: software that is loyal to us rather than its maker. He particularly focused on browsers, which have gone from that initial vision of user control to become software that spies on us and reports home. In Mauron’s piece, Perplexity openly hopes to use chats to build user profiles and eventually show ads.

The good news, such as it is, is that from what I’ve read in writing this, most of these companies hope to charge for these browsers – AI as a subscription service. So avoiding them is also cheaper. Double win.

Illustrations: John Tenniel’s drawing of Davy Jones, sitting on his locker (via Wikimedia; published in Punch, 1892, with the caption, “AHA! SO LONG AS THEY STICK TO THEM OLD CHARTS, NO FEAR O’ MY LOCKER BEIN’ EMPTY!!”).


The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the telling bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer himself has been in India this week, taking the opportunity to study its biometric ID system Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many people hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. (Trying to get his card through, Blair later moved on to preventing illegal working and curbing identity theft.) For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (a reference to the Post Office Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01e04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.


Undue process

To the best of my knowledge, Imgur is the first mainstream company to quit the UK in response to the Online Safety Act (though many US news sites remain unavailable due to 2018’s General Data Protection Regulation). Widely used to host pictures for reuse on web forums and social media, Imgur shut off UK connections on Tuesday. In a statement on Wednesday, the company said UK users can still exercise their data protection rights. That is, Imgur will reply within the statutory timeframe to requests for copies of our data or for the account to be deleted.

In this case, the push came from the Information Commissioner’s Office. In a statement, the ICO explains that on September 10 it notified Imgur’s owner, MediaLab AI, of its provisional findings from its previously announced investigation into “how the company uses children’s information and its approach to age assurance”. The ICO proposed to fine Imgur. Imgur promptly shut down UK access. The ICO’s statement says departure changes nothing: “We have been clear that exiting the UK does not allow an organisation to avoid responsibility for any prior infringement of data protection law, and our investigation remains ongoing.”

The ICO calls Imgur’s departure “a commercial decision taken by the company”. While that’s true, EU and UK residents have dealt for years with unwanted cookie consent banners because companies subject to data protection laws have engaged in malicious compliance intended to spark a rebellion against the law. So: wash.

Many individual users stick to Imgur’s free tier, but it profits from subscriptions and advertising. MediaLab AI bought it in 2021, and uses it as a platform to mount advertising campaigns at scale for companies like Kraft-Heinz and Alienware.

Meanwhile, UK users’ Imgur accounts are effectively hostages. We don’t want lawless companies. We also don’t want bad laws – or laws that are badly drafted and worse implemented. Children’s data should be protected – but so should everyone’s. There remains something fundamentally wrong with having a service many depend upon yanked with no notice.

Companies’ threats to leave the market rather than comply with the law are often laughable – see for example Apple’s threat to leave the EU if the bloc doesn’t repeal the Digital Markets Act. This is the rare occasion when a company has actually done it (although presumably they can turn access back on at any time). If there’s a lesson here, it may be that without EU membership Britain is now too small for foreign companies to bother complying with its laws.

***

Boundary disputes and due process are also the subject of a lawsuit launched in the US against Ofcom. At the end of August, 4chan and Kiwi Farms filed a complaint in a Washington, DC federal court against Ofcom, claiming the regulator is attempting to censor them and using the OSA to “target the free speech rights of Americans”.

We hear less about 4chan these days, but in his book The Other Pandemic, journalist James Ball traces much of the spread of QAnon and other conspiracy theories to the site. In his account, these memes start there, percolate through other social media, and become mainstream and monetized on YouTube. Kiwi Farms is equally notorious for targeted online and offline harassment.

The argument mooted by the plaintiffs’ lawyer Preston Byrne is that their conduct is lawful within the jurisdictions where they’re based and that UK and EU countries seeking to enforce their laws should do so through international treaties and courts. There’s some precedent for the first bit, albeit in a different context. In 2010, the New York State legislature and then the US Congress passed the Libel Tourism Protection Act. Under it, US courts are prevented from enforcing British libel judgments if the rulings would not stand in a US court. The UK went on to modify its libel laws in 2013.

Any country has the sovereignty to demand that companies active within its borders comply with its laws, even laws that are widely opposed, and to punish them if they don’t, which is another thing 4chan’s lawyers are complaining about. The question the Internet has raised since the beginning (see also the Apple case and, before it, the 1996 case United States v. Thomas) is where the boundary is and how it can be enforced. 4chan is trying to argue that the penalties Ofcom provisionally intends to apply are part of a campaign of targeted harassment of US technology companies. Odd to see *4chan* adopting the technique long ago advocated by staid, old IBM: when under attack, wrap yourself in the American flag.

***

Finally, in the consigned-to-history category, AOL shut down dialup on September 30. I recall traveling with a file of all the dialup numbers that CompuServe, an even earlier service, maintained around the world. It was, in its time, a godsend. (Then AOL bought up the service, its biggest competitor before the web, and shut it down, seemingly out of spite.) For this reason, my sympathies are with the 124,000 US users the US Census Bureau says still rely on dial-up – only a few thousand of them were paying for AOL, per CNBC – and the uncounted others elsewhere. It’s easy to forget when you’re surrounded by wifi and mobile connections that Internet access remains hard for many people.

Elsewhere this week: Childproofing the Internet, at Skeptical Inquirer.

Illustrations: Imgur’s new UK home page.


The absurdity card

Fifteen years ago, a new incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 9/11 attacks in 2001: identity cards. That incoming government was led by David Cameron’s Conservatives, in tandem with Nick Clegg’s Liberal Democrats. The outgoing government was Tony Blair’s. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since they were scrapped at the end of World War II, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions for how a card could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ED journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995 it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. It’s also worth paying attention to the chapter on costs, which the report estimated at roughly £11 billion, even though the cost of data storage has continued to plummet since.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that soon real time online biometric checking would be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – for a population of 1.3 million who regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well but always come up against the fact that people support the *goals*, but never like the thing when they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Horizon / Post Office disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of a much cheaper and less invasive solution for that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying the card will offer citizens “countless benefits” in streamlining access to key services, echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads. Immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia).
