In search of the future Internet

“What kind of Internet do you want [him] to inherit?” “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But notably, many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered), and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven writes after attending the Open Source Policy Summit that Europe is reassessing its technological reliance on US IT services, a dependence that leaves open the possibility that a US president could order disconnection. Because Europe lacks billion-dollar technology companies, people forget the technology invented here that instead embraced openness: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet build up understand that underneath the social media and smartphones lies the physical layer, infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP that year for ignoring physical infrastructure, is working on a EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. In a blog post this week, Anthropic, presumably responding to OpenAI’s plan to add advertising to ChatGPT, writes:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: we may need the money. Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week:
At the Plutopia podcast we talk to science fiction writer Ken MacLeod.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Split

In an abrupt reversal, UK prime minister Keir Starmer announced this week that the digital IDs he said in September would be mandatory for proving the right to work will now be…not so much. The announcement appears to reinstate the status quo: workers can continue proving their right to work by showing a passport or e-visa.

It’s not clear what led to the change, although some – commenters on social media, former Labour home secretary David Blunkett – suggest opponents “won”. The reality is that digital IDs are probably not really going away. However, making them optional is an important step in the right direction. A lot more is needed to develop a system that works for people instead of for governments.

***

Ten days on, the discovery that xAI’s Grok chatbot was being used to “nudify” images of women and children is still generating headlines, especially since the BBC reported that the Internet Watch Foundation had found “criminal images” of girls between 11 and 13 that appeared to have been Grok-generated. Child sexual abuse material is illegal in the UK, as in many other countries, no matter how it’s created or whether it’s real or synthetic. On Wednesday, Elon Musk asked on X if anyone could break Grok’s image moderation.

Last Friday, the Independent, among others, reported that X had turned off Grok’s image generation for all but the site’s (paying) verified users. On Monday, Starmer warned that X could lose the right to self-regulate if it could not control Grok. On Tuesday, Ofcom said it was launching an investigation, and Starmer told the House of Commons that X was “acting to ensure full compliance with the law”. In fact, it later came out, he was basing this information on media reports but had not himself been in contact with X. His government is now planning legislation to criminalize this type of software. Yesterday, Musk announced X would geoblock the AI tool in countries where it’s illegal. This morning, the Guardian reports that the feature is still not blocked in the Grok app.

As an unexpected side effect, these revelations have reignited divisions in the venerable and venerated elite scientists’ Royal Society, which elected Elon Musk an Overseas Fellow in 2018.

To recap: in August 2024, Nicola Davis reported at the Guardian that 74 Fellows had written to the Society calling for Elon Musk’s expulsion, after Musk’s tweets promoting unrest in the UK and propagating scientific disinformation.

In late 2024, the developmental neuropsychologist Dorothy M. Bishop blogged that she had resigned from the Royal Society to protest Elon Musk’s continued membership as an Overseas Fellow.

Further resignations have followed. Next up, in February, was professor of systems biology Andrew Millar, who deplored the Society’s inaction. Around the same time, more than 1,000 scientists signed an open letter to the Society’s then-president Adrian Smith calling for Musk’s ouster.

In March, Andrew Sella, a chemist, returned the Society’s Michael Faraday Prize for science communication, citing the Society’s inaction. Also that month, on X, neural networks pioneer Geoff Hinton called for Musk’s expulsion. There was another burst of calls for Musk to be expelled in September, when he addressed a far-right rally organized by Tommy Robinson.

At the end of 2025, the Royal Society changed presidents. Back in April, the incoming president, geneticist and Nobel laureate Paul Nurse, taking the position for a rare second time, told The Times that he had written to Musk asking if he could do something to improve the situation of American science, adding that given the damage Musk had caused to the “scientific endeavor in the United States” he should consider resigning from the Society.

In retrospect, more attention should have been paid to Nurse’s position that Musk should not be expelled, which he justified by saying that many Fellows were “odd”. The Guardian published more details about that correspondence in July.

A few days ago, professor of materials science Rachel Oliver published an open letter to Nurse asking him to reconsider his argument that Fellows should only be expelled if their science proved “fraudulent or highly defective”. Oliver argues that this stance grants “a licence to harass to the already powerful people on whom the Society bestows fellowship”.

She was responding to this week’s report in which Nurse doubled down on those overlooked comments, arguing that the code of conduct Fellows cited to justify expulsion might need to be revised because it resembled an employer’s code of conduct, and Fellows are not employees. He also took a shot at a member who isn’t Musk, pointing to a portrait of Isaac Newton and saying, “He was a very nasty piece of work, yet we revere him.” I’m not sure that “we tolerated assholes in the past, so we should continue to do so” is the persuasive argument he thinks it is.

It’s also clear that the Royal Society will continue to face public and private censure, no matter what it does now. This row will resurface every time Musk is in the news. The Royal Society is damned whatever it decides; it can’t keep hoping Musk will do the gentlemanly thing and fall on his sword.

Illustrations: Sir Isaac Newton, as seen in the National Portrait Gallery, London (via Wikimedia).

Also this week:
At the Techgrumps podcast, #3.36, Men are weird: The Return of the Glasshole.


Disconnexion

One thing we left out in last week’s complaint is generative AI’s undoubted ability to magnify the worst of human online behavior. A few days ago, the world discovered that X’s chatbot, Grok, can be commanded to “nudify” images of women and children – that is, digitally remove their clothes without their consent. A number of commenters also note that some of the same British politicians who are calling out X and Grok about this and who more broadly insist on increasing restrictions in the name of online safety nonetheless continue to post there. Even Ashley St. Clair, the mother of one of Elon Musk’s sons, is unable to get these images taken down. Some ministers have called for banning this form of deepfake software.

Among those calling for Elon Musk to act “urgently” are technology secretary Liz Kendall and prime minister Keir Starmer. The BBC reported this morning (January 9) that the government is calling on Ofcom to use “all its powers”. At Variety, Naman Ramachandran reports that X has moved AI image editing behind a paywall.

On January 2, at the National Observer, Jimmy Thompson calls on the Canadian government to delete its accounts. On Wednesday, the Commons women and equalities committee announced it would stop using X. As of January 8, both Kendall and Starmer are still posting on X, along with the UK’s Supreme Court and the Regulatory Policy Committee and doubtless many others. Ofcom, the regulatory agency in charge of enforcing the Online Safety Act, posted a statement on January 5 saying it has contacted X and plans a “swift assessment to determine whether there are potential compliance issues that warrant investigation”. At the Online Safety Act Network, Lorna Woods explains the relevant law.

My guess is that few politicians manage their own social media – an extreme form of mental compartmentalization – and their aides are schooled in the belief that “we must meet the audience where they are”. In that sense, these accounts are not ordinary users, who use social media to connect to their friends and other interesting people. Politicians, like many others who are paid to show off in public, use social media to broadcast, not so much to participate. But much depends on whether you think that Grok’s behavior is one piece of a fundamental structural problem with X and its ownership or whether you believe it’s an isolated ill-thought-out feature to be solved by tweaking software, a distinction Jason Koebler explores at 404 Media.

The politicians’ accounts doubtless predate Musk’s takeover. Twitter was – and X is – small compared to other social media. But the short-burst style perfectly suited journalists, who gave it far more coverage than it probably deserved. Politicians go where they perceive the public to be, which is often signaled by media coverage.

It’s not necessarily wrong for politicians and government agencies to argue that they should be on X to serve their constituents who use it. But to legitimize that claim they should also be cross-posting on every significant platform, especially the open web. We can then argue about the threshold for “significant”. At a guess, it’s bigger than a blog but smaller than Mastodon, where politicians are notoriously absent.

***

The early 2020s’ exciting future of cryptocurrencies has gotten lost in the distraction of the last couple of years’ excitement over our new future of technologies pretending to be “smart”. In 2023’s “crypto winter”, we thought anyone still interested was either an early booster or someone who thought they could smell profit. As Molly White wrote this week, they’ve spent the last two years nursing grudges and building a political machine that could sink large parts of the economy.

More quietly, as Dave Birch predicted in 2017 (and repeated in his 2020 book, The Currency Cold War) “serious people” were considering their approach. Among them, Birch numbered banks, governments, and communities.

Now, governments are hatching proposals. As 2025 ended, the European Council backed the European Central Bank’s digital euro plan; the European Parliament will vote on it this year. The Financial Times reports that this electronic alternative to cash could help European central bankers pull back some control over electronic retail payments from the US organizations that dominate the field. The ECB hopes to start issuing the currency in 2029. In the UK, the Bank of England is mulling the design of the digital pound. The International Monetary Fund sees the digital euro as a contribution to financial stability.

Birch dates government interest to Facebook’s now-defunct 2019 cryptocurrency plan. Today, I imagine new motives: the US’s diminishing reliability as an ally raises the desirability of lessening reliance on its infrastructure generally. Visa, Mastercard, and other payment mechanisms largely transit US systems, a reality the FT says European banks are already working to change. In March, ECB board member Philip R. Lane argued that the digital euro will foster monetary autonomy.

We’ll see. The Economist writes that many countries are recognizing cash’s greater resilience, and are rethinking plans to go all-digital.

It remains hard to know how much central bank digital currencies will matter. As I wrote in 2023, there are few obvious benefits to individuals. For most of us the problem isn’t the mechanism for payments, it’s finding the money.

Illustrations: Bank of England facade.


The AI who was God

Three subjects dominated 2025: increasing AI infestation, expanding surveillance use of biometrics, and age verification and online safety. The last spent the year spreading across the world, including, most recently, to Louisiana. There, on December 22, a US District Court blocked the law on First Amendment grounds in a suit brought by the trade association NetChoice. Less than a week earlier, on similar grounds, NetChoice won a suit in Arkansas against a law that would have penalized platforms for “using designs or algorithms” that they “know or should have known” could harm users by, for example, leading to addiction, drug use, or self-harm. The judge in this case called the law “unconstitutionally vague”. Personally, I suspect it would be hard to prove cause and effect.

However, much of the rest of the year felt in many ways like rinse-and-repeat, only bigger and more frustrating. The immediate future – 2026 – therefore looks like more of all of those perennial topics, especially surveillance. This time next year we will still be fighting over age verification, network neutrality, national identification systems, surveillance, data protection, security issues surrounding the Internet of Things and other “smart” devices, social media bans, and access to strong encryption, along with other perennials such as copyright and digital sovereignty.

It is however possible that AI might have gone quiet by then. Three types of reasons: financial, technical, and social.

To take finances first, concerns about the AI bubble have been building all year. In the latest of his series of diatribes about this and the “rot economy”, Ed Zitron writes that AI is bringing “enshittification Stage Four”, in which companies, having already turned on their users and customers, turn on their shareholders. Zitron traces the circular deals, the massive debt, the extravagant claims, and the disproportionately small revenues, and invokes the adage, if something can’t go on forever, it will stop.

On the technical side, no matter what Elon Musk predicts, more sober commentary at MIT Technology Review is calling for a “reset”. As Adam Becker writes in More Everything Forever, one thing that can’t go on ad infinitum is exponentially increasing computing power: exponential growth always hits resource limits. It is entirely possible that come 2027 we’ll have run out of all sorts of road on this current paradigm of “AI”. If so, expect to hear a lot more about how quantum is ready to remake the world. Generative AI will still be bigger ten years from now (just like the Internet in 2000, when the dot-com boom crashed), but it won’t become sentient and fix climate change.

Brief digression. On Mastodon, Icelandic web developer Baldur Bjarnason posts that he’s hearing people claim that studies showing that large language models won’t lead to AGI are “whitewashing creationism”. Uh…huh?

On the social side, pressure is mounting to curb the industry’s growth. US politicians including senator Bernie Sanders (I-VT) and Florida governor Ron DeSantis (R) are working to slow data center construction. Data centers guzzle power and water, as Zitron also explains, and nearby residents pay both directly and indirectly.

Other harms keep mounting up. The year’s Retraction Watch annual report includes myriad fake references. Salesforce fired 4,000 people, only now realizing that large language models can’t do their jobs; other companies nonetheless want to copy it. Organizers canceled a concert by Canadian musician Ashley MacIsaac after a Google AI summary wrongly said he’d been convicted of sexual assault.

At Utah’s Park Record, Cannon Taylor reported recently that in late October an AI summary indicated that a West Jordan, Utah police officer had morphed into a frog. Simple explanation: a Harry Potter movie playing in the background had been recorded by the officer’s bodycam during an investigation. Per the story, the summary seemed humanly written until, “And then the officer turned into a frog, and a magic book appeared and began granting wishes.”

The story goes on to report several different AI software trials. One system, used in Summit County, has a setting that inserts deliberate errors into the summaries to expose officers who don’t thoroughly check them. With that turned off, the time savings over having officers write their own summaries are considerable. Summit County turned it on. The time savings vanished. The county decided to pass.

Back when pranksters used to deface web pages for fun, a pastime more embarrassing than harmful, I thought it would be much worse when they learned to make small, hard-to-detect changes that poisoned the information supply.

AI is perfect for automating this.

In their “FakeParts” paper (PDF), researchers at the Institut Polytechnique de Paris discuss a disturbing example: subtle, localized AI-driven changes to otherwise real videos. These fakeparts blend in seamlessly; identifying them is far harder than spotting a complete fake, which on its own is hard enough. The researchers warn that subtle changes to facial expressions or gestures can change the emotional content of genuine statements, great for creating targeted attacks and sophisticated disinformation campaigns.

Cut to James Thurber‘s 1939 fable, The Owl Who Was God. If AI kills us, it will be because we trust it without applying common sense.

Illustrations: Barred owl (photo by Steve Bellovin, used by permission).


Review: More Everything Forever

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity
By Adam Becker
Basic Books (Hachette)
ISBN: 9781541619593
Publication date: April 22, 2025

A friend who would be 93 now used to say that the first time he’d read about the idea of living long enough to live forever was when he was about eight. Even at that age, he was a dedicated reader of science fiction, though he also said this was a habit so weird at the time that he had to hide it from his classmates.

Cut to 1992, when I reviewed Ed Regis’s book Great Mambo Chicken and the Transhuman Condition for New Scientist. Regis traveled the American southwest, finding cryonicists, guys building rockets in the desert, people wondering whether gravity was really necessary, figuring out how to make backups of our brains, spinning chickens in centrifuges to understand the impact of heavier-than-Earth gravity, that sort of thing. Regis called it “fin-de-siècle hubris”.

In 1992 it was certainly tempting to believe that this sort of craziness was somehow related to the upcoming millennium. Today’s techbros have no such excuse, yet their dreams are the same. This is the collection Timnit Gebru and Émile Torres have dubbed TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, all of it, as Adam Becker explains in More Everything Forever, more of a rebranding than a new vision of the future.

You could accordingly view Becker’s book as a follow-up, 30-odd years on. Regis could present all this as a mostly whacked-out bunch of dreamers, but since then it’s all become much more serious. Today’s chicken-spinners are armed with massive amounts of money and power and are willing to ignore the present suffering of millions if it means enabling their image of the future. We’ve met this crowd before, in the pages of Douglas Rushkoff’s Survival of the Richest. These are the folks who treat science fiction’s cautionary tales as a manual for what to build.

Becker does a fine job of tracing the history of the various TESCREAL strands. Most are older than one might expect, some with roots in millennia-old Christian beliefs. Isn’t fear of death, which Becker believes lies at the core of all this, as old as humanity? At last year’s CPDP, Mireille Hildebrandt called TESCREAL “paradise engineering”.

“If it violates physics, you can ignore it,” I was told at a conference on these topics after I asked how to distinguish the appealing-but-impossible from the well-maybe-someday. Becker proves the wisdom of this: his grounding in engineering and physics helps him provide essential debunking. Mars is too far away and too poisonous for humans to settle there any time soon. Meanwhile, he points out, Moore’s Law, which underpins projections by folks like Ray Kurzweil that computational power will continue to accelerate exponentially, is far more likely to end, like all other exponential trends. Physics, resource constraints, the increasing difficulty of finding new technological paradigms, and the fact that we understand so little of how the human brain or consciousness really works are all factors. The reality, Becker concludes, is that AGI is at best a long, long way off.

The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates about how advertising will work – perhaps mid-roll? The obvious answer is to place the ads between the list of nominees and opening the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. In 1998, the peak, 55.2 million, after VCRs, but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may also not be as uniquely important as it was historically. The Academy’s move fits into many other similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Centre for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lived in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. HateAid helped a client file suit against Google in August 2025, and sued X in July for failing to remove criminal antisemitic content. The Center for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).


Slop

Sometimes it doesn’t pay to be first. iRobot, the maker of the Roomba, has filed for Chapter 11 bankruptcy protection and been acquired by Picea, one of its Chinese suppliers, Lauren Almeida reports at the Guardian. The company’s value has cratered since 2021.

Given the wild enthusiasm that greeted the Roomba’s release in 2002, its decline seems incredible. Years before then, I recall an event where a speaker whose identity I don’t remember said that ever since he had mentioned the possibility of a robot vacuum, sometime in the 1960s, he’d gotten thousands of letters asking when it would be ready. There was definitely customer demand. It helped that the Roomba itself was kind of cute as it banged randomly into furniture. People named them, and took them on vacation. But, as often happens, the Roomba’s success attracted lower-cost competitors, and the first mover failed to keep up.

I got one in 2003. After a great few months, I realized that Roombas are not compatible with long hair, which ties them into knots that take longer to cut out than vacuuming. I gave it away within a year and haven’t tried again.

At Mashable, Leah Stodart warns that although the Roombas people already have will continue to work “for now”, users can’t be confident that this state of affairs will continue. Like so many other things that used to be things we owned and are now things we subscribe to (but still think we “buy”), newer-model Roombas are controlled by an app that the manufacturer can change or discontinue at will. She calls it “unplanned obsolescence”. Her advice not to buy a new one this year is sound from the consumer’s point of view, but hardly likely to help the company survive.

***

If generative AI is so great, why is everyone forcing it on us? The latest example, Luke James reports at Tom’s Hardware, is LG “smart” TVs whose users woke up the other day to find that a new update had installed “Copilot: Your AI Companion” without asking permission and that there was no option to remove it. The most you can do to disable it, James says, is keep your TV disconnected from the Internet.

There are of course many more, the automated summaries popping up everywhere being the most obvious. Then, Matthew Gault reports at 404 Media, a Discord moderator and an Anthropic executive added Anthropic’s Claude chatbot to a community for queer gamers, who had voted to restrict Claude to its own channel. Result: major exodus. Duh.

And, of course, as Lance Ulanoff reminds us at TechRadar, there is “AI slop” everywhere – music playlists, YouTube videos, ebooks – threatening people’s livelihoods even though, as Cory Doctorow has written, “AI can’t do your job. But an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job.” For a while, anyway: Microsoft is halving its sales targets for AI.

And thus we get “slop” as the word of the year, per Merriam-Webster. Any time companies are this intent on foisting something on us – chatbots, ads – you have to know that they’re intent on favoring their interests, not ours.

***

Last week, Customs and Border Protection published a notice in the Federal Register proposing new rules for foreigners traveling to the US on an ESTA (“Electronic System for Travel Authorization”) as part of the visa waiver program. It has drawn a lot of discussion in the UK, which is one of the 42 affected countries. Under the new rules, applicants must install CBP’s app, into which they must submit a massive load of “high-value” personal information. The list is long, allows for a so-far-imaginary future of DNA sampling, and expects you to be able to give five years’ worth of family members’ residences, phone numbers, and places of birth, and all the email addresses you’ve used for ten years. CBP thinks the average applicant should be able to complete it on their smartphone in 22 minutes. I think it would take hours of painful, resentful typing on a stupid touch keyboard, and even then I doubt I could fill it out with any certainty that the information I supplied was complete or accurate. Data collection at this scale makes it easy to find an error to use as an excuse to deny entry to or deport someone you want to get rid of. As Edward Hasbrouck writes at Papers, Please, “Welcome to the 2026 World Cup”.

“They have to be planning to use AI on all that data,” a friend commented last week. Probably – to build social graphs and find connections deemed suspicious. Privacy International predicts that the masses of data being demanded will in fact enable the AI tools necessary to implement automated decision making, and calls the proposals “disproportionate” for “a family’s visit to Disney World”.

One of the problems Hasbrouck highlights while opposing this level of suspicionless data collection is that CBP has not provided any way for would-be respondents to the Federal Register notice to examine the app’s source code. What other data might it be collecting?

As Hasbrouck adds in a follow-up, the rules the US imposes on visitors are often adopted by other countries as requirements for US travelers. In this game of ping-pong escalation, no one wins.

Simplification

We were warned this was coming at this year’s Computers, Privacy, and Data Protection, and now it’s really here. The data protection NGO Noyb reports that a leaked internal draft (PDF) of the European Commission’s Digital Omnibus threatens to undermine the architecture the EU has been building around data protection, AI, cybersecurity, and privacy generally. At The Register, Connor Jones summarizes the changes; Noyb has detail.

The EU’s workings are, as always, somewhat inscrutable to outsiders. Noyb explains that the omnibus tool is intended to allow multiple laws to be updated simultaneously to “improve the quality of the law and streamline paperwork obligations”. In this case, Noyb argues that the European Commission is abusing this option to fast-track far more substantial and contentious changes that should be subject to impact assessments and feedback from other EU institutions, as well as legal services.

If the move succeeds – the final draft will be presented on November 19 – Noyb believes it could remove fundamental rights to privacy and data protection that Europeans have been building for more than 30 years. Noyb, European Digital Rights, and the Irish Council for Civil Liberties have sent an open letter of objection to the Commission. The basic argument: this isn’t “simplification” but deregulation. The package would still have to be accepted by the European Parliament and a majority of EU member states.

As far as I can recall, business has never much liked data protection. In the early 1990s, when the first laws were being written, I remember being told data protection was a “tax on small business”. Privacy advocates instead see data protection as a way of redressing the power imbalance between large organizations and individuals.

By 1998, when data protection law was implemented in all EU member states, US companies were publicly insisting that the US didn’t need a privacy law in order to be in compliance. Companies could use corporate policies and sectoral laws to provide a “layered approach” that would be just as protective. When I wrote about this for Scientific American in 1999, privacy advocates in the UK predicted a trade war over this, calling it a failure to understand that you can’t cut a deal with a fundamental right – like the First Amendment.

In early 2013, it looked entirely possible that the period of negotiations over data protection reform would end with rollback. GDPR was the focus of intense lobbying efforts. There were, literally, 4,000 proposed amendments, so many that I recall being shown software written to manage and understand them all.

And then…Snowden. His revelations of government spying shifted the mood noticeably, and, under his shadow, when GDPR was finally adopted in 2016 and came into force in 2018, it expanded citizens’ rights and increased penalties for non-compliance. Since then, other countries around the world have used GDPR as a model, including China and several US states.

Those few states aside, at the US federal level data protection law has never been popular, and the pile of law growing around it – the Digital Services Act, the Digital Markets Act, and the AI Act – is particularly unwelcome to the current administration, which sees it as a deliberate attack on US technology companies.

In the UK the Data (Use and Access) Act, which passed in June, also weakened some data protection provisions. It will be implemented over the year to June 2026.

At its blog, the Open Rights Group argues that some aspects of the DUAA rest on the claim that innovation, economic growth, and public security are harmed by data protection law, a dubious premise.

Until this leak, it seemed possible that the DUAA would break Britain’s adequacy decision and remove the UK from the list of countries to which the EU allows data transfers. The rule is that to qualify a country must have legal protections equivalent to those of the EU. It would be the wrong way round if instead of the UK enhancing its law to match the EU, the EU weakened its law to match the UK.

There’s a whole secondary issue here, which is that a law is only useful if it’s enforced. Noyb actively brings legal cases to force enforcement in the EU. In the UK, privacy advocates, like ORG, have long complained that the Information Commissioner’s Office is increasingly quiescent.

Many of the EU’s changes appear to be aimed at making it easier for AI companies to exploit personal data to develop models. It’s hard to know where that will end, given that every company is sprinkling “AI” over itself in order to sound exciting and new (until the next thing comes along). If this thing comes into force, you have to think data protection law will increasingly apply only to small businesses running older technology that can’t be massaged to qualify for exemption.

I blame this willingness to undermine fundamental rights at least partly on the fantasy of the “AI race”. This is nation-state-level FOMO. What race? What’s the end point? What does it mean to “win”? Why the AI race, and not the net-zero race, the renewables race, or the sustainability race? All of those would produce tangible benefits and solve known problems of long standing and existential impact.

Illustrations: A drunk parrot in a Putney garden (photo by Simon Bisson; used by permission).

The gated web

What is an AI browser?

Or, in a more accurate representation of my mental reaction, *WTF* is an AI browser?

In wondering about this, I’m clearly behind the times. Tech sites are already doing roundups of their chosen “best” ones. At Mashable, Cecily Mauran compares “top” AI browsers because “The AI browser wars hath begun.”

Is the war that no one wants these things but they’re being forced on us anyway? Because otherwise…it’s just a bunch of heavily financed companies trying to own a market they think will be worth billions.

In Tim Berners-Lee’s original version, the web was meant to simplify sharing information. A key element was giving users control over presentation. Then came designers, who hated that idea. That battle between users’ preferences and browser makers’ interests continues to this day. What most people mean by the browser wars, though, was the late-1990s fight between Microsoft and Netscape, or the later burst of competition around smartphones. A big concern has long been market domination: a monopoly could seek to slowly close down the web by creating proprietary additions to the open standards and lock all others out.

Mauran, citing Casey Newton’s Platformer newsletter, suggests that Google specifically has exploited its browser to increase search use (and therefore ad revenues), partly by merging the address and search bars. I know I’m not typical, but for me search remains a separate activity. Most of the time I’m following a link or scanning familiar sites. Yes, when my browser history fills in a URL, I guess you could say I’m searching the browser history, but to me the better analogy is scanning an array of daily newspapers. Many people *also* use their browser to access cloud-based productivity software and email or play online games, none of which is search.

Nor are chatbots, since they don’t actually *find* information; they apply mathematics and statistics to a load of ingested text and create sentences by predicting the most likely next word. This is why Emily Bender and Alex Hanna call them “synthetic text extruding machines” in their book, The AI Con. I am in the business of trying to make sense of the impact of fast-moving technology, or at least of documenting the conflicts it creates. The only chatbot I’ve found of any value for this – or for personal needs such as a tech issue – is Perplexity, and that’s because it cites (or can be ordered to cite) sources one can check. There is every difference in the world between just wanting an answer and wanting the background from which to derive an answer that may possibly be new.

In any event, Newton’s take is that a company that’s serious about search must build its own browser. Therefore: AI companies are building them. Hence these roundups. Mauran’s pitch: “Imagine a browser that acts as your research assistant, plans trips, sends emails, and schedules meetings. As AI models become more advanced, they’re capable of autonomously handling more complex tasks on your behalf. For tech companies, the browser is the perfect medium for realizing this vision.”

OK, I can see exactly what it does for tech companies. It gives them control over what information you can access, how you use it, and whom you pay, and how much, for the services its agent selects (plus it gets a commission).

I can also see what it does for employers. My browser agent can call your browser agent and negotiate a meeting plan. Then they attend the meeting on our behalf and send us both summaries, which they ingest and file, later forwarding them to our bosses’ agents to verify we were at work that day. In between, they can summarize emails, and decide which ones we need to see. (As Charles Arthur quipped at The Overspill, “Could they…send fewer emails?”)

Remember when part of the excitement of the Internet was the direct access it gave to people who were formerly inaccessible? Now, we appear to be building systems to ensure that every human is their own gated community.

What part of this is good for users? If you are fortunate enough not to care about the price of anything, maybe it’s great to replace your personal assistant with an agentic web browser. Most of us have struggled along doing things for ourselves and each other. At Cybernews, Mayank Sharma warns that AI browsers’ intentional preemption of efforts to browse for yourself, filtering out anything they deem “irrelevant”, threatens the open web. Newton quantifies the drop in traffic news publishers are already seeing from generative AI. Will we soon be complaining about information underload?

At Pluralistic last year, Cory Doctorow wrote about the importance of faithful agents: software that is loyal to us rather than its maker. He particularly focused on browsers, which have gone from that initial vision of user control to become software that spies on us and reports home. In Mauran’s piece, Perplexity openly hopes to use chats to build user profiles and eventually show ads.

The good news, such as it is, is that from what I’ve read in writing this, most of these companies hope to charge for these browsers – AI as a subscription service. So avoiding them is also cheaper. Double win.

Illustrations: John Tenniel’s drawing of Davy Jones, sitting on his locker (via Wikimedia; published in Punch, 1892, with the caption, “AHA! SO LONG AS THEY STICK TO THEM OLD CHARTS, NO FEAR O’ MY LOCKER BEIN’ EMPTY!!”).

Review: The AI Con

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
By Emily Bender and Alex Hanna
HarperCollins
ISBN: 978-0-06-341856-1

Enormous sums of money are sloshing around AI development. Amazon is handing $8 billion to Anthropic. Microsoft is adding $1 billion worth of Azure cloud computing to its existing massive stake in OpenAI. And Nvidia is pouring $100 billion in the form of chips into OpenAI’s project to build a gigantic data center, while Oracle is borrowing $100 billion in order to give OpenAI $300 billion worth of cloud computing. Current market *revenue* projections? $85 billion in 2029. So they’re all fighting for control over the Next Big Thing, which projections suggest will never pay off. Warnings that the AI bubble may be about to splatter us all are coming from Cory Doctorow and Ed Zitron – and the Daily Telegraph, The Atlantic, and the Wall Street Journal. Bain & Company says the industry needs another $800 billion in investment now and $2 trillion by 2030 to meet demand.

Many talk about the bubble and the economic consequences if it bursts. Few talk about the opportunity costs as AI sucks money and resources away from other things that might be more valuable. In The AI Con, linguistics professor Emily Bender and DAIR Institute director of research Alex Hanna provide an exception. Bender is one of the four authors of the seminal 2021 paper On the Dangers of Stochastic Parrots, which arguably founded AI skepticism.

In the book, the authors review much that’s familiar: the many layers of humans required to code, train, correct, and mind “AI”: the programmers, designers, data labelers, and raters, along with the humans waiting to take over when the AI fails. They also go into the water, energy, and labor demands of data centers and present approaches to AI.

Crucially, they avoid both doomerism and boosterism, which they understand as two sides of the same coin. Both the fully automated hellscape the Doomers warn against and the Boosters’ world governed by a benign synthetic intelligence ignore the very real harms taking place at present. Doomers promote “AI safety” using “fake scenarios” meant to frighten us. Think HAL in the movie 2001: A Space Odyssey or Nick Bostrom’s paperclip maximizer. Boosters rail against the constraints implicit in sustainability, trust and safety organizations within technology companies, and government regulation. We need, Bender and Hanna write, to move away from speculative risks and toward working on the real problems we have. Hype, they conclude, doesn’t have to be true to do harm.

The book ends with a chapter on how to resist hype. Among their strategies: persistently ask questions such as how a system is evaluated, who is harmed and who benefits, how the system was developed and with what kind of data and labor practices. Avoid language that humanizes the system – no “hallucinations” for errors. Advocate for transparency and accountability, and resist the industry’s claims that the technology is so new there is no way to regulate it. The technology may be new, but the principles are old. And, when necessary, just say no and resist the narrative that its progress is inevitable.