Long Island AI

The peak of dot-com mania was identifiable even at the time: it was in January 2000, when AOL bought Time Warner for $183 billion. Peak podcast mania came in 2020, when Spotify paid Joe Rogan $100 million (per rumor). It’s now time to call the peak of AI mania: Allbirds, which was loved a few years back for making sustainable wool shoes, has sold off all stock and assets and is rebranding as Newbird AI to sell “AI compute infrastructure”. At The Overspill, Charles Arthur compares this to 2017, when Long Island Iced Tea renamed itself Long Blockchain, arguably the peak of bitcoin mania. His theory – that all Newbird AI has left is a bunch of empty warehouses, so it hopes someone will put data centers in them – is the only possibility that makes any sense.

Most of those bits of history led to more financial silliness and then a crash. The AOL-Time Warner marriage was notoriously catastrophic. AOL, which was supposed to modernize old, slow Time Warner, was in fact already becoming obsolete as users shifted to new technology (broadband) and the wider Internet. By 2003, the company was selling itself off in pieces. Long Blockchain cratered; a year later it had abandoned its plans to buy bitcoin mining equipment, and its delisting by NASDAQ was accompanied by the SEC’s charging three people with insider trading. Joe Rogan of course remains hugely popular and Spotify is doing fine, but it’s certainly not controlling the business of podcasting as it appeared to hope it would.

The day following the announcement, Newbird AI’s shares rose as much as 700% (briefly), partly on the additional news that it will close a $50 million funding round in the second quarter of 2026. Even CNBC calls this “pivot” bizarre.

For our purposes, it doesn’t matter if this wacky strategy works (pick your definition of “works”). When the share price of a basically assetless company goes up 700% because it’s added “AI” to its name, we have reached the level of absurdity that marks the peak of every bubble.

It’s not the only sign (or the only absurdity). In the UK, last month Aisha Down reported at the Guardian that many of the efforts prime minister Keir Starmer – and Rishi Sunak before him – have announced to “mainline AI into the veins” of the British economy are based on what she calls “phantom investments”. She reported faithfully that the Department for Science, Innovation and Technology said it “rejected these assertions”, but this week we learned that at least one piece of her reporting was absolutely correct.

This week’s news revolves around a project called Stargate, announced in September 2025 and involving the UK-headquartered AI infrastructure provider Nscale, Microsoft, Nvidia, and OpenAI. OpenAI has now announced it is putting the project – for which it was supposed to build a data center – on hold. It blames energy prices and, as Joseph Bambridge reports at Politico, the government’s decision last month to shelve proposals that would have allowed data miners to use copyrighted content unless its owner opted out. The proposal was widely opposed by the UK’s creative industry, and was indefinitely delayed in a report issued on March 18 (ReedSmith has a useful legal summary).

The loss – or delay – of Stargate is a rounding error to companies the size of Microsoft and Nvidia. It’s more significant for Nscale, which according to CNBC raised $2 billion in a funding round just last month with investments from Nvidia, Dell, Lenovo, and other much less famous names; at the same time, it added former Google and Facebook ad business builder Sheryl Sandberg and former Facebook global policy head Nick Clegg to its board. The new funding raised Nscale’s valuation to $14.6 billion. At his blog, Ed Zitron calls the ability to raise funding for a data center that doesn’t exist “weird”, and suggests that AI companies should admit that their chatbots are just “regular old software”.

Meta king Mark Zuckerberg, last seen losing money on the former next-big-thing metaverse, is, Meghan Bobrowsky reports at the Wall Street Journal, building an AI agent to get him information and answers faster. This comes on top of last month’s announcement that Meta is buying the AI agent network Moltbook, seemingly mostly in order to hire its two founders for Meta’s Superintelligence Labs. This week, Zuckerberg also announced he was building an AI clone of himself to interact with staff so they can feel more connected to him. In reality, it seems more likely to make corporate management feel like automated customer service.

The question about bubbles is always: is this one like railroads or like tulips? Tulips left nothing of value behind while railroads went on to be transformative. In 2001, almost everyone knew the Internet would go on growing in size and importance. In the AI case, despite the current silliness, over time we will learn how to use these new capabilities and limit the downsides. But first, we will have to deal with the fallout of the fact that the finances do not add up.

Illustrations: Tulips (via Wikimedia).

Also this week:
At Skeptical Inquirer, I go to this year’s Gathering 4 Gardner.
At the Plutopia podcast, we interview Tereza Pultarova, who reports on developing military technology in Ukraine.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

References for “What We Talk About When We Talk About AI”

By slide number as presented at Greenwich Skeptics, 2026-04-13:

3 – Ofcom, Adults’ Media Use and Attitudes Report, April 2, 2026.

4 – What is “AI”? Clockwise from top left: Eliza (1966), by Joseph Weizenbaum, which simulated a psychotherapist; a modern data center; Sidney Harris’s 1977 cartoon, “Then a miracle occurs”; protein folding by DeepMind’s Alphafold in 2024; Privacy International’s mockup of an AI-assisted surveillance dashboard.

6 – Automata: Ancient Greek automata; mechanical singing birds and other musical machines.

7 – Companions, clockwise from top left: Wilson, Tom Hanks’ volleyball in the 2000 movie Cast Away; a Roomba (2002); C3PO and R2D2 from Star Wars; Furbys (1998); a Tamagotchi pet (1996).

9 – Helpers: Mickey Mouse’s enchanted broom in “The Sorcerer’s Apprentice” from the 1940 movie Fantasia; Rosey the Robot in the 1962 TV series The Jetsons.

10 – Guardians: a Golem; a guardian angel; HAL, from the 1968 movie 2001: A Space Odyssey.

11 – Killers: The Price of Privacy: Re-evaluating the NSA, 2014; the Terminator; IEEE Spectrum.

12 – Frauds: The Mechanical Turk; automata by Jacques de Vaucanson; Clever Hans.

13 – Asimov’s Laws of Robotics, first stated explicitly in his 1942 short story “Runaround”.

14 – Arthur C. Clarke’s three laws, first formulated in “The Hazards of Prophecy” in 1962, revised in 1973.

15 – Alan Turing, Computing Machinery and Intelligence, 1950.

16 – The Turing test, from “Computing Machinery and Intelligence”.

17 – The Unperson of the Year by James Boyle at TechDirt.

18 – The scientists who assembled at Dartmouth for the first workshop on artificial intelligence in 1956, among them Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Trenchard More, John McCarthy, and Claude Shannon.

19 – Personal conversation with John McCarthy, 2006.

20 – Your A.I. Radiologist Will Not Be With You Soon by Steve Lohr at the New York Times; Ed Zitron.

23 – Demis Hassabis, quoted in the Guardian on DeepMind’s mission at its founding in 2010.

24 – Ken MacLeod, The Cassini Division, 1998.

25 – Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, 2014.

26 – Charles Stross, Dude, You Broke the Future, 2017.

27 – Turing, “Computing Machinery and Intelligence”.

28 – Games: chess, Jeopardy, Go (2016).

29 – Clockwise from top left: protesters freeze a Cruise robotaxi by placing a traffic cone on its hood; Microsoft services agreement; BBC; Nature; Hacker Noon; Red Dog Security; Waymos Freeze in Place, Snarl Traffic En Masse During Saturday’s Citywide Power Outage; The Register.

30 – Cory Doctorow, at Pluralistic.

31 – 404 Media

32 – Books: Ghost Work, by Mary L. Gray and Siddharth Suri (2019); Behind the Screen, by Sarah T. Roberts (2019); The Costs of Connection, by Nick Couldry and Ulises A. Mejias (2020); Atlas of AI, by Kate Crawford (2021).

33 – Books: Automating Inequality, by Virginia Eubanks (2018); Black Software, by Charlton R. McIlwain (2019); Unmasking AI, by Joy Buolamwini (2023); Algorithms of Oppression: Why Search Engines Reinforce Racism, by Safiya Umoja Noble (2018).

34 – Chart showing the flow of money in the LLM ecosystem. Drawn by Edward Hasbrouck for the (US) National Writers Union.

35 – Good ongoing coverage of behind-the-scenes human workers at Rest of World.

36 – 1X’s Neo robot home servant, launched 2025.

38 – London’s Ringways: The first map of the capital’s unbuilt motorways.

39 – Ringways.

41 – Exponential future: Ray Kurzweil’s projection of the “law of accelerating returns”; Mickey Mouse drowns in exponential growth in “The Sorcerer’s Apprentice” in Fantasia, 1940.

42 – Turbli.

43 – Present harm, clockwise from top left: King’s College London; Ars Technica; Bureau of Investigative Journalism; Anadolu Ajansi; Toronto Star; NBC News; Guardian.

44 – Madeleine Clare Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (2019).

45 – Mireille Hildebrandt, keynote at Computers, Privacy, and Data Protection 2025.

46 – Replace by Clawd.

Further reading:

Becker, Adam, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity (2025).

Bender, Emily M., and Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want (2025).

Booth, Robert, at the Guardian: Number of AI chatbots ignoring human instructions increasing, study says.

Broussard, Meredith, Artificial Unintelligence: How Computers Misunderstand the World (2018) and More Than a Glitch: Confronting Race, Gender, and the Ability Bias in Tech (2024).

Couldry, Nick, and Ulises A. Mejias, Data Grab: The New Colonialism of Big Tech and How to Fight Back (2024).

Darling, Kate, The New Breed (2021).

Grossman, Wendy M., Finding the gorilla.

Jones, Phil, Work Without the Worker (2021).

Marx, Paris, the Tech Won’t Save Us podcast.

O’Neil, Cathy, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016).

Shane, Janelle M., You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place (2019).

Standage, Tom, The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine (2003).

Strengers, Yolanda, and Jenny Kennedy, The Smart Wife (2020).

Stross, Charles, Shaping the Future (2007).

Waste management

This week, Amazon announced it would cripple a bunch of older Kindle models. Users will be able to go on reading the books that are already stored on their devices, but they won’t be able to add anything new, whether it’s borrowed, bought, or downloaded. The switch will be toggled on May 20, and applies to Kindle ereaders and Kindle Fires released in or before 2012. These devices had already been locked out of the store in 2022.

There is no benefit to consumers from doing this, even though Amazon is offering a substantial discount on newer devices for those who want to switch. There are presumably benefits to Amazon – namely, that it can update its store and other software without having to cater to older devices, plus sell an extra bunch of new ones. But overall, it’s a hostile move that creates a new pile of electronic waste out of functioning hardware. One of these days, that’s going to be your car, when some auto manufacturer decides that a “legacy” model isn’t worth supporting any more.

One option is to stop buying devices whose manufacturers demand that you surrender ownership in favor of their ongoing control. The other main possibility is to continue spreading right-to-repair laws, so that a company like Amazon (or John Deere) wishing to shed the responsibility of supporting older devices would be required to open them up to their users and the third-party ecosystem that would doubtless form around them. As the population of Internet of Things devices continues to grow around the world, there will be much more of this – when we need much less. It’s absolutely maddening. Try a Kobo and Project Gutenberg.

***

It came out this week that language added in October 2025 to Microsoft’s terms of use says: “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”

This is like the disclaimers that psychics and mediums in the UK issue to prospective customers to avoid charges of exploiting the credulous. A software company, though…if it’s going to admit up front that the service it’s providing can’t be relied on, why force it on people in the first place?

Obviously, the point here is to cover the company’s ass in case of lawsuits. It fits right in with companies’ general refusal to accept liability for problems created by the software they make. I’m glad the company is warning people, but the better solution – if the company can’t bring itself to discontinue Copilot altogether – would be either to remove it and let people download it if they want it, or to leave it turned off by default, warning people up front instead of in terms of use that normally go unread. Instead, what we have to do is search for instructions to remove it and hope Microsoft hasn’t made following them impossible.

***

In the midst of a lot of science fictional hype-scares about “AI” along come some more real concerns. We’ll start “small”: The New York Times has done a study that finds that Google’s AI Overviews are right 90% of the time. Sounds not-so-bad, until you remember the Law of Truly Large Numbers. A 90% success rate sounds fine until the remaining 10% is applied to query volumes in the trillions; then the overall result is, as Ryan Whitwam puts it in summarizing the story for Ars Technica, “hundreds of thousands of lies going out every minute of the day”. That’s automation for you. It used to be that you needed an army of bots to disseminate misinformation at any sort of scale, and even then it was less effective because it wasn’t backed by an apparently authoritative name (see also Microsoft, above).
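For the arithmetic-minded, here’s a minimal back-of-envelope sketch of that claim. The daily query volume and the share of queries that trigger an AI Overview are my own order-of-magnitude assumptions, not reported figures; only the 10% error rate comes from the study:

```python
# Rough scale check on "hundreds of thousands of lies per minute".
# QUERIES_PER_DAY and OVERVIEW_SHARE are illustrative assumptions.

QUERIES_PER_DAY = 10_000_000_000  # assumed: ~10 billion searches/day
OVERVIEW_SHARE = 0.20             # assumed: 1 in 5 queries shows an AI Overview
ERROR_RATE = 0.10                 # from the NYT study: 10% of Overviews are wrong

MINUTES_PER_DAY = 24 * 60

wrong_per_day = QUERIES_PER_DAY * OVERVIEW_SHARE * ERROR_RATE
wrong_per_minute = wrong_per_day / MINUTES_PER_DAY

print(f"wrong answers per day:    {wrong_per_day:,.0f}")     # 200,000,000
print(f"wrong answers per minute: {wrong_per_minute:,.0f}")  # ~138,889
```

Cut either assumption by a factor of ten and the output is still north of ten thousand confident wrong answers a minute.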

More alarming are the reports surfacing about generative AI’s effectiveness at aiding and amplifying online crime. In November, Google announced that internal researchers had found hackers experimenting with a script that interacts with Gemini’s API to create just-in-time modifications. Last month, a piece at Hacker Noon compared this to the polymorphic viruses of earlier eras; but again, this appears to be a step up in sophistication and speed.

At the same time, Casey Newton reports at Platformer, Anthropic’s latest model is likely the first of many that can find and exploit vulnerabilities in software in “ways that far outpace the efforts of defenders”. The announcement, which appears to have been inadvertent, was quickly followed by another, formally launching the new model, Claude Mythos, and Project Glasswing. The latter, per Newton, gives more than 40 of the biggest technology companies early access so they can use the model to find and patch vulnerabilities in both their own systems and open source systems that underpin digital infrastructure.

Unlike most AI-related scare stories, this one is backed by people who are usually sensible, such as Alex Stamos, formerly chief security officer at Facebook and Yahoo, and other security practitioners. These warnings are not coming from AI company CEOs with a concept to sell.

So: on the one hand, (hopefully) better software; on the other, (potentially) newer, more dangerous attacks. Ugh. To restore calm, I recommend Terry Godier’s essay The Last Quiet Thing.

Illustrations: Bill Steele, who in 1970 wrote the environmental anthem Garbage!.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The Silicon Valley chronicles

We should have seen this coming. At Platformer, Casey Newton reports that Meta has discussed pulling funding for the Oversight Board after 2028, having already reduced it “significantly” this year. Who needs fripperies like independent governance after reporting billions of losses in its Reality Labs unit, the bit responsible for the seemingly now-abandoned metaverse, the smartglasses, and, of course, AI? Per Newton, there are ongoing discussions about continuing the Oversight Board somehow, perhaps by opening it up to adjudicate for other platforms.

The reality is the Oversight Board’s moment has probably passed. In 2018, creating it was an effective public relations response to a series of scandalous revelations, beginning in 2017 with Carole Cadwalladr‘s work exposing Cambridge Analytica‘s use of Facebook to collect personal data to target political advertising. Its biggest moment was probably in 2021, when the Oversight Board backed Facebook‘s decision to ban then-“former guy” Donald Trump for two years following the January 6 insurrection. Twitter banned him, too, and there was a brief crackdown on postings calling for violence. Much public criticism followed from whistleblowers such as Frances Haugen and Sarah Wynn-Williams (2025), and from the 2019 Netflix documentary The Great Hack.

Today, though, the big technology companies seem not to care. Maybe they will again if usage shrinks enough – the BBC reports an Ofcom study showing that UK adults are actively posting 61% less than last year. But defunding the Oversight Board seems consistent with the general decline of content moderation on Facebook and elsewhere. Neither fines, nor spreading age verification laws, nor other constraints can be headed off by funding an Oversight Board that is already rarely mentioned.

Besides, the years since 2018 have seen the “billionaire class” take a hard turn to the libertarian right; they show little inclination to be constrained personally or corporately by national laws or governments.

In a January 2025 interview, Netscape creator and venture capitalist Marc Andreessen provided this explanation: US Democrats “broke the deal”. That is, Silicon Valley supported Democrats as long as they left technology companies free of regulation. (Democrats might reply that they were responding on behalf of the public to changes in Silicon Valley companies’ behavior.)

In addition, Connie Loizos reports at TechCrunch that the billionaires who signed Warren Buffett’s and Bill Gates’s Giving Pledge would now like it forgotten. At Current Affairs, Nathan J. Robinson most fears Anduril founder Palmer Luckey’s enthusiasm for incorporating AI and robotics into more and bigger weapons. Anduril was founded in 2017, the year before Google employees petitioned the company to exit its contract with Project Maven, the Pentagon’s effort to harness machine learning and automatic targeting. By 2021, Tom Simonite was reporting at Wired that Google was bidding on military contracts. A few weeks back, at the Financial Times, Jemima Kelly called Silicon Valley billionaires “enablers, keeping us distracted and dumb”, citing a recent podcast interview in which Andreessen said he never engages in introspection.

Available to link all this together is Jacob Silverman’s new book, Gilded Rage: Elon Musk and the Radicalization of Silicon Valley. Musk is not the sole focus of Silverman’s “guided tour through America’s self-designated innovator class” and its resentment of government and power to change it. Much of the book, which Silverman began researching in 2023, focuses on other high-dollar funders such as DOGE co-mastermind Vivek Ramaswamy (whom Silverman introduces as the boss who fired him from an early job), as well as members of the “PayPal Mafia” including Peter Thiel and David Sacks. Silverman also includes chapters on Musk’s acquisition of Twitter, Saudi Arabia’s growing connections to Silicon Valley, the cryptocurrency boom, the fight over TikTok’s US presence, a so-far failed plan to take over California’s Solano County, Sam Bankman-Fried’s rise and fall, and the choice of JD Vance as Trump’s running mate. The book ends with the donors’ success – that is, Trump’s election in 2024.

With Gilded Rage, Silverman revives a formerly niche publishing subgenre that documented the beginnings of this shift. First on the scene in the US, to the best of my knowledge, was northern California native Paulina Borsook, with a 1996 essay for Mother Jones, Cyberselfish. In it, and in the subsequent 2000 book, she laid out Silicon Valley’s refusal to recognize the government assistance and military funding that enabled its wealth and growth. To Borsook, who described herself in the 1996 book Wired Women as the “token hippie feminist writing for Wired“, Silicon Valley’s turn to the right and distaste for government were already visible even then. In November, David Streitfeld profiled Borsook at the New York Times and noted the price she paid for her contrarian view.

In a 1995 essay The Californian Ideology, Richard Barbrook and Andy Cameron pushed Europe to take a different path.

Silverman’s most recent predecessor is Douglas Rushkoff’s 2022 skewering of “The Mindset” in Survival of the Richest. In Rushkoff’s telling, these high-wealth individuals are planning their safety and/or escape during and after “the incident” – that is, whatever catastrophe is going to wipe us all out.

So Silverman is less documenting a shift than he is describing an outcome: a political wave whose emergence into the mainstream only seems sudden.

Illustrations: Political cartoon from 1904, showing Standard Oil’s stranglehold on US industry (via Wikimedia).

Also this week:
At Plutopia, we interview Paulina Borsook.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Eating the web

“Traffic to my blog has plummeted,” a friend said recently. Over decades, he’s built a thriving community, and his core users persist. But Google was crucial for bringing in new readers – and its introduction of AI and changes to its algorithm have punished small sites.

This week, Sean Hollister reports at The Verge that Google is using its AI to replace the headlines on news stories. For Google, it is, as Charles Arthur comments at The Overspill, a “small” and “narrow” experiment – until it becomes a feature. For The Verge, however, the impact is noticeable: the headlines it crafts to market its journalists’ work are being replaced with boring titles that do not accurately convey the articles’ content.

Then, the week before last, Joe Toscano reported at Forbes that Google has patented a system that uses AI to rewrite website landing pages to produce customized versions for its users. Toscano links this to Google’s earlier announcement of a protocol to make websites’ structure more readable to AI agents. Taken together, he suggests, the two elements allow Google to break websites apart into their component parts for reassembly by AI agents into whatever version they identify as best for the user they represent.

In the early 1990s, someone I met directed me to read the work of Robert McChesney, whose books recount the cooption and commercialization of radio and television, also originally conceived as democratic, educational media. Helping to prevent a similar outcome for the Internet is a lot of what net.wars has always been about. Now, Google, which would not exist without the open web, wants to eat the whole thing.

***

On Tuesday, a jury in a New Mexico court found Meta liable for misleading consumers about the safety of its platforms and enabling harm including child sexual exploitation, as Katie McQue reports at the Guardian. The jury ordered the company to pay $375 million in civil penalties. Meta will appeal. Snapchat and TikTok, which were also accused, settled before the trial began.

The New Mexico attorney general’s office says it intends to pursue changes to platform design including age verification and “protecting minors from encrypted communications that shield bad actors”.

On Wednesday, a jury in Los Angeles found YouTube liable for deliberately designing an addictive product. As Dara Kerr reports at the Guardian, the case was brought by a 20-year-old woman who claimed her addiction to Instagram and YouTube began at age six, damaging her relationships with her family and at school and causing her to become depressed and engage in self-harm. The jury awarded her $6 million, split between Meta (70%) and YouTube (30%). Both companies say they will appeal.

They will have to, because, as Kerr reported in January, there are more of these trials to come, and even for trillion-dollar companies thousands of such awards can add up to real money. In a consolidated case in California state and federal courts, thousands of families accuse social media companies of harming children. Reuters reports that more trials are scheduled: one brought by a school district in Breathitt County, Kentucky, in federal court against Meta, ByteDance, Snapchat, and Google, and one in state court in California in July against Instagram, YouTube, TikTok, and Snapchat.

In January, the Tech Oversight Project reported that newly unsealed documents contained the “smoking gun” evidence – that is, internal email discussions – that the four companies deliberately designed their products to be addictive and failed to provide effective warnings about social media use. Certainly, the leaked documents make it sound like a plan. Tech Oversight quotes one: “Onboarding kids into Google’s Ecosystem leads to brand trust and loyalty over their lifetime.” It’s hard not to see the commonality with Joe Camel and so many other marketing strategies.

Key to these cases is Section 230 – the clause in the Communications Decency Act that shields online services from liability for the material their users post and allows them to moderate content in good faith. The plaintiffs argued – successfully in New Mexico – that the law does not shield the platforms from liability for their design decisions. The social media companies naturally tried to argue that it does.

At his blog, law professor Eric Goldman discusses the broader impact of these bellwether cases. As he says, whatever changes the social media companies feel forced to make by the potential liability of myriad jury trials and new laws may help some victims but will almost certainly hurt other groups who were not represented at the trial. Similarly, at Techdirt, Mike Masnick warns that features like autoscrolling and algorithmic recommendations are not inherently harmful; it’s the content they relentlessly serve that is really the issue; cue the First Amendment. And few who are not technology giants can afford to face jury trials and fines. Are we talking a regime under which every design decision has to go through lawyers?

In a posting summarizing the history of S230, Goldman predicts that age verification laws will reshape the Internet of 2031 or 2036 beyond recognition, killing most of what we love now. So much doom, so little time.

Illustrations: The volcano of Stromboli, on which JRR Tolkien based Mount Doom in The Lord of the Rings (by Steven J. Dengler, via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Turn left at the robot

It’s easy to forget, now that so many computer interfaces seem to be getting more inscrutable, that in the early 1990s the shift from the command line to graphical interfaces and the desire to reach the wider mass market introduced a new focus on usability. In classics like Don Norman’s The Design of Everyday Things, a common idea was that a really well-designed system should not require a manual because the design itself would tell the user how to operate it.

What is intuitive, though, is defined by what you’re used to. The rumor that Apple might replace the letters on keys with glyphs is a case in point. People who’ve grown up with smartphones might like the idea of glyphs that match those on their phones. But try doing technical support over the phone and describing a glyph; easier to communicate the letters.

Those years in which computer interfaces were standardized relied on metaphors based on familiar physical items: a floppy disk to save a file, an artist’s palette for color choices. In 1993, leading software companies like Microsoft and Lotus set up usability labs; it took watching user testing to convince developers that struggling to use their software was not a sign the users were stupid.

With that background, it was interesting to attend this year’s edition of the 20-year-old Human-Robot Interaction conference. Robots don’t need metaphors; they *are* the thing itself. Although: why shouldn’t a robot also have menu buttons for common functions?

In the paper I found most interesting and valuable, Lauren Wright examined the use of a speaking Misty robot to deliver social-emotional learning lessons. Wright’s group tested the value of deception – that is, having the robot speak in the first person of its “family”, experiences, and “emotions” – versus a more truthful presentation, in which the robot is neutral, tells its stories in the third person, refers to its programmers, and professes no humanity. The researchers were testing the widely held assumption that kids engage more with robots programmed to appear more human. They found the opposite: while both versions significantly increased learning, the kids who used the factual robot showed more engagement and higher scores, in the sense of using concepts from the lesson in their answers. This really shouldn’t be surprising. Children don’t in general respond well to deception. Really, who does?

The children’s personal reactions to the robots were at least as interesting. In Michael F. Xu‘s paper, the researchers held co-design sessions and then installed a robot in eight family homes to act as a neutral third-party enforcer issuing timely reminders on behalf of busy parents. Some of the families did indeed report that the robot’s reminders got stuff done more efficiently. On the other hand, the experiment was short – only four days – and you have to wonder if that would still be true after the novelty wore off. There were hints of this from the kids, some of whom pushed back. One simply bypassed a robot reminding him of the limits on his TV viewing by taking the TV upstairs, where the robot couldn’t go. Another reacted like I would at any age and told the robot to “shut up”.

The fact versus fiction presentation included short video clips of some of the kids’ interaction with the robot tutor. In one, a boy placed his hands on either side of the robot’s “face” while it was talking and kept moving its head around, exploring the robot’s physical capabilities (or trying to take its head off?). The speaker ignored this entirely, but the sight hilariously made an important point: the robot’s physical form speaks even when the robot is silent.

We saw this at We Robot 2016, when a Jamaican lawyer asked Olivier Guilhem, from Aldebaran Robotics, which makes Pepper, “Why is the robot white?” His response: “It looks clean.” This week, one paper tried to tease out how “representation bias” – assumptions about gender, skin tone, dis/ability, accessibility, size, age – affects users’ reactions. In the dataset used to train an AI model, bias may be baked in through the historical record. With robots, bias can also present directly through the robot’s design, as Yolande Strengers and Jenny Kennedy showed in their 2020 book The Smart Wife. Despite its shiny, unmistakable whiteness, Pepper’s shape was ambiguous enough for its gender to be interpreted differently in different cultures. In the HRI paper, the researchers concluded that biases in robot design could perpetuate occupational stereotypes – “technological segregation”. They also found their participants consistently preferred non-skin tones – in their examples, silver and light teal.

“Who builds AI shapes what AI becomes,” said Ben Rosman, who outlined a burgeoning collaborative effort to build a machine learning community across Africa and redress its underrepresentation. The same goes for robots: many, many cultural norms affect how humans interact with them. That information is signal, not noise, he says, and should be captured to enable robots to operate across wide ranges of human context without relying on “brittle defaults” that interpret human variation as failures. “Turn left at the robot” makes perfect sense once you know that in South Africa “robot” is the word for what is elsewhere called a traffic light.

Illustrations: Rosey, the still-influential “old demo model” robot maid in The Jetsons (1962-1963).

Also this week:
At the Plutopia podcast, we interview Marc Abrahams, founder of the Ig Nobel awards.
At Skeptical Inquirer, the latest Letter to America finds David Clarke conducting the English folklore survey.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot, 2026 edition

On the eve of We Robot 2026, here are links to my summaries of every year since 2012, the inaugural conference, except 2014, which I missed for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early and long about the coming conflicts in robots, law, and policy.

2025: Predatory inclusion: In Windsor, Ontario, a few months into the new US administration, the sudden change in international relations highlights the power imbalances inherent in many of today’s AI systems. Catopromancy: in workshops, we hear a librarian propose useful AI completely out of step with today’s corporate offerings, and mull how to apply existing laws to new scenarios.

2024 No conference.

2023 The end of cool: after struggling to design a drone delivery service that had benefits over today’s cycling couriers, we find ourselves less impressed by robots that can do somersaults but not anything obviously useful; the future may have seemed more exciting when it was imaginary.

2022 Insert a human: following a long-held conference theme about “humans in the loop”, “robots” are now “sociotechnical systems”. Coding ethics: Where Asimov’s laws were just a story device, in workshops we try to work out how to design a real ethical robot.

2021 Plausible diversions: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability? Is the juice worth the squeeze?: In workshops, we mull how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone: AI exploitation and bias embedded in historical data become what one speaker calls “unregulated experimentation on humans…without oversight or control”.

2019 Math, monsters, and metaphors. We dissect the trolley problem and find the true danger on the immediate horizon is less robots, more the “pile of math that does some stuff” we call “AI”. The Algernon problem: in workshops, new disciplines joining the We Robot family remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception. We return to the question of what makes robots different and revisit Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car. Late, noisy, and wrong: in workshops, engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate: Discussion of risks this year focused on the intermediate situation, when automation and human norms must co-exist.

2016 Humans all the way down: Madeleine Clare Elish introduces “moral crumple zones” in a paper that will resonate through future years. The lab and the world: in workshops, Bill Smart uses conference attendees in formation to show why getting a robot to do anything is difficult.

2015 Multiplicity: when in the life of a technology is the right time for regulatory intervention?

2014 Missed conference

2013 Cautiously apocalyptic: Diversity of approaches to regulation will be needed to handle the diversity of robots, and at the beginning of cloud robotics and full-scale data collection, we envision a pet robot dog that can beg its owner for an upgraded service subscription.

2012 A really fancy hammer with a gun. At the first We Robot, we try to answer the fundamental question: what difference do robots bring? Unsentimental engineer Bill Smart provided the title.

Power games

UK prime minister Keir Starmer’s desire to bring in a UK digital ID is awake again with the announcement that the government plans to make the IDs available for “a handful of uses” before the next general election, due by 2029. The requisite consultation closes May 5.

At Computer Weekly, Lis Evenstad adds a summary and detail about the consultation. Government by app! What could possibly go wrong?

Among other new information in the consultation: the age for being able to get a digital ID could drop below 18, perhaps even to issuance at birth. There might be a single unique identifier to enable linking data throughout government systems. Darren Jones, the prime minister’s chief secretary, talks up smoother access to government services and the ability to share only the piece of data that’s needed for a specific purpose, but says nothing about the risks of tying everything to a single identifier whose compromise can reverberate throughout your life.

In his Guardian piece, Stacey quotes Jones, who positions the digital ID as a way to improve fairness in access to government services, which he says accrue disproportionately to “pushy” people with time, patience, and energy.

This sounds good until you read Government Digital Service co-founder Tom Loosemore‘s blog, where he notes that creating that sort of stonewalling bureaucracy is often a deliberate strategy to manage demand. Loosemore argues that agentic AI will force an end to this strategy because agents will have unlimited patience and “reduce the cost to citizens of appeals, challenges, and calculations etc to near-zero”. Instead of having to manually dig through financial records, Loosemore finds in an experiment that an AI agent can scan the documents, find the information, and present only the data required to establish the citizen’s entitlement to benefits.

He believes AI agents will also bring a new level of transparency (and perhaps “pushiness”): “AI Agents will always dig out that 93 page PDF of guidance hidden.” Governments, he writes, will be forced to “clarify and tighten policies & processes, with all the painful political trade-offs therein”.

Or: will budget-protecting governments adopt their own agentic AIs to move and re-bury the stuff they don’t want applicants to find and recalibrate their requirements to make them resistant to automation? Seems just as likely, really.

While all that was going on, significant votes took place on the future of access to online content. On Tuesday, Jennifer McKiernan reports at the BBC, MPs rejected the proposed social media ban for under-16s, which would have been added to the Children’s Wellbeing and Schools bill. Instead, the government is continuing to collect information from the consultation it launched on March 2, which closes on May 26.

Some MPs seem to have been persuaded by the argument – mooted by, among others, the National Society for the Prevention of Cruelty to Children – that banning children from social media will simply push them to find darker, less visible unregulated online spaces with less in the way of protection or moderation. The Commons did, however, support a government proposal to give the relevant ministers greater powers to restrict or ban children’s access to social media services and chatbots, limit their use of VPNs, and change the “age of digital consent”. The Children’s Wellbeing and Schools bill now goes back to the Lords for more discussion.

As an unelected body, in recent years the House of Lords has often been a damper on hastily-passed legislation in response to political trends, but here they’re leading. The social media ban passed there. This week, as Dev Kundaliya reports at Computing, the Lords voted on two amendments, one to the Crime and Policing bill, the other to the Children’s Wellbeing and Schools bill. At the Online Safety Act Network blog, University of Essex professor Lorna Woods explains these in detail. The first would enable the government to amend the Online Safety Act to “minimize or mitigate the risks of harm to individuals” from illegal AI-generated content. The second would give the government latitude to change the age of consent.

Woods’ main point is the extreme power these clauses would give the government to bypass Parliament; she calls them Henry VIII powers – that is, powers to change or repeal an Act of Parliament without consulting Parliament. The government’s official justification is that they provide the flexibility to adapt on the fly to new technologies and online harms. Or to bar access to stuff it doesn’t like, presumably.

The Open Rights Group agrees, describing the plan as powers to restrict the entire Internet. ORG also cites an open letter published in March by 400 scientists, and calls for a moratorium on age assurance to learn more about the technological hazards and social impact, and for adopting alternative measures in the meantime, such as regulating algorithmic manipulation and providing parents with support.

At DefendDigitalMe, Jen Persson points out that the statute books already contain laws enabling considerable digital control of children; surveillance, she writes, is being “dressed as ‘safety'”. None of this, she argues, is compatible with children’s *rights*, which don’t seem to get much of a look-in.

Illustrations: Henry VIII, as painted by Hans Holbein the Younger (via Wikimedia).

Also this week: TechGrumps 3.38, The Bettification of Everything.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Bedroom eyes

We’ve long known that much of today’s “AI” is humans all the way down. This week underlines the point: in an investigation, Svenska Dagbladet and Göteborgs-Posten learn that Meta’s Ray-Ban smart glasses are capturing intimate details of people’s lives and sending them to Nairobi, Kenya. There, employees at Meta subcontractor Sama label and annotate the data for use in training models. Brings a new meaning to “bedroom eyes”.

This sort of violation is easily imposed on other people without their knowledge or consent. We worry about the police using live facial recognition, but what about being captured by random people on the street? In January’s episode of the TechGrumps podcast, we called the news of Meta’s new product “Return of the Glasshole“.

Two 2019 books, Mary L. Gray and Siddharth Suri’s Ghost Work and Sarah T. Roberts’ Behind the Screen, made it clear that “machine learning” and “AI” depend on poorly-paid unseen laborers. Dataveillance is a stowaway in every “smart” device. But this is a whole new level: the Kenyans report glimpses of bank cards, bedroom intimacy, even bathroom visits. The journalists were able to establish that the glasses’ AI requires a connection to Meta’s servers to answer questions, and there’s no opt-out.

The UK’s Information Commissioner’s Office is investigating, and at Ars Technica Sarah Perez reports that a US lawsuit has been filed.

As the original Swedish report goes on to say, the EU has no adequacy agreement with Kenya. More disturbing is the fact that probably hundreds of people within Meta worked on this without seeing a problem.

In 1974, the Watergate-related revelation that US president Richard Nixon had recorded everything taking place in his office inspired folksinger Bill Steele to write the song The Walls Have Ears (MP3). What struck him particularly was that everyone saw it as unremarkable. “Unfortunately still current,” he commented in his 1977 liner notes. Nearly 50 years later, ditto.

***

A lot of (especially younger) people don’t remember that before 9/11 you could walk into most buildings without showing ID. Many authorities – the EU in particular – have long been unhappy with anonymity online, and one conspiratorial theory about age gating and the digital ID infrastructure being built in many places is that the goal is complete and pervasive identification. In the UK, requiring ID for all Internet access has occasionally popped up as a child safety idea, even though security experts recommend lying about birth dates and other personal data in the interests of self-protection against identity theft.

Now we have generative AI, and along comes a new paper finding that large language models can be used to deanonymize people online at large scale by analyzing profiles and conversations. In one exercise, the researchers matched Hacker News posts to LinkedIn profiles. In another, they linked users across subReddit communities. In a third, they split Reddit profiles to mimic the use of pseudonymous posting. Their conclusion: pseudonymity doesn’t offer meaningful protection (though I’m not sure how much it ever did), and preventing this type of attack is difficult. They also suggest platforms should reconsider their data access policies in line with these findings.

It’s hard to imagine most platforms will care much; users have long been expected to assess their own risk. Even smaller communities with a more concerned administration will not be in a position to know how many other services their users access, what they post there, or how it can be cross-linked. The difficulty of remaining anonymous online has been growing ever since 2000, when Latanya Sweeney showed it was possible to identify 87% of the population recorded in census data given just ZIP code, date of birth, and gender. As psychics know, most people don’t really remember what they’ve said and how it can be linked and exploited by someone who’s paying attention. The paper concludes: we need a new threat model for privacy online.
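Sweeney’s result stops being surprising once you do the arithmetic. Here’s a crude uniqueness estimate; every number is a rough assumption for illustration, not a figure from her paper:

```python
# Why {ZIP code, birth date, gender} comes close to identifying everyone:
# count the possible combinations and compare with the population.
# All figures are rough, order-of-magnitude assumptions.

US_POPULATION = 300_000_000  # circa-2000s order of magnitude
ZIP_CODES = 42_000           # roughly the number of 5-digit US ZIP codes
BIRTH_DATES = 365 * 80       # assume living ages spread over ~80 years
GENDERS = 2

combos = ZIP_CODES * BIRTH_DATES * GENDERS
people_per_combo = US_POPULATION / combos

print(f"possible triples:      {combos:,}")              # ~2.45 billion
print(f"avg people per triple: {people_per_combo:.2f}")  # ~0.12
```

With roughly eight times as many possible triples as people, most triples that occur at all occur exactly once – which is why linking a “de-identified” record to a voter roll on those three fields works so well.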

***

The Internet, famously, was designed to support communications in the face of bomb outages.

Building it required physical links – undersea cables, fiber connections, data centers, routers. For younger folks who have grown up with wifi and mobile phone connections, that physical layer may be invisible. But it matters no less than it did twenty-five years ago, when experts agreed that ten backhoes (among other things) could do more effective damage than bombs.

This week’s horrible, spreading war in the Middle East has seen the closure of the Strait of Hormuz and the Red Sea to commercial traffic. Indranil Ghosh reports at Rest of World that 17 undersea cables pass through the Red Sea alone, and billions, soon trillions, of dollars in US technology investment depends on fiber optic cables running through war zones. There’s been reporting before now about the links between various Middle Eastern countries and Silicon Valley (see for example the recent book Gilded Rage, by Jacob Silverman), but until now much less about the technological interdependence put in jeopardy by the conflict. Ghosh also reports that drones have struck two Amazon Web Services data centers in the UAE and one in Bahrain.

The issue is not so much direct damage to the cables as the impossibility of repairing them as long as access is closed. The Internet, designed with war in mind, is a product of peace.

Illustrations: Monument to Anonymous, by Meredith Bergmann.

Also this week: At the Plutopia podcast, we interview Kate Devlin, who studies human-AI interaction.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Saving no one

In the early 2010s, after “nano” and before “AI”, 3D printing was the technology that was going to change everything. Then it seemed to go quiet except for guns.

“First we will gain control over the shape of physical things. Then we will gain new levels of control over their composition, the materials they’re made of. Finally, we will gain control over the behavior of physical things,” Hod Lipson and Melba Kurman wrote in their 2013 book, Fabricated. As far as I can tell, we’re still pretty much in the era of making physical things that could be made by traditional methods rather than weird new shapes that could *only* be produced by additive manufacturing. More than 15 years after a fellow technology conference attendee excitedly lectured me that 3D printing was going to change everything, its growth remains largely hidden from most of us.

Until this past week, when I attended an event awash in puzzle makers and discovered that it’s been a godsend to them for making not only prototypes but also small runs of copies of their own or published designs, freeing them from having to find space and capital for the kind of quantities required by traditional production. It’s good to see a formerly hyped technology supporting clever and entertaining human invention.

Exploding egg, anyone?

***

In one of the biggest fines in its history, the UK Information Commissioner’s Office has announced it is fining Reddit £14.5 million for failing to put in place an effective age verification mechanism to block under-13s, who are barred from using the site under Reddit’s stated terms of service. The story is somewhat confused by timing: the fine is under data protection law and relates to the period before the arrival of the Online Safety Act, but it was the OSA’s requirement for age verification that brought the changes that sparked the case. Reddit says it will appeal.

In the UK terms and conditions Reddit announced in June 2025, the company says that “by using the services, you state that…you are at least 13 years old”. But Reddit didn’t require proof, and the ICO says that many under-13s use(d) the platform.

In July, when the Online Safety Act came into effect, Reddit added an age gate of 18 for “mature” content. Unlike many other social media sites that are just giant pools of content sorted by curation or algorithm, Reddit is a large set of distinct subReddits. Each of these communities has its own rules, social norms, and, most important, human moderators. Because of this, it’s comparatively easy to mark a particular subReddit as “for adults only”. After the July change, anyone in the UK wishing to access one of those subReddits was asked to submit a selfie or an image of a government-issued ID in order to prove their age.

The ICO’s findings state that Reddit failed to protect under-13s from accessing content that placed them at risk; that it processed under-13s’ data unlawfully (because they are too young to meaningfully consent); and that a simple statement is not a sufficient age verification mechanism (which is made clear in the OSA).

A Reddit spokesperson told the Guardian: “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”

I take their point; I’d rather skip the “mature” content than bear the privacy risk of uploading personal data to whatever third-party company Reddit is using for age verification. Last July, I decided I would just be a child. (Although: my Reddit account dates to 2015, so they could just do the math.)

Turns out, it may have been a wise decision. Reddit, saying it didn’t want to hold users’ personal data, chose the age verification provider Persona.

Persona deserves a look. Last week, Discord announced it would begin treating all users as teens until they’d been verified, also using Persona. The result, as Ashley Belanger reports at Ars Technica, was a user backlash. First, because the last time Discord tried this, its now-former age verification provider’s pile of 70,000 users’ age check information was hacked.

Second, because The Rage reports that a group of security researchers found a Persona front end exposed to the open Internet on a US government server. On examination, that code shows that Persona performs 269 different verification checks and scours the Internet and government sources using your selfie and facial recognition. Discord has now announced it will delay introducing age verification – and won’t be using Persona, after an apparently unsatisfactory trial in the UK last year. In a blog posting, Discord says that, like Reddit, it does not want to know its users’ identification details. It is adding more verification options.

If the world had already had a set of established trustworthy companies that specialized in age verification when the OSA came into effect, then it would make sense to turn to them to provide that service. But we aren’t in that situation. Instead, although providers have been working for more than a decade to build such systems, their deployment at scale is new.

Part of keeping children – and the rest of us – safe is protecting security and privacy – and child safety campaigners’ refusal to accept this has been an issue for decades. Creating new privacy risks doesn’t keep anyone safer – including children.

Illustrations: Six-panel early 1970s cartoon strip, “What the User Wanted”.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.