Eating the web

“Traffic to my blog has plummeted,” a friend said recently. Over decades, he’s built a thriving community, and his core users persist. But Google was crucial for bringing in new readers – and its introduction of AI and changes to its algorithm have punished small sites.

This week, Sean Hollister reports at The Verge that Google is using its AI to replace the headlines on news stories. For Google, it is, as Charles Arthur comments at The Overspill, a “small” and “narrow” experiment – until it becomes a feature. For The Verge, however, the impact is noticeable: the headlines it crafts to market its journalists’ work are being replaced with boring titles that do not accurately convey the articles’ content.

Then, the week before last, Joe Toscano reported at Forbes that Google has patented a system that uses AI to rewrite website landing pages, producing customized versions for its users. Toscano links this to Google’s earlier announcement of a protocol to make websites’ structure more readable to AI agents. Taken together, Toscano suggests, the two elements allow Google to break websites apart into their component parts for reassembly by AI agents into whatever version they identify as best for the user they represent.

In the early 1990s, someone I met directed me to read the work of Robert McChesney, whose books recount the cooption and commercialization of radio and television, also originally conceived as democratic, educational media. Helping to prevent a similar outcome for the Internet is a lot of what net.wars has always been about. Now, Google, which would not exist without the open web, wants to eat the whole thing.

***

On Tuesday, a jury in a New Mexico court found Meta liable for misleading consumers about the safety of its platforms and enabling harm including child sexual exploitation, as Katie McQue reports at the Guardian. The jury ordered the company to pay $375 million in civil penalties. Meta will appeal. Snapchat and TikTok, which were also accused, settled before the trial began.

The New Mexico attorney general’s office says it intends to pursue changes to platform design including age verification and “protecting minors from encrypted communications that shield bad actors”.

On Wednesday, a jury in Los Angeles found YouTube liable for deliberately designing an addictive product. As Dara Kerr reports at the Guardian, the case was brought by a 20-year-old woman who claimed her addiction to Instagram and YouTube began at age six, damaging her relationships with her family and at school and causing her to become depressed and engage in self-harm. The jury awarded her $6 million, split between Meta (70%) and YouTube (30%). Both companies say they will appeal.

They will have to, because, as Kerr reported in January, there are more of these trials to come, and even to trillion-dollar companies thousands of such awards can add up to real money. In consolidated cases in California state and federal courts, thousands of families accuse social media companies of harming children. Reuters reports that more trials are scheduled: one brought by a school district in Breathitt County, Kentucky, in federal court against Meta, ByteDance, Snapchat, and Google, and one in July in California state court against Instagram, YouTube, TikTok, and Snapchat.

In January, the Tech Oversight Project reported that newly unsealed documents contain the “smoking gun” evidence – that is, internal email discussions – that the four companies deliberately designed their products to be addictive and failed to provide effective warnings about social media use. Certainly, the leaked documents make it sound like a plan. Tech Oversight quotes one: “Onboarding kids into Google’s Ecosystem leads to brand trust and loyalty over their lifetime.” It’s hard not to see the commonality with Joe Camel and so many other marketing strategies.

Key to these cases is Section 230 – the clause in the Communications Decency Act that shields online services from liability for the material their users post and allows them to moderate content in good faith. The plaintiffs argued – successfully in New Mexico – that the law does not shield the platforms from liability for their design decisions. The social media companies naturally tried to argue that it does.

At his blog, law professor Eric Goldman discusses the broader impact of these bellwether cases. As he says, whatever changes the social media companies feel forced to make under the potential liability of myriad jury trials and new laws may help some victims but will almost certainly hurt other groups who were not represented at trial. Similarly, at Techdirt, Mike Masnick warns that features like autoscrolling and algorithmic recommendations are not inherently harmful; it’s the content they relentlessly serve that is really the issue; cue the First Amendment. And few who are not technology giants can afford to face jury trials and fines. Are we heading for a regime under which every design decision has to go through lawyers?

In a posting summarizing the history of S230, Goldman predicts that age verification laws will reshape the Internet of 2031 or 2036 beyond recognition, killing most of what we love now. So much doom, so little time.

Illustrations: The volcano of Stromboli, on which JRR Tolkien based Mount Doom in The Lord of the Rings (by Steven J. Dengler at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Turn left at the robot

It’s easy to forget, now that so many computer interfaces seem to be getting more inscrutable, that in the early 1990s the shift from the command line to graphical interfaces and the desire to reach the wider mass market introduced a new focus on usability. In classics like Don Norman’s The Design of Everyday Things, a common idea was that a really well-designed system should not require a manual because the design itself would tell the user how to operate it.

What is intuitive, though, is defined by what you’re used to. The rumor that Apple might replace the letters on keys with glyphs is a case in point. People who’ve grown up with smartphones might like the idea of glyphs that match those on their phones. But try doing technical support over the phone and describing a glyph; easier to communicate the letters.

Those years in which computer interfaces were standardized relied on metaphors based on familiar physical items: a floppy disk to save a file, an artist’s palette for color choices. In 1993, leading software companies like Microsoft and Lotus set up usability labs; it took watching user testing to convince developers that struggling to use their software was not a sign the users were stupid.

With that background, it was interesting to attend this year’s edition of the 20-year-old Human-Robot Interaction conference. Robots don’t need metaphors; they *are* the thing itself. Although: why shouldn’t a robot also have menu buttons for common functions?

In the paper I found most interesting and valuable, Lauren Wright examined the use of a speaking Misty robot to deliver social-emotional learning lessons. Wright’s group tested the value of deception – that is, having the robot speak in the first person about its “family”, experiences, and “emotions” – against a more truthful presentation, in which the robot is neutral, tells its stories in the third person, refers to its programmers, and professes no humanity. The researchers were testing the widely-held assumption that kids engage more with robots programmed to appear more human. They found the opposite: while both versions significantly increased the children’s learning, the kids who used the factual robot showed more engagement and scored higher, in the sense of using concepts from the lesson in their answers. This really shouldn’t be surprising. Children don’t in general respond well to deception. Really, who does?

The children’s personal reactions to the robots were at least as interesting. In Michael F. Xu’s paper, the researchers held co-design sessions and then installed a robot in eight family homes to act as a neutral third-party enforcer issuing timely reminders on behalf of busy parents. Some of the families did indeed report that the robot’s reminders got stuff done more efficiently. On the other hand, the experiment was short – only four days – and you have to wonder if that would still be true after the novelty wore off. There were hints of this from the kids, some of whom pushed back. One simply bypassed a robot reminding him of the limits on his TV viewing by taking the TV upstairs, where the robot couldn’t go. Another reacted like I would at any age and told the robot to “shut up”.

The fact versus fiction presentation included short video clips of some of the kids’ interaction with the robot tutor. In one, a boy placed his hands on either side of the robot’s “face” while it was talking and kept moving its head around, exploring the robot’s physical capabilities (or trying to take its head off?). The speaker ignored this entirely, but the sight hilariously made an important point: the robot’s physical form speaks even when the robot is silent.

We saw this at We Robot 2016, when a Jamaican lawyer asked Olivier Guilhem, from Aldebaran Robotics, which makes Pepper, “Why is the robot white?” His response: “It looks clean.” This week, one paper tried to tease out how “representation bias” – assumptions about gender, skin tone, dis/ability, accessibility, size, age – affects users’ reactions. In the dataset used to train an AI model, bias may be baked in through the historical record. With robots, bias can also present directly through the robot’s design, as Yolande Strengers and Jenny Kennedy showed in their 2020 book The Smart Wife. Despite its shiny, unmistakable whiteness, Pepper’s shape was ambiguous enough for its gender to be interpreted differently in different cultures. In the HRI paper, the researchers concluded that biases in robot design could perpetuate occupational stereotypes – “technological segregation”. They also found their participants consistently preferred non-skin tones – in their examples, silver and light teal.

“Who builds AI shapes what AI becomes,” said Ben Rosman, who outlined a burgeoning collaborative effort to build a machine learning community across Africa and redress its underrepresentation. The same is true of robots: many, many cultural norms affect how humans interact with them. That information is signal, not noise, he says, and should be captured to enable robots to operate across wide ranges of human context without relying on “brittle defaults” that interpret human variation as failures. “Turn left at the robot” makes perfect sense once you know that in South Africa “robots” is the name for what are known elsewhere as traffic lights.

Illustrations: Rosey, the still-influential “old demo model” robot maid in The Jetsons (1962-1963).

Also this week:
At the Plutopia podcast, we interview Marc Abrahams, founder of the Ig Nobel awards.
At Skeptical Inquirer, the latest Letter to America finds David Clarke conducting the English folklore survey.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot, 2026 edition

On the eve of We Robot 2026, here are links to my summaries of every year since 2012, the inaugural conference, except 2014, which I missed for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early and long about the coming conflicts in robots, law, and policy.

2025 Predatory inclusion: In Windsor, Ontario, a few months into the new US administration, the sudden change in international relations highlights the power imbalances inherent in many of today’s AI systems. Catopromancy: in workshops, we hear a librarian propose useful AI completely out of step with today’s corporate offerings, and mull how to apply existing laws to new scenarios.

2024 No conference.

2023 The end of cool: after struggling to design a drone delivery service that had benefits over today’s cycling couriers, we find ourselves less impressed by robots that can do somersaults but not anything obviously useful; the future may have seemed more exciting when it was imaginary.

2022 Insert a human: following a long-held conference theme about “humans in the loop”, “robots” are now “sociotechnical systems”. Coding ethics: Where Asimov’s laws were just a story device, in workshops we try to work out how to design a real ethical robot.

2021 Plausible diversions: maybe any technology sufficiently advanced to seem like magic can be well enough understood that we can assign responsibility and liability? Is the juice worth the squeeze?: In workshops, we mull how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone: AI exploitation and bias embedded in historical data become what one speaker calls “unregulated experimentation on humans…without oversight or control”.

2019 Math, monsters, and metaphors: We dissect the trolley problem and find the true danger on the immediate horizon is less robots, more the “pile of math that does some stuff” we call “AI”. The Algernon problem: in workshops, new disciplines joining the We Robot family remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception: We return to the question of what makes robots different and revisit Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car. Late, noisy, and wrong: in workshops, engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate: Discussion of risks this year focused on the intermediate situation, when automation and human norms must co-exist.

2016 Humans all the way down: Madeleine Clare Elish introduces “moral crumple zones”, a paper that will resonate through future years. The lab and the world: in workshops, Bill Smart uses conference attendees in formation to show why getting a robot to do anything is difficult.

2015 Multiplicity: When in the life of a technology is the right time for regulatory intervention?

2014 Missed conference.

2013 Cautiously apocalyptic: Diversity of approaches to regulation will be needed to handle the diversity of robots, and at the beginning of cloud robotics and full-scale data collection, we envision a pet robot dog that can beg its owner for an upgraded service subscription.

2012 A really fancy hammer with a gun: At the first We Robot, we try to answer the fundamental question: what difference do robots bring? Unsentimental engineer Bill Smart provided the title.

Power games

UK prime minister Keir Starmer’s desire to bring in a UK digital ID is awake again with the announcement that the government plans to make the IDs available for “a handful of uses” before the next general election, due by 2029. The requisite consultation closes May 5.

At Computer Weekly, Lis Evenstad adds a summary and detail about the consultation. Government by app! What could possibly go wrong?

Among other new information in the consultation: the age for being able to get a digital ID could drop below 18, perhaps even to issuance at birth. There might be a single unique identifier to enable linking data throughout government systems. Darren Jones, the prime minister’s chief secretary, talks up smoother access to government services and the ability to share only the piece of data that’s needed for a specific purpose, but not the risks of tying everything to a single identifier whose compromise can reverberate throughout your life.

In his Guardian piece, Kiran Stacey quotes Jones, who positions the digital ID as a way to improve fairness in access to government services, which he says accrue disproportionately to “pushy” people with time, patience, and energy.

This sounds good until you read Government Digital Service co-founder Tom Loosemore’s blog, where he notes that creating that sort of stonewalling bureaucracy is often a deliberate strategy to manage demand. Loosemore argues that agentic AI will force an end to this strategy because agents will have unlimited patience and “reduce the cost to citizens of appeals, challenges, and calculations etc to near-zero”. Where a claimant would otherwise have to dig manually through financial records, Loosemore finds in an experiment that an AI agent can scan the documents, find the information, and present only the data required to establish the citizen’s entitlement to benefits.

He believes AI agents will also bring a new level of transparency (and perhaps “pushiness”): “AI Agents will always dig out that hidden 93-page PDF of guidance.” Governments, he writes, will be forced to “clarify and tighten policies & processes, with all the painful political trade-offs therein”.

Or: will budget-protecting governments adopt their own agentic AIs to move and re-bury the stuff they don’t want applicants to find and recalibrate their requirements to make them resistant to automation? Seems just as likely, really.

While all that was going on, significant votes took place on the future of access to online content. On Tuesday, Jennifer McKiernan reports at the BBC, MPs rejected the proposed social media ban for under-16s, which would have been added to the Children’s Wellbeing and Schools bill. Instead, the government is continuing to collect information from the consultation it launched on March 2, which closes on May 26.

Some MPs seem to have been persuaded by the argument – mooted by, among others, the National Society for the Prevention of Cruelty to Children – that banning children from social media will simply push them to find darker, less visible unregulated online spaces with less in the way of protection or moderation. The Commons did, however, support a government proposal to give the relevant ministers greater powers to restrict or ban children’s access to social media services and chatbots, limit their use of VPNs, and change the “age of digital consent”. The Children’s Wellbeing and Schools bill now goes back to the Lords for more discussion.

As an unelected body, the House of Lords has in recent years often been a damper on legislation hastily passed in response to political trends, but here it is leading. The social media ban passed there. This week, as Dev Kundaliya reports at Computing, the Lords voted on two amendments, one to the Crime and Policing bill, the other to the Children’s Wellbeing and Schools bill. At the Online Safety Act Network blog, University of Essex professor Lorna Woods explains these in detail. The first would enable the government to amend the Online Safety Act to “minimize or mitigate the risks of harm to individuals” from illegal AI-generated content. The second would give the government latitude to change the age of consent.

Woods’ main point is the extreme power being given to the government here to bypass Parliament; she calls the clauses Henry VIII powers – that is, powers that allow the government to change or repeal an Act of Parliament without consulting Parliament. The government’s official justification is greater flexibility to adapt on the fly to new technologies and online harms. Or to bar access to stuff it doesn’t like, presumably.

The Open Rights Group agrees, calling the plan powers to restrict the entire Internet. ORG also cites an open letter published in March by 400 scientists, and calls for a moratorium on age assurance to learn more about the technological hazards and social impact, and for adopting alternative measures in the meantime, such as regulating algorithmic manipulation and providing parents with support.

At DefendDigitalMe, Jen Persson points out that the statute books already contain laws enabling considerable digital control of children; surveillance, she writes, is being “dressed as ‘safety’”. None of this, she adds, is compatible with children’s *rights*, which don’t seem to get much of a look-in.

Illustrations: Henry VIII, as painted by Hans Holbein the Younger (via Wikimedia).

Also this week: TechGrumps 3.38, The Bettification of Everything.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Bedroom eyes

We’ve long known that much of today’s “AI” is humans all the way down. This week underlines the point: in an investigation, Svenska Dagbladet and Göteborgs-Posten learn that Meta’s Ray-Ban smart glasses are capturing intimate details of people’s lives and sending them to Nairobi, Kenya. There, employees at Meta subcontractor Sama label and annotate the data for use in training models. Brings a new meaning to “bedroom eyes”.

This sort of violation is easily imposed on other people without their knowledge or consent. We worry about the police using live facial recognition, but what about being captured by random people on the street? In January’s episode of the TechGrumps podcast, we called the news of Meta’s new product “Return of the Glasshole”.

Two 2018 books, Mary L. Gray and Siddharth Suri’s Ghost Work and Sarah T. Roberts’ Behind the Screen, made it clear that “machine learning” and “AI” depend on poorly-paid unseen laborers. Dataveillance is a stowaway in every “smart” device. But this is a whole new level: the Kenyans report glimpses of bank cards, bedroom intimacy, even bathroom visits. The journalists were able to establish that the glasses’ AI requires a connection to Meta’s servers to answer questions, and there’s no opt-out.

The UK’s Information Commissioner’s Office is investigating, and at Ars Technica, Sarah Perez reports that a US lawsuit has been filed.

As the original Swedish report goes on to say, the EU has no adequacy agreement with Kenya. More disturbing is the fact that probably hundreds of people within Meta worked on this without seeing a problem.

In 1974, the Watergate-related revelation that US president Richard Nixon had recorded everything taking place in his office inspired folksinger Bill Steele to write the song The Walls Have Ears (MP3). What struck him particularly was that everyone saw it as unremarkable. “Unfortunately still current,” he commented in his 1977 liner notes. Nearly 50 years later, ditto.

***

A lot of (especially younger) people don’t remember that before 9/11 you could walk into most buildings without showing ID. Many authorities – the EU in particular – have long been unhappy with anonymity online, and one conspiratorial theory about age gating and the digital ID infrastructure being built in many places is that the goal is complete and pervasive identification. In the UK, requiring ID for all Internet access has occasionally popped up as a child safety idea, even though security experts recommend lying about birth dates and other personal data in the interests of self-protection against identity theft.

Now we have generative AI, and along comes a new paper finding that large language models can be used to deanonymize people online at large scale by analyzing profiles and conversations. In one exercise, the researchers matched Hacker News posts to LinkedIn profiles. In another, they linked users across subreddits. In a third, they split Reddit profiles in two to mimic pseudonymous posting. Pseudonymity doesn’t offer meaningful protection (though I’m not sure how much it ever did), and preventing this type of attack is difficult. They also suggest platforms should reconsider their data access policies in line with their findings.
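The paper’s LLM pipeline isn’t reproduced here, but the underlying linking idea is easy to sketch. Below is a toy Python illustration that swaps in a classical stylometric stand-in – character n-gram TF-IDF vectors compared by cosine similarity – for the LLM; every post and account name in it is invented:

```python
# Toy sketch of cross-platform account linking via writing style.
# This uses a classical stylometric baseline, NOT the paper's LLM
# method; all posts and account names below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

site_a = {  # hypothetical Hacker News-style posts
    "hn_user_1": "Rolled my own auth in Go again. Honestly, JWTs are fine.",
    "hn_user_2": "Lovely weather for a bike ride along the canal today!!",
}
site_b = {  # hypothetical LinkedIn-style posts
    "li_profile_x": "Enjoyed a lovely canal-side ride today!! Weather was perfect.",
    "li_profile_y": "Wrote about why rolling your own auth in Go is fine, honestly.",
}

# Character n-grams capture habits of spelling, punctuation, and phrasing.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
matrix = vec.fit_transform(list(site_a.values()) + list(site_b.values()))
a, b = matrix[: len(site_a)], matrix[len(site_a):]

# For each account on site A, propose the most similar account on site B.
sims = cosine_similarity(a, b)
for i, a_name in enumerate(site_a):
    j = sims[i].argmax()
    print(f"{a_name} -> {list(site_b)[j]} (similarity {sims[i][j]:.2f})")
```

Even this crude baseline latches onto shared quirks; the paper’s point is that LLMs do the same job with far richer signals, and can add world knowledge – locations, jobs, life events – that pure style matching misses.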

It’s hard to imagine most platforms will care much; users have long been expected to assess their own risk. Even smaller communities with a more concerned administration will not be in a position to know how many other services their users access, what they post there, or how it can be cross-linked. The difficulty of remaining anonymous online has been growing ever since 2000, when Latanya Sweeney showed it was possible to identify 87% of the population recorded in census data given just ZIP code, date of birth, and gender. As psychics know, most people don’t really remember what they’ve said and how it can be linked and exploited by someone who’s paying attention. The paper concludes: we need a new threat model for privacy online.
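Sweeney’s point is mechanical enough to demonstrate in a few lines: a record is re-identifiable when its combination of quasi-identifiers is unique in the dataset. A minimal sketch, with invented records:

```python
# Minimal sketch of Sweeney-style re-identification: a record is exposed
# when its quasi-identifiers (ZIP code, date of birth, gender) are unique
# in the dataset. All five records below are invented.
from collections import Counter

records = [
    ("10001", "1971-03-04", "F"),
    ("10001", "1971-03-04", "F"),  # two people sharing all three values
    ("60614", "1971-03-04", "F"),
    ("60614", "1990-07-15", "M"),
    ("94110", "1984-11-23", "M"),
]

counts = Counter(records)
unique = sum(1 for r in records if counts[r] == 1)
print(f"{unique}/{len(records)} records "
      f"({100 * unique / len(records):.0f}%) are unique on ZIP+DOB+gender")
```

Run on real census-scale data, those three fields turn out to be unique for the vast majority of Americans – Sweeney’s 87%.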

***

The Internet, famously, was designed to keep communications flowing in the face of bomb damage.

Building it required physical links – undersea cables, fiber connections, data centers, routers. For younger folks who have grown up with wifi and mobile phone connections, that physical layer may be invisible. But it matters no less than it did twenty-five years ago, when experts agreed that ten backhoes (among other things) could do more effective damage than bombs.

This week’s horrible, spreading war in the Middle East has seen the closure of the Strait of Hormuz and the Red Sea to commercial traffic. Indranil Ghosh reports at Rest of World that 17 undersea cables pass through the Red Sea alone, and billions, soon trillions, of dollars in US technology investment depends on fiber optic cables running through war zones. There’s been reporting before now about the links between various Middle Eastern countries and Silicon Valley (see for example the recent book Gilded Rage, by Jacob Silverman), but until now much less about the technological interdependence put in jeopardy by the conflict. Ghosh also reports that drones have struck two Amazon Web Services data centers in the UAE and one in Bahrain.

The issue is not so much direct damage to the cables as the impossibility of repairing them as long as access is closed. The Internet, designed with war in mind, is a product of peace.

Illustrations: Monument to Anonymous, by Meredith Bergmann.

Also this week: At the Plutopia podcast, we interview Kate Devlin, who studies human-AI interaction.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.