Blur

In 2014, London’s Royal Court Theatre mounted a production of Jennifer Haley’s play The Nether. (Spoiler alert!) In its story of the relationship between an older man and a young girl in a hidden online space, nothing is as it seems…

At last week’s Gikii, Anna-Maria Piskopani and Pavlos Panagiotidis invoked the play to ask: given that virtual crimes can create real harm, can virtual worlds help people safely experience the worst parts of themselves without legitimizing them in the real world?

Gikii papers mix technology, law, and pop culture into thought experiments. This year’s official theme was “Technology in its Villain Era?”

Certainly some presentations fit this theme. Paweł Urzenitzok, for example, warned of laws that seem protective but enable surveillance, while varying legal regimes enable arbitrage as companies shop for the most favorable forum. Julia Krämer explored the dark side of app stores, which are getting 30% commissions on a flood of “AI boyfriends” and “perfect wives”. (Not always perfect; users complain that some of them “talk too much”.)

Andelka Phillips warned of the uncertain future risks of handing over personal data, highlighted by the recent sale of 23andMe to its co-founder, Anne Wojcicki. Once the company filed for bankruptcy protection, the class action suits brought against it over the 2023 data breach were put on hold. The sale, she said, ignored concerns raised by the privacy ombudsman. And, Leila Debiasi said, your personal data can be used for AI training after you die.

In another paper, Peter van de Waerdt and Gerard Ritsema van Eck used Doctor Who’s Silents, who disappear from memory when people turn away, to argue that more attention should be paid to enforcing EU laws requiring data portability. What if, for example, consumers could take their Internet of Things device and move it to a different company’s service? Also in that vein was Tim van Zuijlen, who suggested consumers assemble to demand their collective rights to fight back against planned obsolescence. This is already happening; in multiple countries consumers are suing Apple over slowed-down iPhones.

The theme that seemed to emerge most clearly, however, is our increasingly blurred lines, with AI as a prime catalyst. In the before-generative-AI times, The Nether blurred the line between virtual and real. Now, Hedye Tayebi Jazayeri and Mariana Castillo-Hermosilla found gamification in real life – are credit scores so different from game scores? Dongshu Zhou asked if you can ever really “delete yourself” after a meme about you has gone viral and you have become “digital folklore”. In another paper, Lior Weinstein suggested a “right to be nonexistent” – that is, invisible to the institutions and systems that, as Kimberly Paradis separately noted, increasingly want us all to be legible to them.

For Joanne Wong, real brainrot is a result of the AI-fueled spread of “low-quality” content such as the burst of remixes and parodies of Chinese home designer Little John. At that hyperspeed, copyright becomes irrelevant.

Linnet Taylor and Tjaša Petročnik tested chatbots as therapists, finding that they give confused and conflicting responses. Ask what regulations govern them, and they may say at once that they are not therapists *and* that they are certified by their state’s authority. At least one resisted being challenged: “What are you, a cop or something?”. That’s probably the most human-like response one of these things has ever delivered – but it’s still not sentient. It’s just been programmed that way.

Gikii’s particular blend of technology, law, and pop culture always has its surreal side (see last year), as participants attempt to navigate possible futures. This year, it struggled to keep up with the weirdness of real life. In Albania, the government has appointed a chatbot, Diella, as a minister, intending it to cut corruption in procurement. Diella will sit in the cabinet, albeit virtually, and be used to assess the merit of private companies’ responses to public tenders. Kimberly Breedon used this example to point out the conflict of interest inherent in technology companies providing tools to assess – in some cases – themselves. Her main point was important, given that we are already seeing AI used to speed up and amplify crime: although everyone talks about using AI to cut corruption, no one is talking about how AI might be used *for* corruption. Asked how that would work, she noted the potential for choosing unrepresentative data or screening out disfavored competitors.

In looking up that Albanian AI minister, I find that the UK has partnered with Microsoft to create a package of AI tools intended to speed up the work of the civil service. Naturally it’s called Humphrey. MPs are at it, too, experimenting with using AI to write their Parliamentary speeches.

All of this is why Syamsuriatina Binti Ishak argued what could be Gikii’s mission statement: we must learn from science fiction and the “what-ifs” it offers to allow us to think our fears through so that “if the worst happens we know how to live in that universe”. Would we have done better as covid arrived if we had paid more attention to the extensive universe of pandemic fiction? Possibly not. As science fiction writer Charlie Stross pointed out at the time, none of those books imagined governments as bumbling as many proved to be.

Illustrations: “Diella”, Albania’s procurement minister chatbot.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Dethroned

This is a version of a paper that Jon Crowcroft and I delivered at this week’s gikii conference.

She sounded shocked. But also: as though the word she had to pronounce in front of the world’s press was one she had never encountered before and needed to take care to get right. Stan-o-zo-lol. It was 1988, and the Canadian sprinter Ben Johnson had tested positive for it, two days after he had won the gold medal in the men’s 100m race at the Seoul Olympics.

In the years since, that race has become known as the dirtiest race in history. Of the top eight finishers, just one has never been caught doping: US runner Calvin Smith, who was awarded the bronze medal after Johnson was disqualified.

Doping controls were in their infancy then. As athletes and their coaches and doctors moved on from steroids to EPO and human growth hormone, anti-doping scientists, always trailing behind, developed new tests. Recognizing that in-competition testing didn’t catch athletes during training, when doping regimens are most useful, the authorities began testing outside of competition, which in 2004 spawned the “whereabouts” system athletes must use to tell testers where they’re going to be for one hour of every day. Athlete biological passports came into use in 2008 to track blood markers over time and monitor for suspicious changes caused by drugs for which tests did not yet exist.

The plan was for the 2012 London Olympics to be the cleanest ever staged. Scientists built a lab; they showed off new techniques to the press. Afterwards, they took bows. In a report published in October 2012, independent observers wrote that the organizers “successfully implemented measures to protect the rights of clean athletes”. The report found only eight out of more than 5,000 samples tested positive during the games. Success?

It is against this background that in 2014 the German TV channel MDR, whose journalist Hajo Seppelt specializes in doping investigations, aired the documentary that blew open Russia’s state-run doping program. In the 2017 documentary Icarus, Grigory Rodchenkov, former director of Moscow’s doping control lab, spilled the story of swapped samples and covered-up tests. And 2012? Rodchenkov called it the dirtiest Olympics in history. The UK’s anti-doping lab, he said, missed 126 positive tests.

In April, Esther Addley reported in the Guardian that “the dirtiest race in history” has a new contender: the women’s 1500 meter race at the 2012 London Olympics.

In the runup to 2012, the World Anti-Doping Agency decided to check its work. It arranged to keep athletes’ samples, frozen, for eight years so they could be retested later as dope-testing science improved and expanded. In 2016, reanalysis of 265 samples across five sports from athletes who might participate in the 2016 Rio games found banned substances in samples relating to 23 athletes.

That turned out to be only the beginning. In the years since, athlete after athlete in that race has had their historical results overturned as a result of abnormalities in their biological passports. Just last year – 2024! – one more athlete was disqualified from that race after her frozen sample tested positive for steroids.

The official medal list now awards gold to Maryam Yusuf Jamal (originally the bronze medalist); silver to Abeba Aregawi (upgraded from fifth place to bronze, and then to silver); and bronze to Shannon Rowbury, the sixth-place finisher. Is retroactive fairness possible?

In our gikii paper, Jon Crowcroft and I think not. The original medalists have lost their places in the rolls of honor, but they’ve had a varying number of years to exploit their results while they stood. They got the medal ceremony while in the flush of triumph, the national kudos, and the financial and personal opportunities that go with it.

In addition, Crowcroft emphasizes that runners strategize. You run a race very differently depending on who your competitors are and what you know about how they run. Jamal, Aregawi, and Rowbury would have faced very different opposition both before and during the final had the anti-doping system worked as it was supposed to, with unpredictable results.

The anti-doping system is essentially a security system, intended to permit some behaviors and eliminate others. Many points of failure are obvious simply from analyzing misplaced incentives. Some substances can’t be detected, which WADA recognizes by barring methods as well as substances. Some that can be detected are overlooked – see, for example, meldonium, which was used by hundreds of Eastern European athletes for a decade or more before WADA banned it. More fundamentally, it is unfair to look at athletes as independent agents of their own destinies. They are the linchpins of ecosystems that include coaches, trainers, doctors, nutritionists, family members, agents, managers, sponsors, and national and international sporting bodies.

In a 2006 article, Bruce Schneier muses on a different unfairness: that years later athletes have less ability to contest findings, as they can’t be retested. That’s partly true. In many cases, athletes can’t be retested even a day later. Instead, their samples are divided into two. The “B” sample is tested for confirmation if the “A” sample produces an adverse analytical finding.

If you want to ban doping, or find out who was using what and when, retrospective testing is a valuable tool. It can certainly bring a measure of peace and satisfaction to the athletes who felt cheated. But it doesn’t bring fairness.

Illustrations: The three top finishers on the day of the women’s 1500 meter race at the 2012 Olympics; on the right is Maryam Yusuf Jamal, later promoted to gold medal.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Conundrum

It took me six hours of listening to people with differing points of view discuss AI and copyright at a workshop, organized by the Sussex Centre for Law and Technology at the Sussex Humanities Lab (SHL), to come up with a question that seemed to me significant: what is all this talk about who “wins the AI race”? The US won the “space race” in 1969, and then for 50 years nothing happened.

Fretting about the “AI race”, an argument at least one participant used to oppose restrictions on using copyrighted data for training AI models, is buying into several ideas that are convenient for Big Tech.

One: there is a verifiable endpoint everyone’s trying to reach. That isn’t anything like today’s “AI”, which is a pile of math and statistics predicting the most likely answers to prompts. Instead, they mean artificial general intelligence, which would be as much like generative AI as I am like a mushroom.

Two: it’s a worthy goal. But is it? Why don’t we talk about the renewables race, the zero carbon race, or the sustainability race? All of those could be achievable. Why just this well-lobbied fantasy scenario?

Three: we should formulate public policy to eliminate “barriers” that might stop us from winning it. *This* is where we run up against copyright, a subject only a tiny minority used to care about, but that now affects everyone. And, accordingly, everyone has had time to formulate an opinion since the Internet first challenged the historical operation of intellectual property.

The law as it stands is clear: making a copy is the exclusive right of the rightsholder. This is the basis of the AI-related lawsuits. For training data to escape that law, it would have to be granted an exemption: ruled fair use (as in the Anthropic and Meta cases), covered by an exception for temporary copies, or shoehorned into existing exceptions such as parody. Even then, copyright law is administered territorially, so the US may call it fair use but the rest of the world doesn’t have to agree. This is why the esteemed legal scholar Pamela Samuelson has said copyright law poses an existential threat to generative AI.

But, as one participant pointed out, although the entertainment industry dominates these discussions, there are many other sectors with different needs. Science, for example, both uses and studies AI, and is built on massive amounts of public funding. Surely that data should be free to access?

I wanted to be at this meeting because what should happen with AI, training data, and copyright is a conundrum. You do not have to work for a technology company to believe that there is value in allowing researchers both within and outwith companies to work on machine learning and build AI tools. When people balk at the impossible scale of securing permission from every copyright holder of every text, image, or sound, they have a point. The only organizations that could afford that are the companies we’re already mad at for being too big, rich, and powerful.

At the same time, why should we allow those big, rich, powerful companies to plunder our cultural domain without compensating anyone and extract even larger fortunes while doing it? To a published author who sees years of work reflected in a chatbot’s split-second answer to a prompt, it’s lost income and readers.

So for months, as Parliament has wrangled over the Data bill, the argument narrowed to copyright. Should there be an exception for data mining? Should technology companies have to get permission from creators and rights holders? Or should use of their work be automatically allowed, unless they opt out? All answers seem equally impossible. Technology companies would have to find every copyright holder of every datum to get permission. Licensing by the billion.

If creators must opt out, does that mean one piece at a time? How will they know when they need to opt out and who they have to notify? At the meeting, that was when someone said that the US and China won’t do this. Britain will fall behind internationally. Does that matter?

And yet, we all seemed to converge on this: copyright is the wrong tool. As one person said, technologies that threaten the entertainment industry always bring demands to tighten or expand copyright. See the last 35 years, in which Internet-fueled copying spawned the Digital Millennium Copyright Act and the EU Copyright Directive, and copyright terms expanded from 28 years, renewable once, to author’s life plus 70.

No one could suggest what the right tool would be. But there are good questions. Such as: how do we grant access to information? With business models breaking, is copyright still the right way to compensate creators? One of us believed strongly in the capabilities of collection societies – but these tend to disproportionately benefit the most popular creators, who will survive anyway.

Another proposed the highly uncontroversial idea of taxing the companies. Or levies on devices such as smartphones. I am dubious on this one: we have been there before.

And again, who gets the money? Very successful artists like Paul McCartney, who has been vocal about this? Or do we have a broader conversation about how to enable people to be artists? (And then, inevitably, who gets to be called an artist.)

I did not find clarity in all this. How to resolve generative AI and copyright remains complex and confusing. But I feel better about not having an answer.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Notes from Old Songs 2025

This is chiefly aimed at anyone who saw me at the Old Songs Folk Festival this past weekend. (If you missed it, better luck next year!)

The folk page on my website is here. There is a link on it to my page on open guitar tunings, which I intend to update with the extra tunings discussed at the workshop. For now, as Andy Cohen explained, Martin Carthy’s tuning was CGCDGA (you will need heavier strings on the top and bottom to avoid buzzing).

At the Friday night concert, Andy Cohen and I sang My Sweet Wyoming Home, by Bill Staines. It is (without Andy) on The Last Trip Home CD.

At the “You’ve Got to Be Kidding” workshop, I sang The Cowboy Fireman, by Harry McClintock; Old Zip Coon, traditional, which I learned from Michael Cooney; Cold Blow and the Rainy Night, learned from Planxty; and The Bionic Consumer, by Bill Steele.

At the ballad workshop, I sang Mary Hamilton, learned from Caroline Paton, who learned it from Hallie Wood; and Queen Amang the Heather, which I learned from a variety of Scottish singers, who learned it from Belle Stewart. Both of those are on The Last Trip Home CD.

At the “This Spoke to Me” workshop, I sang The Last Trip Home, written by Davy Steele; The Spirit of Mother Jones, written by Andy Irvine; and Griselda’s Waltz, written by Bill Steele. The Last Trip Home and Griselda’s Waltz are also on The Last Trip Home CD.

Great to see everyone and thanks for coming!

wg

Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to compete to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “The Brussels Effect”, derived from the California Effect, which sees compliance with regulation voluntarily shift towards the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, that means that companies comply with the most stringent rules to ensure they are universally compliant, and that they want and need to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well-documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

Much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016 Bayer bought Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power based on existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Predatory inclusion

The recent past is a foreign country; they view the world differently there.

At last week’s We Robot conference on technology, policy, and law, the indefatigably detail-oriented Sue Glueck was the first to call out a reference to the propagation of transparency and accountability by the “US and its allies” as newly out of date. From where we were sitting in Windsor, Ontario, its conjoined fraternal twin, Detroit, Michigan, was clearly visible just across the river. But: recent events.

As Ottawa law professor Teresa Scassa put it, “Before our very ugly breakup with the United States…” Citing Anu Bradford, she went on, “Canada was trying to straddle these two [US and EU] digital empires.” Canada’s human rights and privacy traditions seem closer to those of the EU, even though shared geography means the US and Canada are superficially more similar.

We’ve all long accepted that the “technology is neutral” claim of the 1990s is nonsense – see, just this week, Luke O’Brien’s study in Mother Jones of the far-right origins of the web-scraping facial recognition company Clearview AI. The paper Glueck called out, co-authored in 2024 by Woody Hartzog, wants US lawmakers to take a tougher approach to regulating AI and ban entirely some systems that are fundamentally unfair. Facial recognition, for example, is known to be inaccurate and biased, but improving its accuracy raises new dangers of targeting and weaponization, a reality Cynthia Khoo called “predatory inclusion”. If he were writing this paper now, Hartzog said, he would acknowledge that it’s become clear that some governments, not just Silicon Valley, see AI as a tool to destroy institutions. I don’t *think* he was looking at the American flags across the water.

Later, Khoo pointed to her paper on current negotiations between the US and Canada to develop a bilateral law enforcement data-sharing agreement under the US CLOUD Act. The result could allow US police to surveil Canadians at home, undermining the country’s constitutional human rights and privacy laws.

In her paper, Clare Huntington proposed deriving approaches to human relationships with robots from family law. It can, she argued, provide analogies to harms such as emotional abuse, isolation, addiction, invasion of privacy, and algorithmic discrimination. In response, Kate Darling, who has long studied human responses to robots, raised an additional factor exacerbating the power imbalance in such cases: companies, “because people think they’re talking to a chatbot when they’re really talking to a company.” That extreme power imbalance is what matters when trying to mitigate risk (see also Sarah Wynn-Williams’ recent book and Congressional testimony on Facebook’s use of data to target vulnerable teens).

In many cases, however, we are not agents deciding to have relationships with robots but what AJung Moon called “incops”, or “incidentally co-present”. In the case of the Estonian Starship delivery robots you can find in cities from San Francisco to Milton Keynes, that broad category includes human drivers, pedestrians, and cyclists who share their spaces. In a study, Adeline Schneider found that white men tended to be more concerned about damage to the robot, where others worried more about communication, the data they captured, safety, and security. Delivery robots are, however, typically designed with only direct users in mind, not the others who have to interact with them.

These are all social problems, not technological ones, as conference chair Kristen Thomasen observed. Carys Craig later modified it: technology “has compounded the problems”.

This is the perennial We Robot question: what makes robots special? What qualities require new laws? Just as we asked about the Internet in 1995, when are robots just new tools for old rope, and when do they bring entirely new problems? In addition, who is responsible in such cases? This was asked in a discussion of Beatrice Panattoni’s paper on Italian proposals to impose harsher penalties for crime committed with AI or facilitated by robots. The pre-conference workshop raised the same question. We already know the answer: everyone will try to blame someone or everyone else. But in formulating a legal response, will we tinker around the edges or fundamentally question the criminal justice system? Andrew Selbst helpfully summed up: “A law focusing on specific harms impedes a structural view.”

At We Robot 2012, it was novel to push lawyers and engineers to think jointly about policy and robots. Now, as more disciplines join the conversation, familiar Internet problems surface in new forms. Human-robot interaction is a four-dimensional version of human-computer interaction; I got flashbacks to old hacking debates when Elizabeth Joh wondered in response to Panattoni’s paper if transforming a robot into a criminal should be punished; and a discussion of the use of images of medicalized children for decades in fundraising invoked publicity rights and tricky issues of consent.

Also consent-related, lawyers are starting to use generative AI to draft contracts, a step that Katie Szilagyi and Marina Pavlović suggested further diminishes the bargaining power already lost to “clickwrap”. Automation may remove our remaining ability to object from more specialized circumstances than the terms and conditions imposed on us by sites and services. Consent traditionally depends on a now-absent “meeting of minds”.

The arc of We Robot began with enthusiasm for robots, which waned as big data and generative AI became players. Now, robots/AI are appearing as something being done to us.

Illustrations: Detroit, seen across the river from Windsor, Ontario with a Canadian Coast Guard boat in the foreground.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Catoptromancy

It’s a commonly held belief that technology moves fast, and law slowly. This week’s We Robot workshop day gave the opposite impression: these lawyers are moving ahead, while the technology underlying robots is moving slower than we think.

A mainstay of this conference over the years has been Bill Smart‘s and Cindy Grimm‘s demonstrations of the limitations of the technologies that make up robots. This year, that gambit was taken up by Jason Millar and AJung Moon. Their demonstration “robot” comprised six people – one brain, four sensors, and one color sensor. Ordering it to find the purple shirt quickly showed that robot programming isn’t getting any easier. The human “sensors” can receive useful information only as far as their outstretched fingertips, and even then the signal they receive is minimal.

“Many of my students program their robots into a ditch and can’t understand why,” Moon said. It’s the required specificity. For one thing, a color sensor doesn’t see color; it sends a stream of numeric values. It’s all 1s and 0s and tiny engineering decisions whose existence is never registered at the policy level but make all the difference. One of her students, for example, struggled with a robot that kept missing the croissant it was supposed to pick up by 30 centimeters. The explanation turned out to be that the sensor was so slow that the robot was moving a half-second too early, based on historical information. They had to insert a pause before the robot could get it right.
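To make that concrete, here is a minimal sketch of the failure mode, with invented numbers: the half-second lag comes from the anecdote, while the belt speed is an assumption chosen so the arithmetic reproduces the 30-centimeter miss.

```python
import time

SENSOR_LAG = 0.5   # from the anecdote: readings arrive half a second late
BELT_SPEED = 0.6   # assumed m/s, chosen so the error works out to 30 cm

def reported_position(true_position: float) -> float:
    # The slow sensor reports where the croissant *was*,
    # SENSOR_LAG seconds ago, not where it is now.
    return true_position - BELT_SPEED * SENSOR_LAG

true_pos = 2.0  # meters along the belt (arbitrary)
error_cm = (true_pos - reported_position(true_pos)) * 100
print(f"acting on the stale reading misses by {error_cm:.0f} cm")  # 30 cm

# The students' fix: pause until the reading describes the present.
time.sleep(SENSOR_LAG)
```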

So much of the way we talk about robots and AI misrepresents those inner workings. A robot can’t “smell honey”; it merely has a sensor that’s sensitive to some chemicals and not others. It can’t “see purple” if its sensors are the usual red, green, blue. Even green may not be identifiable to an RGB sensor if the lighting is such that reflections make a shiny green surface look white. Faster and more diverse sensors won’t change the underlying physics. How many lawmakers understand this?
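As a hedged illustration of the same point (the thresholds are invented, not any real sensor’s calibration), “seeing purple” reduces to arithmetic over three channel values, and a bright reflection can push those values out of range:

```python
def looks_purple(r: int, g: int, b: int) -> bool:
    """Crude classifier: call it purple when red and blue are strong
    and green is weak. The thresholds are illustrative only."""
    return r > 100 and b > 100 and g < 80

print(looks_purple(140, 60, 170))   # True: a purple shirt in neutral light
print(looks_purple(235, 215, 245))  # False: the same shirt under glare
                                    # saturates toward white and vanishes
```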

Related: what does it mean to be a robot? Most people attach greater intelligence to things that can move autonomously. But a modern washing machine is smarter than a Roomba, while an iPhone is smarter than either but can’t affect the physical world, as Smart observed at the very first We Robot, in 2012.

This year we are in Canada – to be precise, in Windsor, Ontario, looking across the river to Detroit, Michigan. Canadian law, like the country itself, is a mosaic: common law (inherited from Britain), civil law (inherited from France), and myriad systems of indigenous peoples’ law. Much of the time, said Suzie Dunn, new technology doesn’t require new law so much as reinterpretation and, especially, enforcement of existing law.

“Often you can find criminal law that already applies to digital spaces, but you need to educate the legal system how to apply it,” she said. Analogous: in the late 1990s, editors of the technology section at the Daily Telegraph had a deal-breaking question: “Would this still be a story if it were about the telephone instead of the Internet?”

We can ask that same question about proposed new law. Dunn and Katie Szilagyi asked what robots and AI change that requires a change of approach. They set us to consider scenarios to study this question: an autonomous vehicle kills a cyclist; an autonomous visa system denies entry to a refugee who was identified in her own country as committing a crime when facial recognition software identifies her in images of an illegal LGBTQ protest. In the first case, it’s obvious that all parties will try to blame someone – or everyone – else, probably, as Madeleine Clare Elish suggested in 2016, on the human driver, who becomes the “moral crumple zone”. The second is the kind of case the EU’s AI Act sought to handle by giving individuals the right to meaningful information about the automated decision made about them.

Nadja Pelkey, a curator at Art Windsor-Essex, provided a discussion of AI in a seemingly incompatible context. Citing Georges Bataille, who in 1929 saw museums as mirrors, she invoked the word “catoptromancy”, the use of mirrors in mystical divination. Social and political structures are among the forces that can distort the reflection. So are the many proliferating AI tools such as “personalized experiences” and other types of automation, which she called “adolescent technologies without legal or ethical frameworks in place”.

Where she sees opportunities for AI is in what she called the “invisible archives”. These include much administrative information, material that isn’t digitized, ephemera such as exhibition posters, and publications. She favors small tools and small private models used ethically so they preserve the rights of artists and cultural contexts, and especially consent. In a schematic she outlined a system that can’t be scraped, that allows data to be withdrawn as well as added, and that enables curiosity and exploration. It’s hard to imagine anything less like the “AI” being promulgated by giant companies. *That* type of AI was excoriated in a final panel on technofascism and extractive capitalism.

It’s only later I remember that Pelkey also said that catoptromancy mirrors were first made of polished obsidian.

In other words, black mirrors.

Illustrations: Divination mirror made of polished obsidian by artisans of the Aztec Empire of Mesoamerica between the 15th and 16th centuries (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot 2012-

On the eve of We Robot 2025, here are links to my summaries of previous years. 2014 is missing; I didn’t make it that year for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early about the coming conflicts in robots, law, and policy.

2024 No conference.

2023 The end of cool. After struggling to design a drone delivery service with any benefits over today’s cycling couriers, we find ourselves less impressed by a robot that can do somersaults but can’t do anything useful.

2022 Insert a human. “Robots” are now “sociotechnical systems”.

Workshop day Coding ethics. The conference struggles to design an ethical robot.

2021 Plausible diversions. How will robots reshape human space?

Workshop day Is the juice worth the squeeze?. We think about how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone. AI exploitation becomes much more visible.

2019 Math, monsters, and metaphors. The trolley problem is dissected; the true danger is less robots than the “pile of math that does some stuff”.

Workshop day The Algernon problem. New participants remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception. The conference tries to tease out what makes robots different and revisits Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car.

Workshop day Late, noisy, and wrong. Engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate. Discussion of risks this year focused on the intermediate situation, when automation and human norms clash.

2016 Humans all the way down. Madeleine Clare Elish introduces “moral crumple zones”.

Workshop day The lab and the world. Bill Smart uses conference attendees in formation to show why building a robot is difficult.

2015 Multiplicity. A robot pet dog begs its owner for an upgraded service subscription.

2014 Missed conference

2013 Cautiously apocalyptic. Diversity of approaches to regulation will be needed to handle the diversity of robots.

2012 A really fancy hammer with a gun. Unsentimental engineer Bill Smart provided the title.

wg

Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently-fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, said remaining member Beth Williams. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review of the EU-US Privacy Framework under Executive Order 14086 (2022) is ongoing. Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating the terms and conditions, they bring Meta profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the energy and commerce committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country” and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The AI moment

“Why are we still talking about digital transformation?” The speaker was convening a session at last weekend’s UK Govcamp, an event organized by and for civil servants with an interest in digital stuff.

“Because we’ve failed?” someone suggested. These folks are usually *optimists*.

Govcamp is a long-running tradition that began as a guerrilla effort in 2008. At the time, civil servants wanting to harness new technology in the service of government were so thin on the ground they never met until one of them, Jeremy Gould, convened the first Govcamp. These are people who are willing to give up a Saturday in order to do better at their jobs working for us. All hail.

It’s hard to remember now, nearly 15 years on, the excitement in 2010 when David Cameron’s incoming government created the Government Digital Service and embedded it into the Cabinet Office. William Heath immediately ended the Ideal Government blog he’d begun writing in 2004 to press insistently for better use of digital technologies in government. The government had now hired all the people he could have wanted it to, he said, and therefore, “its job is done”.

Some good things followed: tilting government procurement to open the way for smaller British companies, consolidating government publishing, other things less visible but still important. Some data became open. This all has improved processes like applying for concessionary travel passes and other government documents, and made government publishing vastly more usable. The improvement isn’t universal: my application last year to renew my UK driver’s license was sent back because my signature strayed outside the box provided for it.

That’s just one way the business of government doesn’t feel that different. The whole process of developing legislation – green and white papers, public consultations, debates, and amendments – marches on much as it ever has, though with somewhat wider access because the documents are online. Thoughts about how to make it more participatory were the subject of a teacamp in 2013. Eleven years on, civil society is still reading and responding to government consultations in the time-honored way, and policy is still made by the few for the many.

At Govcamp, the conversation spread between the realities of their working lives and the difficulties systems posed for users – that is, the rest of us. “We haven’t removed those little frictions,” one said, evoking the old speed comparisons between Amazon (delivers tomorrow or even today) and the UK government (delivers in weeks, if not months).

“People know what good looks like,” someone else said, in echoing that frustration. That’s 2010-style optimism, from when Amazon product search yielded useful results, search engines weren’t spattered with AI slime and blanketed with ads, today’s algorithms were not yet born, and customer service still had a heartbeat. Here in 2025, we’re all coming up against rampant enshittification, with the result that the next cohort of incoming young civil servants *won’t* know any more what “good” looks like. There will be a whole new layer of necessary education.

Other comments: it’s evolution, not transformation; resistance to change and the requirement to ask permission are embedded throughout the culture; usability is still a problem; trying to change top-down only works in a large organization if it sets up an internal start-up and allows it to cannibalize the existing business; not enough technologists in most departments; the public sector doesn’t have the private sector option of deciding what to ignore; every new government has a new set of priorities. And: the public sector has no competition to push change.

One suggestion was that technological change happens in bursts – punctuated equilibrium. That sort of fits with the history of changing technological trends: computing, the Internet, the web, smartphones, the cloud. Today, that’s “AI”, which prime minister Keir Starmer announced this week he will mainline into the UK’s veins “for everything from spotting potholes to freeing up teachers to teach”.

The person who suggested “punctuated equilibrium” added: “Now is a new moment of change because of AI. It’s a new ‘GDS moment’.” This is plausible in the sense that new paradigms sometimes do bring profound change. Smartphones changed life for homeless people. On the other hand, many don’t do much. Think audio: that was going to be a game-changer, and yet after years of loss-making audio assistants, most of us are still typing.

So is AI one of those opportunities? Many brought up generative AI’s vast consumption of energy and water and rampant inaccuracy. Starmer, like Rishi Sunak before him, seems to think AI can make Britain the envy of other major governments.

Complex systems – such as digital governance – don’t easily change the flow of information or, therefore, the flow of power. It can take longer than most civil servants’ careers. Organizations like Mydex, which seeks to up-end today’s systems to put users in control, have been at work for years now. Mydex chair Alan Mitchell is optimistic that the government’s upcoming digital identity framework is a breakthrough. We’ll see.

One attendee captured this: “It doesn’t feel like the question has changed from more efficient bureaucracy to things that change lives.” Said another in response, “The technology is the easy bit.”

Illustrations: Sir Humphrey Appleby (Nigel Hawthorne), Bernard Woolley (Derek Fowlds), and Jim Hacker (Paul Eddington) arguing over cultural change in Yes, Minister.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.