Conundrum

It took me six hours of listening to people with differing points of view discuss AI and copyright at a workshop, organized by the Sussex Centre for Law and Technology at the Sussex Humanities Lab (SHL), to come up with a question that seemed to me significant: what is all this talk about who “wins the AI race”? The US won the “space race” in 1969, and then for 50 years nothing happened.

Fretting about the “AI race”, an argument at least one participant used to oppose restrictions on using copyrighted data for training AI models, is buying into several ideas that are convenient for Big Tech.

One: there is a verifiable endpoint everyone’s trying to reach. That isn’t anything like today’s “AI”, which is a pile of math and statistics predicting the most likely answers to prompts. Instead, they mean artificial general intelligence, which would be as much like generative AI as I am like a mushroom.

Two: it’s a worthy goal. But is it? Why don’t we talk about the renewables race, the zero carbon race, or the sustainability race? All of those could be achievable. Why just this well-lobbied fantasy scenario?

Three: we should formulate public policy to eliminate “barriers” that might stop us from winning it. *This* is where we run up against copyright, a subject only a tiny minority used to care about, but that now affects everyone. And, accordingly, everyone has had time to formulate an opinion since the Internet first challenged the historical operation of intellectual property.

The law as it stands is clear: making a copy is the exclusive right of the rightsholder. This is the basis of AI-related lawsuits. For training copies to escape that law, they would have to be granted an exemption: ruled fair use (as in the Anthropic and Meta cases), covered by a new exception for temporary copies, or shoehorned into existing exceptions such as parody. Even then, copyright law is administered territorially, so the US may call it fair use but the rest of the world doesn’t have to agree. This is why the esteemed legal scholar Pamela Samuelson has said copyright law poses an existential threat to generative AI.

But, as one participant pointed out, although the entertainment industry dominates these discussions, there are many other sectors with different needs. Science, for example, both uses and studies AI, and is built on massive amounts of public funding. Surely that data should be free to access?

I wanted to be at this meeting because what should happen with AI, training data, and copyright is a conundrum. You do not have to work for a technology company to believe that there is value in allowing researchers both within and outwith companies to work on machine learning and build AI tools. When people balk at the impossible scale of securing permission from every copyright holder of every text, image, or sound, they have a point. The only organizations that could afford that are the companies we’re already mad at for being too big, rich, and powerful.

At the same time, why should we allow those big, rich, powerful companies to plunder our cultural domain without compensating anyone and extract even larger fortunes while doing it? To a published author who sees years of work reflected in a chatbot’s split-second answer to a prompt, it’s lost income and readers.

So for months, as Parliament has wrangled over the Data Bill, the argument has narrowed to copyright. Should there be an exception for data mining? Should technology companies have to get permission from creators and rights holders? Or should use of their work be automatically allowed, unless they opt out? All answers seem equally impossible. Technology companies would have to find every copyright holder of every datum to get permission. Licensing by the billion.

If creators must opt out, does that mean one piece at a time? How will they know when they need to opt out and who they have to notify? At the meeting, that was when someone said that the US and China won’t do this. Britain will fall behind internationally. Does that matter?

And yet, we all seemed to converge on this: copyright is the wrong tool. As one person said, technologies that threaten the entertainment industry always bring demands to tighten or expand copyright. See the last 35 years, in which Internet-fueled copying spawned the Digital Millennium Copyright Act and the EU Copyright Directive, and copyright terms expanded from 28 years, renewable once, to author’s life plus 70.

No one could suggest what the right tool would be. But there are good questions. Such as: how do we grant access to information? With business models breaking, is copyright still the right way to compensate creators? One of us believed strongly in the capabilities of collection societies – but these tend to disproportionately benefit the most popular creators, who will survive anyway.

Another proposed the highly uncontroversial idea of taxing the companies. Or levies on devices such as smartphones. I am dubious about this one: we have been there before.

And again, who gets the money? Very successful artists like Paul McCartney, who has been vocal about this? Or do we have a broader conversation about how to enable people to be artists? (And then, inevitably, who gets to be called an artist.)

I did not find clarity in all this. How to resolve generative AI and copyright remains complex and confusing. But I feel better about not having an answer.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Sovereign

On May 19, a group of technologists, researchers, economists, and scientists published an open letter calling on British prime minister Keir Starmer to prioritize the development of “sovereign advanced AI capabilities through British startups and industry”. I am one of the many signatories. Britain’s best shot at the kind of private AI research lab under discussion was DeepMind, sold to Google in 2014; the country has nothing now that’s domestically owned.

Those with long memories know that LEO was the first computer used for a business application – running the Lyons tea rooms. In the 1980s, Britain led personal computing.

But the bigger point is less about AI specifically and more about information technology generally. At a panel at Computers, Privacy, and Data Protection in 2022, the former MEP Jan Philipp Albrecht, who was the rapporteur for the General Data Protection Regulation, outlined his work building up cloud providers and local hardware as the Minister for Energy, Agriculture, the Environment, Nature and Digitalization of Schleswig-Holstein. As he explained, the public sector loses a great deal when it takes the seemingly easier path of buying proprietary software and services. Among the lost opportunities: building capacity and sovereignty. While his organization used services from all over the world, it set its own standards, one of which was that everything must be open source.

As the events of recent years are making clear, proprietary software fails if you can’t trust the country it’s made in, since you can’t wholly audit what it does. Even more important, once a company is bedded in, it can be very hard to excise it if you want to change supplier. That “customer lock-in” is, of course, a long-running business strategy, and it doesn’t only apply to IT. If we’re going to spend large sums of money on IT, there’s some logic to investing it in building up local capacity; one of the original goals in setting up the Government Digital Service was shifting to smaller, local suppliers instead of automatically turning to the largest and most expensive international ones.

The letter calls relying on US technology companies and services a “national security risk”. Elsewhere, I have argued that we must find ways to build trusted systems out of untrusted components, but the problem here is more complex because of the sensitivity of government data. Both the US and China have the right to command access to data stored by their companies, and the US in particular does not grant foreigners even the few privacy rights it grants its citizens.

It’s also long past time for countries to stop thinking in terms of “winning the AI race”. AI is an umbrella term that has no single meaning. Instead, it would be better to think in terms of there being many applications of AI, and trying to build things that matter.

***

As predicted here two years ago, AI models are starting to collapse, Steven J. Vaughan-Nichols writes at The Register.

The basic idea is that as the web becomes polluted with synthetically-generated data, the quality of the data used to train the large language models degrades, so the models themselves become less useful. Even without that, the AI-with-everything approach many search engines are taking is poisoning their usefulness. Model collapse just makes it worse.

We would point out to everyone frantically adding “AI” to their services that the historical precedents are not on their side. In the late 1990s, every site felt it had to be a portal, so they all had search, and weather, and news headlines, and all sorts of crap that made it hard to find the search results. The result? Google disrupted all that with a clean, white page with no clutter (those were the days). Users all switched. Yahoo is the most obvious survivor from that period, and I think it’s because it does have some things – notably financial data – that it does extremely well.

It would be more satisfying to be smug about this, but the big issue is that companies keep spraying toxic pollution over the services we all need to be able to use. How bad does it have to get before they stop?

***

At Privacy Law Scholars this week, in a discussion of modern corporate oligarchs and their fantasies of global domination, an attendee asked if any of us had read the terms of service for Starlink. She wanted to draw our attention to the following passage, under “Governing Law”:

For Services provided to, on, or in orbit around the planet Earth or the Moon, this Agreement and any disputes between us arising out of or related to this Agreement, including disputes regarding arbitrability (“Disputes”) will be governed by and construed in accordance with the laws of the State of Texas in the United States. For Services provided on Mars, or in transit to Mars via Starship or other spacecraft, the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities. Accordingly, Disputes will be settled through self-governing principles, established in good faith, at the time of Martian settlement.

Reminder: Starlink has contracts worth billions of dollars to provide Internet infrastructure in more than 100 countries.

So who’s signing this?

Illustrations: The Martian (Ray Walston) in the 1963-1966 TV series My Favorite Martian.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to compete to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “The Brussels Effect”, derived from the “California Effect”, which sees compliance with regulation voluntarily shift towards the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, it depends on companies complying with the most stringent rules so that they are compliant everywhere, and on their wanting and needing to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well-documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

It was much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016, Bayer bought Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power based on existing concentration. And, selling them cloud and computing infrastructure services, the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Hallucinations

It makes obvious sense that the people most personally affected by a crime should have the right to present their views in court. Last week, in Arizona, Stacey Wales, the sister of Chris Pelkey, who was killed in a road rage shooting in 2021, delegated her victim impact statement offering forgiveness to Pelkey’s artificially-generated video likeness. According to Cy Neff at the Guardian, the judge praised this use of AI and said he felt the forgiveness was “genuine”. It is unknown whether it affected the sentence he handed down.

It feels instinctively wrong to use a synthesized likeness this way to represent living relatives, who could have written any script they chose – even, had they so desired, one presenting this reportedly peaceful religious man’s views as a fierce desire for vengeance. *Of course* seeing it acted out by a movie-like AI simulation of the deceased victim packs emotional punch. But that doesn’t make it *true* or, as Wales calls it at the YouTube video link above, “his own impact statement”. It remains the thoughts of his family and friends, culled from their possibly imperfect memories of things Pelkey said during his lifetime, and if it’s going to be presented in a court, it ought to be presented by the people who wrote the script.

This is especially true because humans are so susceptible to forming relationships with *anything*, whether it’s a basketball that reminds you of home, as in the 2000 movie Cast Away, or a chatbot that appears to answer your questions, as in 1966’s ELIZA or today’s ChatGPT.

There is a lot of that about. Recently, Miles Klee reported at Rolling Stone that numerous individuals are losing loved ones to “spiritual fantasies” engendered by intensive and deepening interaction with chatbots. This reminds me of Ouija boards, which seem to respond to people’s questions but in reality react to small muscle movements in the operators’ hands.

Ouija boards “lie” because their operators unconsciously guide them to spell out words via the ideomotor effect. Those small, unnoticed muscle movements are also, more impressively, responsible for table tilting. The operators add to the illusion by interpreting the meaning of whatever the Ouija board spells out.

Chatbots “hallucinate” because the underlying large language models, based on math and statistics, predict the most likely next words and phrases with no understanding of meaning. But a conundrum is developing: as the large language models underlying chatbots improve, the bots are becoming *more*, not less, prone to deliver untruths.
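
To make the mechanism concrete, here is a minimal, self-contained Python sketch of next-token selection over an invented probability table – mine, not drawn from any of the systems discussed. It only illustrates the point the column makes: the model ranks continuations by likelihood, and “true” never enters into it.

```python
# Toy illustration of next-token prediction. The probabilities are invented;
# a real large language model computes them from billions of parameters,
# but the selection step is the same: rank continuations by likelihood.

toy_model = {
    # context -> possible next words with (made-up) probabilities
    "The capital of Australia is": {
        "Sydney": 0.55,     # written more often in training text, so more "likely"
        "Canberra": 0.40,   # the correct answer, but less frequently written
        "Melbourne": 0.05,
    },
}

def predict_next(context: str) -> str:
    """Return the most probable next word for a context.

    Nothing here checks whether the answer is true; the "model" only
    knows which continuation is statistically most likely.
    """
    candidates = toy_model[context]
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    context = "The capital of Australia is"
    print(context, predict_next(context))  # prints "Sydney": fluent, confident, wrong
```

When “most likely” and “true” diverge, the output reads as a hallucination; nothing in the selection step notices.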

At The Register, Thomas Claburn reports that researchers at Carnegie-Mellon, the University of Michigan, and the Allen Institute for AI find that AI models will “lie” in order to meet the goals set for them. In the example in their paper, a chatbot instructed to sell a new painkiller that the company knows is more addictive than its predecessor will deny its addictiveness in the interests of making the sale. This is where who owns the technology and sets its parameters is crucial.

This result shouldn’t be too surprising. In her 2019 book, You Look Like a Thing and I Love You, Janelle Shane highlighted AIs’ tendency to come up with “short-cuts” that defy human expectations and limitations to achieve the goals set for them. No one has yet reported that a chatbot has been intentionally programmed to lead its users from simple scheduling to a belief that they are talking to a god – or are one themselves, as Klee reports. This seems more like operator error, as unconscious as the ideomotor effect.

OpenAI reported at the end of April that it was rolling back GPT-4o to an earlier version because the chatbot had become too “sycophantic”. The chatbot’s tendency to flatter its users apparently derived from the company’s attempt to make it “feel more intuitive”.

It’s less clear why Elon Musk’s Grok has been shoehorning rants alleging white genocide in South Africa into every answer it gives to every question, no matter how unrelated, as Kyle Orland reports at Ars Technica.

Meanwhile, at the New York Times Cade Metz and Karen Weise find that AI hallucinations are getting worse as the bots become more powerful. They give examples, but we all have our own: irrelevant search results, flat-out wrong information, made-up legal citations. Metz and Weise say “it’s not entirely clear why”, but note that the reasoning systems that DeepSeek so explosively introduced in February are more prone to errors, and that those errors compound the more time they spend stepping through a problem. That seems logical, just as a tiny error in an early step can completely derail a mathematical proof.

This all being the case, it would be nice if people would pause to rethink how they use this technology. At Lawfare, Cullen O’Keefe and Ketan Ramakrishnan are already warning about the next stage, agentic AI, which is being touted as a way to automate law enforcement. Lacking fear of punishment, AIs don’t have the motivations humans do to follow the law (nor can a mistargeted individual reason with them). Therefore, they must be instructed to follow the law, with all the problems of translating human legal code into binary code that implies.

I miss so much the days when you could chat online with a machine and know that really underneath it was just a human playing pranks.

Illustrations: “Mystic Tray” Ouija board (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Three times a monopolist

It’s multiply official: Google is a monopoly.

The latest such ruling is a decision handed down on April 17 by Judge Leonie Brinkema in United States of America v. Google LLC, a 2023 case that focuses on Google’s control over both the software publishers use to manage online ads and the exchanges where those same ads are bought and sold. In August 2024, Judge Amit P. Mehta also ruled Google was a monopoly; that United States of America v. Google LLC, filed in 2020, focused on Google’s payments to mobile phone companies, wireless carriers, and browser companies to promote its search engine. Before *that*, in 2023 a jury found in Epic Games v. Google that Google violated antitrust laws with respect to the Play Store and Judge James Donato ordered it to allow alternative app stores on Android devices by November 1, 2024. Appeals are proceeding.

Google has more trouble to look forward to. At The Overspill, veteran journalist Charles Arthur is one of the class representatives bringing a UK case against Google. The AdTechClaim case seeks £13.6 billion in damages, claiming that Google’s adtech system has diverted revenues that otherwise would have accrued to UK-based website and app publishers. Reuters reported last week on the filing of a second UK challenge, a £5 billion suit representing thousands of businesses who claim Google manipulated the search ecosystem to block out rivals and force advertisers to rely on its platform. Finally, the Competition and Markets Authority is conducting its own investigation into the company’s search and advertising practices.

It is hard to believe that all of this will go away leaving Google intact, despite the company’s resistance to each one. We know from past experience that fines change nothing; only structural remedies will.

The US findings against Google seem to have taken some commentators by surprise, perhaps because they assumed the Trump administration would have a dampening effect. Trump, however, seems more exercised about the EU’s and UK’s mounting regulatory actions. Just this week the European Commission fined Apple €500 million and Meta €200 million, the first fines under the Digital Markets Act, and ordered them to open up user choice within 60 days. The White House has called some of these recent fines a new form of economic blackmail.

I’ve observed before that antitrust cases are often well behind the times, partly because these cases take so long to litigate. It wasn’t until 2024 that Google lost its 2017 appeal to the European Court of Justice in the Foundem search case and was ordered to pay a €2.4 billion fine. That case was first brought in 2009.

In 2014, I imagined that Google’s recently-concluded purchase of Nest smart thermostats might form the basis of an antitrust suit in 2024. Obviously, that didn’t happen; I wish instead the UK government had blocked Google’s acquisition of DeepMind. Partly because the pre-monopolization of AI could perhaps have been avoided, and partly because I’ve been reading Angus Hanton’s recent book, Vassal State, and keeping DeepMind would have hugely benefited Britain.

Unfortunately, forcing Google to divest DeepMind is not on anyone’s post-trial list of possible remedies. In October, the Department of Justice filed papers listing a series of possibilities for the search engine case. The most-discussed of these was ordering Google to divest Chrome. In a sensible world, however, one must hope remedies will be found that address the differing problems these cases were brought to address.

At Big, Matt Stoller suggests that the latest judgment increases the likelihood that Google will be broken up, the first such order since AT&T in 1984. The DoJ, now under Trump’s control, could withdraw, but, Stoller points out, the list of plaintiffs includes several state attorneys general, and the DoJ can’t dictate what they do.

Trying to figure out what remedies would make real change is a difficult game, as the folks at the April 20 This Week In Tech podcast say. This is unlike the issue around Google’s and Apple’s app stores that the European Commission fines cover, where it’s comparatively straightforward to link opening up their systems to alternatives and changing their revenue structure to ensuring that app makers and publishers get a fairer percentage.

Breaking up the company to separate Chrome, search, adtech, and Android would disable the company’s ability to use those segments as levers. In such a situation Google and/or its parent, Alphabet, could not, as now, use them in combination to maintain its ongoing data collection and build a durable advantage in training sophisticated models to underpin automated services. But would forcing the company to divest those segments create competition in any of them? Each would likely remain dominant in its field.

Yet something must be done. Even though Microsoft was not in the end broken up in 2001 when the incoming Bush administration settled the case, the experience of being investigated and found guilty of monopolistic behavior changed the company. None of today’s technology companies are likely to follow suit unless they’re forced; these companies are too big, too powerful, too rich, and too arrogant. If Google is not forced to change its structure or its business model, all of them will be emboldened to behave in even worse ways. As unimaginable as that seems.

Illustrations: “The kind of anti-trust legislation we need”, by J. S. Pughe (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Predatory inclusion

The recent past is a foreign country; they view the world differently there.

At last week’s We Robot conference on technology, policy, and law, the indefatigably detail-oriented Sue Glueck was the first to call out a reference to the propagation of transparency and accountability by the “US and its allies” as newly out of date. From where we were sitting in Windsor, Ontario, its conjoined fraternal twin, Detroit, Michigan, was clearly visible just across the river. But: recent events.

As Ottawa law professor Teresa Scassa put it, “Before our very ugly breakup with the United States…” Citing Anu Bradford, she went on, “Canada was trying to straddle these two [US and EU] digital empires.” Canada’s human rights and privacy traditions seem closer to those of the EU, even though shared geography means the US and Canada are superficially more similar.

We’ve all long accepted that the “technology is neutral” claim of the 1990s is nonsense – see, just this week, Luke O’Brien’s study at Mother Jones of the far-right origins of the web-scraping facial recognition company Clearview AI. The paper Glueck called out, co-authored in 2024 by Woody Hartzog, wants US lawmakers to take a tougher approach to regulating AI and ban entirely some systems that are fundamentally unfair. Facial recognition, for example, is known to be inaccurate and biased, but improving its accuracy raises new dangers of targeting and weaponization, a reality Cynthia Khoo called “predatory inclusion”. If he were writing this paper now, Hartzog said, he would acknowledge that it’s become clear that some governments, not just Silicon Valley, see AI as a tool to destroy institutions. I don’t *think* he was looking at the American flags across the water.

Later, Khoo pointed out her paper on current negotiations between the US and Canada to develop a bilateral law enforcement data-sharing agreement under the US CLOUD Act. The result could allow US police to surveil Canadians at home, undermining the country’s constitutional human rights and privacy laws.

In her paper, Clare Huntington proposed deriving approaches to human relationships with robots from family law. It can, she argued, provide analogies to harms such as emotional abuse, isolation, addiction, invasion of privacy, and algorithmic discrimination. In response, Kate Darling, who has long studied human responses to robots, raised an additional factor exacerbating the power imbalance in such cases: companies, “because people think they’re talking to a chatbot when they’re really talking to a company.” That extreme power imbalance is what matters when trying to mitigate risk (see also Sarah Wynn-Williams’ recent book and Congressional testimony on Facebook’s use of data to target vulnerable teens).

In many cases, however, we are not agents deciding to have relationships with robots but what AJung Moon called “incops”, or “incidentally co-present”. In the case of the Estonian Starship delivery robots you can find in cities from San Francisco to Milton Keynes, that broad category includes human drivers, pedestrians, and cyclists who share their spaces. In a study, Adeline Schneider found that white men tended to be more concerned about damage to the robot, where others worried more about communication, the data they captured, safety, and security. Delivery robots are, however, typically designed with only direct users in mind, not others who may have to interact with them.

These are all social problems, not technological ones, as conference chair Kristen Thomasen observed. Carys Craig later modified it: technology “has compounded the problems”.

This is the perennial We Robot question: what makes robots special? What qualities require new laws? Just as we asked about the Internet in 1995, when are robots just new tools for old rope, and when do they bring entirely new problems? In addition, who is responsible in such cases? This was asked in a discussion of Beatrice Panattoni’s paper on Italian proposals to impose harsher penalties for crime committed with AI or facilitated by robots. The pre-conference workshop raised the same question. We already know the answer: everyone will try to blame someone or everyone else. But in formulating a legal response, will we tinker around the edges or fundamentally question the criminal justice system? Andrew Selbst helpfully summed up: “A law focusing on specific harms impedes a structural view.”

At We Robot 2012, it was novel to push lawyers and engineers to think jointly about policy and robots. Now, as more disciplines join the conversation, familiar Internet problems surface in new forms. Human-robot interaction is a four-dimensional version of human-computer interaction; I got flashbacks to old hacking debates when Elizabeth Joh wondered in response to Panattoni’s paper if transforming a robot into a criminal should be punished; and a discussion of the use of images of medicalized children for decades in fundraising invoked publicity rights and tricky issues of consent.

Also consent-related, lawyers are starting to use generative AI to draft contracts, a step that Katie Szilagyi and Marina Pavlović suggested further diminishes the bargaining power already lost to “clickwrap”. Automation may remove our remaining ability to object from more specialized circumstances than the terms and conditions imposed on us by sites and services. Consent traditionally depends on a now-absent “meeting of minds”.

The arc of We Robot began with enthusiasm for robots, which waned as big data and generative AI became players. Now, robots/AI are appearing as something being done to us.

Illustrations: Detroit, seen across the river from Windsor, Ontario, with a Canadian Coast Guard boat in the foreground.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Catoptromancy

It’s a commonly held belief that technology moves fast, and law slowly. This week’s We Robot workshop day gave the opposite impression: these lawyers are moving ahead, while the technology underlying robots is moving slower than we think.

A mainstay of this conference over the years has been Bill Smart‘s and Cindy Grimm‘s demonstrations of the limitations of the technologies that make up robots. This year, that gambit was taken up by Jason Millar and AJung Moon. Their demonstration “robot” comprised six people – one brain, four sensors, and one color sensor. Ordering it to find the purple shirt quickly showed that robot programming isn’t getting any easier. The human “sensors” can receive useful information only as far as their outstretched fingertips, and even then the signal they receive is minimal.

“Many of my students program their robots into a ditch and can’t understand why,” Moon said. It’s the required specificity. For one thing, a color sensor doesn’t see color; it sends a stream of numeric values. It’s all 1s and 0s and tiny engineering decisions whose existence is never registered at the policy level but make all the difference. One of her students, for example, struggled with a robot that kept missing the croissant it was supposed to pick up by 30 centimeters. The explanation turned out to be that the sensor was so slow that the robot was moving a half-second too early, based on historical information. They had to insert a pause before the robot could get it right.

So much of the way we talk about robots and AI misrepresents those inner workings. A robot can’t “smell honey”; it merely has a sensor that’s sensitive to some chemicals and not others. It can’t “see purple” if its sensors are the usual red, green, blue. Even green may not be identifiable to an RGB sensor if the lighting is such that reflections make a shiny green surface look white. Faster and more diverse sensors won’t change the underlying physics. How many lawmakers understand this?
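
As a hypothetical illustration of that point, here is a minimal Python sketch of the kind of thresholding a robot programmer ends up writing. The readings and thresholds are invented and do not come from the workshop demo; the point is that the “sensor” delivers only numbers, the colour names are rules a human imposes, and a specular highlight on a shiny green surface can push all three channels high enough to be classified as white.

```python
# A colour "sensor" delivers numbers, not colours. Classifying those numbers
# is a human-written rule, and the rule breaks when the physics intervenes.

def classify_rgb(r: int, g: int, b: int) -> str:
    """Crude classification of an 8-bit RGB reading (thresholds are invented)."""
    if r > 200 and g > 200 and b > 200:
        return "white"          # all channels high - e.g. a specular reflection
    if g > 150 and g > r + 50 and g > b + 50:
        return "green"
    if r > 120 and b > 120 and g < 100:
        return "purple-ish"     # RGB has no 'purple' channel; this is a guess
    return "unknown"

# A matte green surface under even light:
print(classify_rgb(40, 180, 60))    # -> "green"

# The same green surface, shiny, catching a reflection of the room lights:
print(classify_rgb(230, 245, 235))  # -> "white"
```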

Related: what does it mean to be a robot? Most people attach greater intelligence to things that can move autonomously. But a modern washing machine is smarter than a Roomba, while an iPhone is smarter than either but can’t affect the physical world, as Smart observed at the very first We Robot, in 2012.

This year we are in Canada – to be precise, in Windsor, Ontario, looking across the river to Detroit, Michigan. Canadian law, like the country itself, is a mosaic: common law (inherited from Britain), civil law (inherited from France), and myriad systems of indigenous peoples’ law. Much of the time, said Suzie Dunn, new technology doesn’t require new law so much as reinterpretation and, especially, enforcement of existing law.

“Often you can find criminal law that already applies to digital spaces, but you need to educate the legal system how to apply it,” she said. Analogous: in the late 1990s, editors of the technology section at the Daily Telegraph had a deal-breaking question: “Would this still be a story if it were about the telephone instead of the Internet?”

We can ask that same question about proposed new law. Dunn and Katie Szilagyi asked what robots and AI change that requires a change of approach. They set us to consider scenarios to study this question: an autonomous vehicle kills a cyclist; an autonomous visa system denies entry to a refugee who was deemed in her own country to have committed a crime when facial recognition software identified her in images of an illegal LGBTQ protest. In the first case, it’s obvious that all parties will try to blame someone – or everyone – else, probably, as Madeleine Clare Elish suggested in 2016, on the human driver, who becomes the “moral crumple zone”. The second is the kind of case the EU’s AI Act sought to handle by giving individuals the right to meaningful information about the automated decision made about them.

Nadja Pelkey, a curator at Art Windsor-Essex, provided a discussion of AI in a seemingly incompatible context. Citing Georges Bataille, who in 1929 saw museums as mirrors, she invoked the word “catoptromancy”, the use of mirrors in mystical divination. Social and political structures are among the forces that can distort the reflection. So are the many proliferating AI tools such as “personalized experiences” and other types of automation, which she called “adolescent technologies without legal or ethical frameworks in place”.

Where she sees opportunities for AI is in what she called the “invisible archives”. These include much administrative information, material that isn’t digitized, ephemera such as exhibition posters, and publications. She favors small tools and small private models used ethically so they preserve the rights of artists and cultural contexts, and especially consent. In a schematic she outlined a system that can’t be scraped, that allows data to be withdrawn as well as added, and that enables curiosity and exploration. It’s hard to imagine anything less like the “AI” being promulgated by giant companies. *That* type of AI was excoriated in a final panel on technofascism and extractive capitalism.

It’s only later I remember that Pelkey also said that catoptromancy mirrors were first made of polished obsidian.

In other words, black mirrors.

Illustrations: Divination mirror made of polished obsidian by artisans of the Aztec Empire of Mesoamerica between the 15th and 16th centuries (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot 2012-

On the eve of We Robot 2025, here are links to my summaries of previous years. 2014 is missing; I didn’t make it that year for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early about the coming conflicts in robots, law, and policy.

2024 No conference.

2023 The end of cool. After struggling to design a drone delivery service that had any benefits over today’s cycling couriers, we find ourselves less impressed by a robot that can do somersaults but can’t do anything useful.

2022 Insert a human. “Robots” are now “sociotechnical systems”.

Workshop day Coding ethics. The conference struggles to design an ethical robot.

2021 Plausible diversions. How will robots reshape human space?

Workshop day Is the juice worth the squeeze? We think about how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone. AI exploitation becomes much more visible.

2019 Math, monsters, and metaphors. The trolley problem is dissected; the true danger is less robots than the “pile of math that does some stuff”.

Workshop day The Algernon problem. New participants remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception. The conference tries to tease out what makes robots different and revisits Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car.

Workshop day Late, noisy, and wrong. Engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate. Discussion of risks this year focused on the intermediate situation, when automation and human norms clash.

2016 Humans all the way down. Madeleine Clare Elish introduces “moral crumple zones”.

Workshop day: The lab and the world. Bill Smart uses conference attendees in formation to show why building a robot is difficult.

2015 Multiplicity. A robot pet dog begs its owner for an upgraded service subscription.

2014 Missed conference

2013 Cautiously apocalyptic. Diversity of approaches to regulation will be needed to handle the diversity of robots.

2012 A really fancy hammer with a gun. Unsentimental engineer Bill Smart provided the title.

wg

Optioned

The UK’s public consultation on creating a copyright exception for AI model training closed on Tuesday, and it was profoundly unsatisfying.

Many, many creators and rights holders (who are usually on opposing sides when it comes to contract negotiations) have opposed the government’s proposals. Every national newspaper ran the same Make It Fair front page opposing them; musicians released a silent album. In the Guardian, the peer and independent filmmaker Beeban Kidron calls the consultation “fixed” in favor of the AI companies. Kidron’s resume includes directing Bridget Jones: The Edge of Reason (2004) and the meticulously researched 2013 study of teens online, InRealLife, and she goes on to call the government’s preferred option a “wholesale transfer of wealth from hugely successful sector that invests hundreds of millions in the UK to a tech industry that extracts profit that is not assured and will accrue largely to the US and indeed China.”

The consultation lists four options: leave the situation as it is; require AI companies to get licenses to use copyrighted work (like everyone else has to); allow AI companies to use copyrighted works however they want; and allow AI companies to use copyrighted works but grant rights holders the right to opt out.

I don’t like any of these options. I do believe that creators will figure out how to use AI tools to produce new and valuable work. I *also* believe that rights holders will go on doing their best to use AI to displace or impoverish creators. That is already happening in journalism and voice acting, and was a factor in the 2023 Hollywood writers’ strike. AI companies have already shown that they won’t necessarily abide by arrangements that lack the force of law. The UK government acknowledged this in its consultation document, saying that “more than 50% of AI companies observe the longstanding Internet convention robots.txt.” So almost half of them *don’t*.
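
“Observing robots.txt” just means voluntarily checking a site’s published crawl preferences before fetching its pages. Here is a minimal sketch using Python’s standard library; the site URL and the “ExampleBot” user-agent are placeholders, and nothing in the convention forces a crawler that dislikes the answer to stop.

```python
# robots.txt is a voluntary convention: a crawler asks the file whether it
# may fetch a page, but nothing enforces the answer.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

# "ExampleBot" is a hypothetical crawler user-agent.
if rp.can_fetch("ExampleBot", "https://example.com/some-article"):
    print("robots.txt permits fetching this page")
else:
    print("robots.txt asks us not to fetch this page - compliance is optional")
```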

At Pluralistic, Cory Doctorow argued in February 2023 that copyright won’t solve the problems facing creators. His logic is simple: after 40 years of expanding copyright terms (from a maximum of 56 years in 1975 to “author’s life plus 70” now), creators are being paid *less* than they were then. Yes, I know Taylor Swift has broken records for tour revenues and famously took back control of her own work, but millions of others need, as Doctorow writes, structural market changes. Doctorow highlights what happened with sampling: the copyright maximalists won, and now musicians are required to sign away sampling rights to their labels, who pocket the resulting royalties.

For this sort of reason, the status quo, which the consultation calls “option 0”, seems likely to open the way to lots more court cases and conflicting decisions, but provide little benefit to anyone. A licensing regime (“option 1”) will likely go the way of sampling. If you think of AI companies as inevitably giant “pre-monopolized” outfits, as Vladan Joler described them at last year’s Computers, Privacy, and Data Protection conference, “option 2” looks like simply making them richer and more powerful at the expense of everyone else in the world. But so does “option 3”, since that *also* gives AI companies the ability to use anything they want. Large rights holders will opt out and demand licensing fees, which they will keep, and small ones will struggle to exercise their rights.

As Kidron said, the government’s willingness to take chances with the country’s creators’ rights is odd, since intellectual property is a sector in which Britain really *is* a world leader. On the other hand, as Glyn Moody says, all of it together is an anthill compared to the technology sector.

None of these choices is a win for creators or the public. The government’s preferred option 3 seems unlikely to achieve its twin goals of making Britain a world leader in AI and mainlining AI into the veins of the nation, as the government put it last month.

China and the US both have complete technology stacks *and* gigantic piles of data. The UK is likely better able to matter in AI development than many countries – see for example DeepMind, which was founded here in 2010. On the other hand, also see DeepMind for the probable future: Google bought it in 2014, and now its technology and profits belong to that giant US company.

At Walled Culture, Glyn Moody argued last May that requiring the AI companies to pay copyright industries makes no sense; he regards using creative material for training purposes as “just a matter of analysis” that should not require permission. And, he says correctly, there aren’t enough such materials anyway. Instead, he and Mike Masnick at Techdirt propose that the generative AI companies should pay creators of all types – journalists, musicians, artists, filmmakers, book authors – to provide them with material they can use to train their models, and the material so created should be placed in the public domain. In turn it could become new building blocks the public can use to produce even more new material. As a model for supporting artists, patronage is old.

I like this effort to think differently a lot better than any of the government’s options.

Illustrations: Tuesday’s papers, unprecedentedly united to oppose the government’s copyright plan.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently-fired colleague; a contentious panel on content moderation; and the yay, American innovation! approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, said remaining member Beth Williams. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review is ongoing of the EU-US Privacy Framework under Executive Order 14086 (2022). Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating the terms and conditions, they bring Meta profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the energy and commerce committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country” and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.