It’s a commonly held belief that technology moves fast, and law slowly. This week’s We Robot workshop day gave the opposite impression: these lawyers are moving ahead, while the technology underlying robots is moving slower than we think.
A mainstay of this conference over the years has been Bill Smart’s and Cindy Grimm’s demonstrations of the limitations of the technologies that make up robots. This year, that gambit was taken up by Jason Millar and AJung Moon. Their demonstration “robot” comprised six people – one brain, four sensors, and one color sensor. Ordering it to find the purple shirt quickly showed that robot programming isn’t getting any easier. The human “sensors” can receive useful information only as far as their outstretched fingertips, and even then the signal they receive is minimal.
“Many of my students program their robots into a ditch and can’t understand why,” Moon said. It’s the required specificity. For one thing, a color sensor doesn’t see color; it sends a stream of numeric values. It’s all 1s and 0s and tiny engineering decisions that never register at the policy level but make all the difference. One of her students, for example, struggled with a robot that kept missing, by 30 centimeters, the croissant it was supposed to pick up. The explanation turned out to be that the sensor was so slow that the robot was moving half a second too early, acting on historical information. They had to insert a pause before the robot could get it right.
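To make that failure mode concrete, here is a minimal sketch in Python – an invented SlowSensor class with made-up timings, not anything from the students’ actual code – of a controller acting on stale readings and the pause that fixes it:

```python
import time

SENSOR_LAG = 0.5  # assumed lag: a reading describes the world half a second ago


class SlowSensor:
    """Toy stand-in for a laggy sensor: read() reports what the scene looked
    like SENSOR_LAG seconds ago, not what it looks like right now."""

    def __init__(self):
        self._history = []  # (timestamp, observation) pairs

    def observe(self, observation):
        """The world changes in front of the sensor."""
        self._history.append((time.monotonic(), observation))

    def read(self):
        """Return the newest observation old enough to have been processed."""
        cutoff = time.monotonic() - SENSOR_LAG
        processed = [obs for ts, obs in self._history if ts <= cutoff]
        return processed[-1] if processed else None


sensor = SlowSensor()
sensor.observe("croissant 30 cm to the left")   # where it used to be
time.sleep(SENSOR_LAG)                          # half a second passes...
sensor.observe("croissant straight ahead")      # ...and the croissant has moved

print("move immediately:", sensor.read())       # stale reading: the robot misses
time.sleep(SENSOR_LAG)                          # the students' fix: pause first
print("move after pause:", sensor.read())       # fresh reading: the grab succeeds
```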
So much of the way we talk about robots and AI misrepresents those inner workings. A robot can’t “smell honey”; it merely has a sensor that’s sensitive to some chemicals and not others. It can’t “see purple” if its sensors are the usual red, green, blue. Even green may not be identifiable to an RGB sensor if the lighting is such that reflections make a shiny green surface look white. Faster and more diverse sensors won’t change the underlying physics. How many lawmakers understand this?
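Purely as illustration, with invented thresholds and readings: “seeing” a color is really a decision about three numbers, and a reflection that saturates all three channels turns the same green surface into “white”.

```python
def classify(r: int, g: int, b: int) -> str:
    """Crude color label from 8-bit RGB readings (thresholds are arbitrary)."""
    if r > 200 and g > 200 and b > 200:
        return "white"        # all channels near maximum: glare or bright reflection
    if g > r and g > b:
        return "green"
    if r > 100 and b > 100 and g < 100:
        return "purple"       # 'purple' is only ever inferred from red plus blue
    return "unknown"


print(classify(40, 180, 60))     # matte green surface -> "green"
print(classify(230, 250, 235))   # the same surface under a glare -> "white"
print(classify(140, 60, 160))    # what a human brain would call purple
```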
Related: what does it mean to be a robot? Most people attach greater intelligence to things that can move autonomously. But a modern washing machine is smarter than a Roomba, while an iPhone is smarter than either but can’t affect the physical world, as Smart observed at the very first We Robot, in 2012.
This year we are in Canada – to be precise, in Windsor, Ontario, looking across the river to Detroit, Michigan. Canadian law, like the country itself, is a mosaic: common law (inherited from Britain), civil law (inherited from France), and myriad systems of indigenous peoples’ law. Much of the time, said Suzie Dunn, new technology doesn’t require new law so much as reinterpretation and, especially, enforcement of existing law.
“Often you can find criminal law that already applies to digital spaces, but you need to educate the legal system how to apply it,” she said. An analogous test: in the late 1990s, editors of the technology section at the Daily Telegraph had a deal-breaking question: “Would this still be a story if it were about the telephone instead of the Internet?”
We can ask that same question about proposed new law. Dunn and Katie Szilagyi asked what robots and AI change that requires a change of approach. They set us scenarios to probe this question: an autonomous vehicle kills a cyclist; an autonomous visa system denies entry to a refugee after facial recognition software identifies her in images of an LGBTQ protest that is illegal in her own country, branding her a criminal there. In the first case, it’s obvious that every party will try to blame someone else – probably, as Madeleine Clare Elish suggested in 2016, the human driver, who becomes the “moral crumple zone”. The second is the kind of case the EU’s AI Act sought to handle by giving individuals the right to meaningful information about automated decisions made about them.
Nadja Pelkey, a curator at Art Windsor-Essex, brought AI into a seemingly incompatible context. Citing Georges Bataille, who in 1929 saw museums as mirrors, she invoked the word “catoptromancy”, the use of mirrors in mystical divination. Social and political structures are among the forces that can distort the reflection. So are the many proliferating AI tools, such as “personalized experiences” and other types of automation, which she called “adolescent technologies without legal or ethical frameworks in place”.
Where she sees opportunities for AI is in what she called the “invisible archives”. These include much administrative information, material that isn’t digitized, ephemera such as exhibition posters, and publications. She favors small tools and small private models used ethically, so that they preserve artists’ rights, cultural contexts, and, above all, consent. In a schematic she outlined a system that can’t be scraped, that allows data to be withdrawn as well as added, and that enables curiosity and exploration. It’s hard to imagine anything less like the “AI” being promulgated by giant companies. *That* type of AI was excoriated in a final panel on technofascism and extractive capitalism.
It’s only later that I remember Pelkey also said that catoptromancy mirrors were first made of polished obsidian.
In other words, black mirrors.
Illustrations: Divination mirror made of polished obsidian by artisans of the Aztec Empire of Mesoamerica between the 15th and 16th centuries (via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.