It makes obvious sense that the people most personally affected by a crime should have the right to present their views in court. Last week, in Arizona, Stacey Wales, the sister of Chris Pelkey, who was killed in a road rage shooting in 2021, delegated her victim impact statement offering forgiveness to an artificially generated video likeness of Pelkey. According to Cy Neff at the Guardian, the judge praised this use of AI and said he felt the forgiveness was “genuine”. It is unknown whether it affected the sentence he handed down.
It feels instinctively wrong to use a synthesized likeness of the dead this way to speak for living relatives, who could have written any script they chose – even, had they so desired, one presenting this reportedly peaceful religious man’s views as a fierce desire for vengeance. *Of course* seeing it acted out by a movie-like AI simulation of the deceased victim packs an emotional punch. But that doesn’t make it *true* or, as Wales calls it in the YouTube video linked above, “his own impact statement”. It remains the thoughts of his family and friends, culled from their possibly imperfect memories of things Pelkey said during his lifetime, and if it’s going to be presented in court, it ought to be presented by the people who wrote the script.
This is especially true because humans are so susceptible to forming relationships with *anything*, whether it’s a basketball that reminds you of home, as in the 2000 movie Cast Away, or a chatbot that appears to answer your questions, as in 1966’s ELIZA or today’s ChatGPT.
There is a lot of that about. Recently, Miles Klee reported at Rolling Stone that numerous individuals are losing loved ones to “spiritual fantasies” engendered by intensive and deepening interaction with chatbots. This reminds me of Ouija boards, which seem to respond to people’s questions but in reality react to small muscle movements in the operators’ hands.
Ouija boards “lie” because their operators unconsciously guide them to spell out words via the ideomotor effect. Those small, unnoticed muscle movements are also, more impressively, responsible for table tilting. The operators add to the illusion by interpreting the meaning of whatever the Ouija board spells out.
Chatbots “hallucinate” because the underlying large language models, built on mathematics and statistics, predict the most likely next words and phrases with no understanding of meaning. But a conundrum is developing: as those models improve, the bots are becoming *more*, not less, prone to delivering untruths.
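To make that mechanic concrete, here is a deliberately tiny sketch of next-word prediction – a word-frequency table, nothing like a real neural network – showing how a system can extend a prompt with whatever continuation is statistically most common in its training text, with no step anywhere that checks whether the result is true. The corpus and wrong answer are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training text. The statistically dominant phrasing is wrong on purpose:
# the point is that frequency, not truth, decides what gets generated.
corpus = (
    "the capital of australia is sydney . "
    "the capital of australia is sydney . "
    "the capital of australia is canberra ."
).split()

# Count which word most often follows each word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(prompt_word, length=4):
    """Greedily extend a prompt with the most likely next word at each step."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # most frequent wins
    return " ".join(words)

print(continue_text("capital"))
# Prints "capital of australia is sydney": the most common continuation,
# not the correct one. Nothing in the pipeline ever asks what is true.
```

Real models are incomparably more sophisticated, but the absence of a truth-checking step is the same.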
At The Register, Thomas Claburn reports that researchers at Carnegie Mellon, the University of Michigan, and the Allen Institute for AI find that AI models will “lie” in order to meet the goals set for them. In the example in their paper, a chatbot instructed to sell a new painkiller that the company knows is more addictive than its predecessor will deny its addictiveness in the interests of making the sale. This is where it becomes crucial who owns the technology and sets its parameters.
This result shouldn’t be too surprising. In her 2019 book, You Look Like a Thing and I Love You, Janelle Shane highlighted AIs’ tendency to come up with “short-cuts” that defy human expectations and limitations to achieve the goals set for them. No one has yet reported that a chatbot has been intentionally programmed to lead its users from simple scheduling to a belief that they are talking to a god – or are one themselves, as Klee reports. This seems more like operator error, as unconscious as the ideomotor effect.
OpenAI reported at the end of April that it was rolling back GPT-4o to an earlier version because the chatbot had become too “sycophantic”. The chatbot’s tendency to flatter its users apparently derived from the company’s attempt to make it “feel more intuitive”.
It’s less clear why Elon Musk’s Grok has been shoehorning rants alleging white genocide in South Africa into every answer it gives to every question, no matter how unrelated, as Kyle Orland reports at Ars Technica.
Meanwhile, at the New York Times, Cade Metz and Karen Weise find that AI hallucinations are getting worse as the bots become more powerful. They give examples, but we all have our own: irrelevant search results, flat-out wrong information, made-up legal citations. Metz and Weise say “it’s not entirely clear why”, but note that the reasoning systems DeepSeek so explosively introduced in February are more prone to errors, and that those errors compound the more time the systems spend stepping through a problem. That seems logical: just as a tiny error in an early step can completely derail a mathematical proof, a small mistake early in a chain of reasoning propagates through everything built on it.
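The arithmetic of that compounding is simple to sketch. Assuming, purely for illustration, an independent 2% chance of error at each step – an invented figure, not one from the Times piece or from any particular model – the odds of an error-free chain fall quickly as the chain lengthens:

```python
# Back-of-the-envelope compounding, assuming (purely for illustration) an
# independent 2% error rate per reasoning step. The figure is invented;
# real systems' per-step reliability varies and the steps are not independent.
per_step_accuracy = 0.98

for steps in (1, 5, 10, 20, 50):
    chain_accuracy = per_step_accuracy ** steps  # every step must be right
    print(f"{steps:>2} steps: {chain_accuracy:.0%} chance the whole chain is error-free")

# 1 step ~98%, 10 steps ~82%, 20 steps ~67%, 50 steps ~36%.
```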
This all being the case, it would be nice if people would pause to rethink how they use this technology. At Lawfare, Cullen O’Keefe and Ketan Ramakrishnan are already warning about the next stage, agentic AI, which is being touted as a way to automate law enforcement. Lacking fear of punishment, AIs don’t have the motivations humans do to follow the law (nor can a mistargeted individual reason with them). Therefore, they must be instructed to follow the law, with all the problems that implies of translating human legal code into binary code.
I miss so much the days when you could chat online with a machine and know that really underneath it was just a human playing pranks.
Illustrations: “Mystic Tray” Ouija board (via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.