The AI who was God

Three subjects dominated 2025: increasing AI infestation, expanding surveillance use of biometrics, and age verification and online safety. The last spent the year spreading across the world, including, most recently, to Louisiana. There, on December 22, a US District Court blocked the state’s age verification law on First Amendment grounds in a suit brought by the trade association NetChoice. Less than a week earlier, on similar grounds, NetChoice also won a suit in Arkansas against a law that would have penalized platforms for “using designs or algorithms” that they “know or should have known” could harm users by, for example, leading to addiction, drug use, or self-harm. The judge in this case called the law “unconstitutionally vague”. Personally, I think it would be hard to prove cause and effect.

However, much of the rest of the year felt in many ways like rinse-and-repeat, only bigger and more frustrating. The immediate future – 2026 – therefore looks like more of all of those perennial topics, especially surveillance. This time next year we will still be fighting over age verification, network neutrality, national identification systems, surveillance, data protection, security issues surrounding the Internet of Things and other “smart” devices, social media bans, and access to strong encryption, along with other perennials such as copyright and digital sovereignty.

It is however possible that AI might have gone quiet by then. Three types of reasons: financial, technical, and social.

To take finances first, concerns about the AI bubble have been building all year. In the latest of his series of diatribes about this and the “rot economy”, Ed Zitron writes that AI is bringing “enshittification Stage Four”, in which companies, having already turned on their users and customers, turn on their shareholders. Zitron traces the circular deals, the massive debt, the extravagant claims, and the disproportionately small revenues, and invokes the adage that if something can’t go on forever, it will stop.

On the technical side, no matter what Elon Musk predicts, more sober commentary at MIT Technology Review is calling for a “reset”. As Adam Becker writes in More Everything Forever, one thing that can’t go on ad infinitum is exponentially increasing computing power: exponential growth always hits resource limits. It is entirely possible that come 2027 we’ll have run out of all sorts of road on the current paradigm of “AI”. If so, expect to hear a lot more about how quantum is ready to remake the world. Generative AI will still be bigger ten years from now (just as the Internet kept growing after the dot-com boom crashed in 2000), but it won’t become sentient and fix climate change.

Brief digression. On Mastodon, Icelandic web developer Baldur Bjarnason posts that he’s hearing people claim that studies showing that large language models won’t lead to AGI are “whitewashing creationism”. Uh…huh?

On the social side, pressure is mounting to curb the industry’s growth. Politicians as different as US senator Bernie Sanders (D-VT) and Florida governor Ron DeSantis (R) are working to slow data center construction. Data centers guzzle power and water, as Zitron also explains, and nearby residents pay both directly and indirectly.

Other harms keep mounting up. This year’s Retraction Watch annual report includes myriad fake references. Salesforce fired 4,000 people, only now realizing that large language models can’t do their jobs; other companies nonetheless want to copy the move. Organizers canceled a concert by Canadian musician Ashley MacIsaac after a Google AI summary wrongly said he’d been convicted of sexual assault.

At Utah’s Park Record, Cannon Taylor reported recently that in late October an AI summary indicated that a West Jordan, Utah police officer had morphed into a frog. Simple explanation: a Harry Potter movie playing in the background had been recorded by the officer’s bodycam during an investigation. Per the story, the summary seemed human-written until, “And then the officer turned into a frog, and a magic book appeared and began granting wishes.”

The story goes on to report several different AI software trials. One, used in Summit County, has a setting that inserts deliberate errors into the summaries to expose officers who don’t thoroughly check them. With that setting turned off, the time savings over having officers write their own summaries are considerable. Summit County turned it on. The time savings vanished. The county decided to pass.

Back when pranksters used to deface web pages for fun, a pastime more embarrassing than harmful, I thought it would be much worse when they learned to make small, hard-to-detect changes that poisoned the information supply.

AI is perfect for automating this.

In their “FakeParts” paper (PDF), researchers at the Institut Polytechnique de Paris discuss a disturbing example: subtle, localized AI-driven changes to otherwise real videos. These fakeparts blend in seamlessly; identifying them is far harder than spotting a complete fake, which on its own is hard enough. The researchers warn that subtle changes to facial expressions or gestures can change the emotional content of genuine statements, great for creating targeted attacks and sophisticated disinformation campaigns.

Cut to James Thurber‘s 1939 fable, The Owl Who Was God. If AI kills us, it will be because we trust it without applying common sense.

Illustrations: Barred owl (photo by Steve Bellovin, used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Last year’s news

It was tempting to skip wrapping up 2023, because at first glance large language models seemed so thoroughly dominant (and boring to revisit), but bringing the net.wars archive list up to date showed a different story. To be fair, this is partly personal bias: from the beginning LLMs seemed fated to collapse under the weight of their own poisoning; AI Magazine predicted such an outcome as early as June.

LLMs did, however, seem to accelerate public consciousness of three long-running causes of concern: privacy and big data; corporate cooption of public and private resources; and antitrust enforcement. That acceleration may be LLMs’ more important long-term effect. In the short term, the justifiably bigger concern is their propensity to spread disinformation and misinformation in the coming year’s many significant elections.

Enforcement of data protection laws has been slowly ramping up in any case, and the fines just keep getting bigger, culminating in May’s fine against Meta for €1.2 billion. Given that fines, no matter how large, seem insignificant compared to the big technology companies’ revenues, the more important trend is issuing constraints on how they do business. That May fine came with an order to stop sending EU citizens’ data to the US. Meta responded in October by announcing a subscription tier for European Facebook users: €160 a year will buy freedom from ads. Freedom from Facebook remains free.

But Facebook is almost 20 years old; it had years in which to grow without facing serious regulation. By contrast, ChatGPT, which OpenAI launched just over a year ago, has already faced investigation by the US Federal Trade Commission and been banned temporarily by the Italian data protection authority (it was reinstated a month later with conditions). It’s also facing more than a dozen lawsuits claiming copyright infringement; the most recent of these was filed just this week by the New York Times. It has settled one of these suits by forming a partnership with Axel Springer.

It all suggests a lessening tolerance for “ask forgiveness, not permission”. As another example, Clearview AI has spent most of the four years since Kashmir Hill alerted the world to its existence facing regulatory bans and fines, and public disquiet over the rampant spread of live facial recognition continues to grow. Add in the continuing degradation of exTwitter, the increasing number of friends who say they’re dropping out of social media generally, and the revival of US antitrust actions with the FTC’s suit against Amazon, and it feels like change is gathering.

It would be a logical time for change, for an odd reason: each of the last few decades as seen through published books has had a distinctive focus with respect to information technology. I discovered this recently when, for various reasons, I reorganized my hundreds of books on net.wars-type subjects dating back to the 1980s. How they’re ordered matters: I need to be able to find things quickly when I want them. In 1990, a friend’s suggestion of categorizing by topic seemed logical: copyright, privacy, security, online community, robots, digital rights, policy… The categories quickly broke down and cross-pollinated. In rebuilding the library, what to replace them with?

The exercise, which led to alphabetizing by author’s name within decade of publication, revealed that each of the last few decades has been distinctive enough that it’s remarkably easy to correctly identify a book’s decade without turning to the copyright page to check. The 1980s and 1990s were about exploration and explanation. Hype led us into the 2000s, which were quieter in publishing terms, though marked by bursts of business books that spanned the dot-com boom, bust, and renewal. The 2010s brought social media, content moderation, and big data, and a new set of technologies to hype, such as 3D printing and nanotechnology (about which we hear nothing now). The 2020s, it’s too soon to tell…but safe to say disinformation, AI, and robots are dominating these early years.

The 2020s books to date are trying to understand how to rein in the worst effects of Big Tech: online abuse, cryptocurrency fraud, disinformation, the loss of control as even physical devices turn into manufacturer-controlled subscription services, and, as predicted in 2018 by Christian Wolmar, the ongoing failure of autonomous vehicles to take over the world as projected just ten years ago.

While Teslas are not autonomous, the company’s Silicon Valley ethos has always made them seem more like information technology than cars. Bad idea, as Reuters reports; its investigation found a persistent pattern of mishaps such as part failures and wheels falling off – and an equally persistent pattern of the company blaming the customer, even when the car was brand new. If we don’t want shoddy goods and data invasion with everything to be our future, fighting back is essential. In 2032, I hope looking back shows that story.

The good news going into 2024 is, as the Center for the Public Domain at Duke University, the Public Domain Review, and Cory Doctorow write, the bumper crop of works entering the public domain: sound recordings (for the first time in 40 years), DH Lawrence’s Lady Chatterley’s Lover, Agatha Christie’s The Mystery of the Blue Train, Ben Hecht and Charles MacArthur’s play The Front Page, and the first Mickey Mouse cartoons. Happy new year.

Illustrations: Promotional still from the 1928 production of The Front Page, which enters the public domain on January 1, 2024 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.