The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
By Emily M. Bender and Alex Hanna
HarperCollins
ISBN: 978-0-06-341856-1
Enormous sums of money are sloshing around AI development. Amazon is handing $8 billion to Anthropic. Microsoft is adding $1 billion worth of Azure cloud computing to its existing massive stake in OpenAI. And Nvidia is pouring $100 billion in the form of chips into OpenAI’s project to build a gigantic data center, while Oracle is borrowing $100 billion in order to give OpenAI $300 billion worth of cloud computing. Current market *revenue* projections? $85 billion in 2029. So they’re all fighting for control over the Next Big Thing, which projections suggest will never pay off. Warnings that the AI bubble may be about to splatter us all are coming from Cory Doctorow and Ed Zitron – and from the Daily Telegraph, The Atlantic, and the Wall Street Journal. Bain & Company estimates the industry will need $2 trillion in annual revenue by 2030 to meet demand – $800 billion more than it is on track to generate.
Many talk about the bubble and the economic consequences if it bursts. Few talk about the opportunity costs as AI sucks money and resources away from other things that might be more valuable. In The AI Con, linguistics professor Emily Bender and DAIR Institute director of research Alex Hanna provide an exception. Bender is one of the four authors of the seminal 2021 paper On the Dangers of Stochastic Parrots, which arguably founded modern AI skepticism.
In the book, the authors review much that’s familiar: the many layers of humans required to code, train, correct, and mind “AI” – the programmers, designers, data labelers, and raters, along with the humans waiting to take over when the AI fails. They also go into the water, energy, and labor demands of data centers, and survey the current approaches to AI.
Crucially, they avoid both doomerism and boosterism, which they see as two sides of the same coin. Both the fully automated hellscape the Doomers warn against and the Boosters’ world governed by a benign synthetic intelligence ignore the very real harms taking place at present. Doomers promote “AI safety” using “fake scenarios” meant to frighten us: think HAL in the movie 2001: A Space Odyssey or Nick Bostrom’s paperclip maximizer. Boosters rail against the constraints implicit in sustainability, in trust and safety organizations within technology companies, and in government regulation. We need, Bender and Hanna write, to move away from speculative risks and toward working on the real problems we have. Hype, they conclude, doesn’t have to be true to do harm.
The book ends with a chapter on how to resist hype. Among their strategies: persistently ask questions, such as how a system is evaluated, who is harmed and who benefits, and how the system was developed, with what kind of data and labor practices. Avoid language that humanizes the system – no “hallucinations” for errors. Advocate for transparency and accountability, and resist the industry’s claims that the technology is so new there is no way to regulate it. The technology may be new, but the principles are old. And, when necessary, just say no, and resist the narrative that AI’s progress is inevitable.