Review: The Oracle

The Oracle
by Ari Juels
Talos Press
ISBN: 978-1-945863-85-1
Ebook ISBN: 978-1-945863-86-8

In 1994, a physicist named Timothy C. May posited the idea of an anonymous information market he called BlackNet. With anonymity secured by cryptography, participants could trade government secrets. And, as he wrote in 1988’s Crypto Anarchist Manifesto, “An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion.” In May’s time, the big missing piece needed to enable such a market was a payment system. Then, in 2008, came Bitcoin and the blockchain.

In 2015, Ari Juels, now the Weill Family Foundation and Joan and Sanford I. Weill Professor at Cornell Tech but previously chief scientist at the cryptography company RSA, saw BlackNet potential in Ethereum’s adoption of “smart contracts”, an idea that had been floating around since the 1990s. Smart contracts are computer programs that automatically execute transactions when specified conditions are met, without the need for a trusted intermediary to provide guarantees. Among other possibilities, they can run on blockchains – the public, tamperproof, shared ledgers that record cryptocurrency transactions.
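For readers new to the idea, a toy sketch may help. This is not how real smart contracts are written – on Ethereum they are typically coded in Solidity and executed by the blockchain itself – but the following Python fragment (all names invented for illustration) captures the core notion: funds are locked up, and the code itself, not a bank or an escrow agent, decides when they move.

    # Toy illustration of the smart-contract idea: code that holds funds
    # and releases them automatically when a stated condition is met,
    # with no trusted intermediary. Real contracts run on a blockchain;
    # this Python sketch only mimics the control flow.

    class ToySmartContract:
        def __init__(self, payer: str, payee: str, amount: int, condition):
            self.payer = payer
            self.payee = payee
            self.amount = amount
            self.condition = condition  # a function the contract can evaluate
            self.settled = False

        def try_settle(self) -> str:
            """Pay out if and only if the condition holds; no one can intervene."""
            if self.settled:
                return "already settled"
            if self.condition():
                self.settled = True
                return f"{self.amount} paid from {self.payer} to {self.payee}"
            return "condition not met; funds stay locked"

    # Usage: an escrow that releases once a delivery flag is set.
    delivered = False
    contract = ToySmartContract("alice", "bob", 100, lambda: delivered)
    print(contract.try_settle())  # condition not met; funds stay locked
    delivered = True
    print(contract.try_settle())  # 100 paid from alice to bob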

In the resulting research paper on criminal smart contracts (PDF), Juels and co-authors Ahmed Kosba and Elaine Shi wrote: “We show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).”

It’s not often a research paper becomes the basis for a techno-thriller novel, but Juels has prior form. His 2009 novel Tetraktys imagined that members of an ancient Pythagorean cult had figured out how to factor the products of large prime numbers, thereby busting the widely used public-key cryptography on which security on the Internet depends. Juels’ hero in that book was uniquely suited to help the NSA track down the miscreants because he was both a cryptographer and the well-schooled son of an expert on the classical world. Juels could almost be describing himself: before turning to cryptography he studied classical literature at Amherst and Oxford.

Juels’ new book, The Oracle, has much in common with his earlier work. His alter ego here is a cryptographer working on blockchains and smart contracts. Links to the classical world – in this case, a cult derived from the oracle at Delphi – are provided by an FBI agent and art crime investigator, who enlists his help when a rogue smart contract is discovered offering $10,000 to kill an archeology professor, soon followed by a second contract offering $700,000 for a list of seven targets. Our protagonist then discovers he’s first on that list, and he has only a few days to figure out who wrote the code and save his own life. That quest also includes helping the FBI agent track down some Delphian artifacts that, we learn from flashbacks to classical times, were removed from the oracle’s temple and hidden.

The Delphi oracle, Juels writes, “revealed divine truth in response to human questions”. The oracles his cryptographer is working on are “a source of truth for questions asked by smart contracts about the real world”. In Juels’ imagining, the rogue assassination contract is issued with trigger words that could be expected to appear in a death announcement. When someone tries to claim the bounty, the smart contract checks news sources for those words, paying out only if it finds them. Juels has worked hard to make the details of both the classical and the cryptographic worlds comprehensible. They remain stubbornly complex, but you can follow the story easily enough even if you panic at the thought of math.
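The oracle pattern itself can be sketched in the same toy style – though, deliberately, with a benign example (flight-delay insurance) rather than anything resembling the novel’s bounty contract. Everything here is hypothetical Python standing in for on-chain code; the point is simply that a smart contract cannot observe the world directly, so it pays out only on what a trusted data feed attests.

    # A deliberately benign sketch of the oracle pattern (all names
    # hypothetical): a smart contract cannot see the outside world, so it
    # consults an oracle -- a trusted feed of real-world facts -- before
    # paying a claim. Here the contract is flight-delay insurance.

    def flight_status_oracle(flight: str) -> dict:
        """Stands in for an oracle service reporting real-world data."""
        # In practice this would return attested data from an oracle
        # network, not a hard-coded value.
        return {"flight": flight, "delay_minutes": 190}

    def settle_insurance_claim(flight: str, threshold_minutes: int, payout: int) -> int:
        """Pay out only if the oracle confirms the triggering event."""
        report = flight_status_oracle(flight)
        if report["delay_minutes"] >= threshold_minutes:
            return payout   # condition attested by the oracle: pay
        return 0            # otherwise the contract keeps the funds

    print(settle_insurance_claim("XY123", threshold_minutes=180, payout=500))  # 500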

The tension is real, both within and without the novel. Juels’ idea is credible enough that it’s a relief when he says the contracts as described are not feasible with today’s technology, and may never become so (perhaps especially because the fictional criminal smart contract is written in flawless computer code). In the related paper, the authors also note that some details of their scheme have been left out so as not to enable others to create these rogue contracts for real. Whew. For now.

Unclear and unpresent dangers

Monthly computer magazines used to fret that their news pages would be out of date by the time the new issue reached readers. This week in AI, a blog posting is out of date before you hit send.

This – Friday – morning, the Italian data protection authority, Il Garante, has ordered ChatGPT to stop processing the data of Italian users until it complies with the General Data Protection Regulation. Il Garante’s objections, per Apple’s translation, posted by Ian Brown: ChatGPT provides no legal basis for collecting and processing the massive store of personal data used to train the model, and it fails to filter out users under 13.

This may be the best possible answer to the complaint I’d been writing below.

On Wednesday, the Future of Life Institute published an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s current state of the art, GPT-4. Apart from Elon Musk, Steve Wozniak, and Skype co-founder Jaan Tallinn, most of the signatories are unfamiliar names to most of us, though the companies and institutions they represent aren’t – Pinterest, the MIT Center for Artificial Intelligence, UC Santa Cruz, Ripple, ABN-Amro Bank. Almost immediately, there was a dispute over the validity of the signatures.

My first reaction was on the order of: huh? The signatories are largely people who are inventing this stuff. They don’t have to issue a call. They can just *stop*, work to constrain the negative impacts of the services they provide, and lead by example. Or isn’t that sufficiently performative?

A second reaction: what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, these teams have been axed or cut at Microsoft and Twitch; Twitter, of course, ditched such fripperies last November in Musk’s inaugural wave of cost-cutting. The letter does not call for these teams to be reinstated.

The problem, as familiar critics such as Emily Bender pointed out almost immediately, is that the threats the letter focuses on are distant not-even-thunder. As she went on to say in a Twitter thread, the artificial general intelligence of the Singularitarians’ rapture is nowhere in sight. By focusing on distant threats – longtermism – we ignore the real and present problems whose roots are being embedded ever more deeply into the infrastructure now being built: exploited workers, culturally appropriated data, lack of transparency around the models and algorithms used to build these systems… basically, all the ways they impinge upon human rights.

This isn’t the first time such a letter has been written and circulated. In 2015, Stephen Hawking, Musk, and about 150 others similarly warned of the dangers of the rise of “superintelligences”. Just a year later, in 2016, ProPublica investigated the algorithm behind COMPAS, a risk-scoring criminal justice system in use in courts in several US states. Under Julia Angwin’s scrutiny, the algorithm failed at both accuracy and fairness; it was heavily racially biased. *That*, not some distant fantasy, was the real threat to society.
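The kind of check ProPublica ran is simple to state, even if its implications are not. The sketch below (toy Python; the records are invented for illustration, not COMPAS data) compares false positive rates across groups – the rate at which people who did not reoffend were nonetheless flagged high-risk – which is where ProPublica found the glaring disparity.

    # Sketch of a disparate-impact check in the spirit of ProPublica's
    # COMPAS analysis: compare false positive rates -- people flagged
    # high-risk who did NOT reoffend -- across groups. The records below
    # are invented toy data, not the real COMPAS dataset.

    def false_positive_rate(records):
        """Share of non-reoffenders wrongly flagged high-risk."""
        negatives = [r for r in records if not r["reoffended"]]
        flagged = [r for r in negatives if r["flagged_high_risk"]]
        return len(flagged) / len(negatives) if negatives else 0.0

    records = [
        {"group": "A", "flagged_high_risk": True,  "reoffended": False},
        {"group": "A", "flagged_high_risk": True,  "reoffended": False},
        {"group": "A", "flagged_high_risk": False, "reoffended": False},
        {"group": "B", "flagged_high_risk": True,  "reoffended": False},
        {"group": "B", "flagged_high_risk": False, "reoffended": False},
        {"group": "B", "flagged_high_risk": False, "reoffended": False},
    ]

    for group in ("A", "B"):
        subset = [r for r in records if r["group"] == group]
        print(group, round(false_positive_rate(subset), 2))
    # A 0.67 vs. B 0.33: a gap like this, at scale, is the disparity
    # ProPublica reported.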

“Threat” is the key issue here. This is, at heart, a letter about a security issue, and solutions to security issues are – or should be – responses to threat models. What is *this* threat model, and what level of resources to counter it does it justify?

Today, I’m far more worried by the release onto public roads of Teslas running Full Self-Driving helmed by drivers with an inflated sense of the technology’s reliability than I am about all of human work being wiped away any time soon. This matters because, as Jessie Singer, author of There Are No Accidents, keeps reminding us, what we call “accidents” are the results of policy decisions. If we ignore the problems we are presently building in favor of fretting about a projected fantasy future, that, too, is a policy decision, and the collateral damage is not an accident. Can’t we do both? I imagine people saying. Yes. But only if we *do* both.

In a talk this week for a French international research group, the legal scholar Lilian Edwards discussed the EU’s draft AI Act. This effort began well before today’s generative tools exploded into public consciousness, and isn’t likely to conclude before 2024. It is, therefore, much more focused on the kinds of risks attached to public sector scandals like COMPAS and those documented in Cathy O’Neil’s 2016 book Weapons of Math Destruction, which laid bare the problems of algorithmic scoring systems with little to tether them to reality.

With or without a moratorium, what will “AI” look like in 2024? It has changed beyond recognition just since the last draft text was published. Prediction from this biological supremacist: it still won’t be sentient.

All this said, as Edwards noted, even if the letter’s proposal is self-serving, a moratorium on development is not necessarily a bad idea. It’s just that if the risk is long-term and existential, what will six months do? If the real risk is the hidden continued centralization of data and power, then those six months could be genuinely destructive. So far, it seems like its major function is as a distraction. Resist.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011. It has since failed to transform health care.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.