RIP Ross J. Anderson

RIP Ross J. Anderson, who died on March 28 at the age of 67, leaving behind a smoking giant crater in the fields of security engineering and digital rights activism. For the former, he was a professor of security engineering at Cambridge University and Edinburgh University, a Fellow of the Royal Society, and a recipient of the BCS Lovelace medal. His giant textbook Security Engineering is a classic. In digital rights activism, he founded the Foundation for Information Policy Research (see also tenth anniversary and 15th anniversary) and the UK Crypto mailing list, and, understanding that the important technology laws were being made at the EU level, he pushed for the formation of European Digital Rights to act as an umbrella organization for the national digital rights organizations springing up in many countries. He was also one of the pioneers of security economics, and founded the annual Workshop on the Economics of Information Security, which convenes on April 8 for the 23rd time.

One reason Anderson was so effective in the area of digital rights is that he had the ability to look forward and see the next challenge while it was still forming. Even more important, he had an extraordinary ability to explain complex concepts in an understandable manner. You can experience this for yourself at the YouTube channel where he posted a series of lectures on security engineering or by reading any of the massive list of papers available at his home page.

He had a passionate and deep-seated sense of injustice. In the 1980s and 1990s, when customers complained about phantom ATM withdrawals and the banks tried to claim their software was infallible, he not only conducted a detailed study but adopted fraud in payment systems as an ongoing research interest.

He was a crucial figure in the fight over encryption policy, opposing key escrow in the 1990s and “lawful access” in the 2020s, for the same reasons: the laws of mathematics say that there is no such thing as a hole only good guys can exploit. His name is on many key research papers in this area.

In the days since his death, numerous former students and activists have come forward with stories of his generosity and wit, his eternal curiosity to learn new things, and the breadth and depth of his knowledge. And also: the forthright manner that made him cantankerous.

I think I first encountered Ross at the 1990s Scrambling for Safety events organized by Privacy International. He was slow to trust journalists, shall we say, and it was ten years before I felt he’d accepted me. The turning point was a conference where we both arrived at the lunch counter at the same time. I happened to be out of cash, and he bought me a sandwich.

Privately, in those early days I sometimes referred to him as the “mad cryptographer” because interviews with him often led to what seemed like off-the-wall digressions. One such digression produced an improbable story about the US Embassy in Moscow being targeted with microwaves in an attempt at espionage. This, I found later, was true. Still, I felt it best not to quote it in the interests of getting people to listen to what he was saying about crypto policy.

My favorite Ross memory, though, is this: one night shortly before Christmas maybe ten years ago – by this time we were friends – when I interviewed him over Skype for yet another piece. It was late in Britain, and I’m not sure he was fully sober. Before he would talk about security, knowing of my interest in folk music, he insisted on playing several tunes on the Scottish chamber pipes. He played well. Pipe music was another of his consuming interests, and he brought to it as much intensity and scholarship as he did to all his other passions.

Ross J. Anderson, b. September 15, 1956, d. March 28, 2024.

Alabama never got the bomb

There is this to be said for nuclear weapons: they haven’t scaled. Since 1965, when Tom Lehrer warned about proliferation (“We’ll try to stay serene and calm | When Alabama gets the bomb”), a world of treaties, regulation, and deterrents has helped, but even if it hadn’t, building and updating nuclear weapons remains stubbornly expensive. (That said, the current situation is scary enough.)

The same will not be true of drones, James Patton Rogers explained in a recent talk at King’s College London about his new book, Precision: A History of American Warfare. Already, he says, drones are within reach for non-governmental actors such as Mexican drug cartels. At the BBC, Jonathan Marcus estimated in February 2022 that more than 100 nations and non-state actors already have combat drones and that these systems are proliferating rapidly. The brief moment in which the US and Israel had an exclusive edge is already gone; Rogers says Iran and Turkey are “drone powers”. Back to the BBC in 2022: Marcus writes that some terrorist groups had already been able to build attack drone systems using commercial components for a few hundred dollars. Rogers put the number of countries with drone capability in 2023 at 113, plus 65 armed groups. He also called them one of the “greatest threats to state security”, noting the speed and abruptness with which they’ve flipped from protective to threatening, and their potential for “assassinations, strikes, saturation attacks”.

Rogers, who calls his book an “intellectual history”, traces the beginnings of precision to the end of the long, muddy, casualty-filled conflict of World War I. Never again: instead, remote attacks on military-industrial targets that limit troops on the ground and loss of life. The arrival of the atomic bomb and Russia’s development of same changed focus to the Dr Strangelove-style desire for the technology to mount massive retaliation. John F. Kennedy successfully campaigned on the missile gap. (In this part of Rogers’ presentation, it was impossible not to imagine how effective this amount of energy could have been if directed toward climate change…)

The 1990s and the Gulf War brought a revival of precision in the form of the first cruise missiles and the first drones. But as long ago as 1988 there were warnings that the US could not monopolize drones and they would become a threat. “We need an international accord to control drone proliferation,” Rogers said.

But the threat to state security was not Rogers’ answer when an audience member asked him, “What keeps you awake at night?”

“Drone mass killings targeting ethnic diasporas in cities.”

Authoritarian governments have long reached out to control opposition outside their borders. In 1974, I rented an apartment from the Greek owner of a local highly-regarded restaurant. A day later, a friend reacted in horror: didn’t I know that restaurateur was persona-non-patronize because he had reported Greek student protesters in Ithaca, New York to the military junta then in power and there had been consequences for their families back home? No, I did not.

My landlord’s powers as an informant were limited, however. He could attend and photograph protests; if he couldn’t identify the students, he could still send their pictures. But he couldn’t amass comprehensive location data tracking their daily lives, operate a facial recognition system, or monitor them on social media and infer their social graphs. A modern authoritarian government equipped with Internet connections can do all of that and more, and the data it can’t gather itself it can obtain by purchase, contract, theft, hacking, or compulsion.

In Canada, opponents of Chinese Communist Party policies report harassment and intimidation. Freedom House reports that China’s transnational repression also includes spyware, digital threats, physical assault, and cooption of other countries, all escalating since 2014. There’s no reason for this sort of thing to be limited to the Chinese (and Russians); Citizen Lab has myriad examples of governments’ use of spyware to target journalists, political opponents, and activists, inside or outside the countries where they’re active.

Today, even in democratic countries there is an ongoing trend toward increased and more militaristic surveillance of migrants and borders. In 2021, Statewatch reported on the militarization of the EU’s borders along the Mediterranean, including a collaboration between Airbus and two Israeli companies to use drones to intercept migrant vessels. Another workshop that same year made plain the way migrants are being dataveilled by both governments and the aid agencies they rely on for help. In 2022, the courts ordered the UK government to stop seizing the smartphones belonging to migrants arriving in small boats.

Most people remain unaware of this unless some politician boasts about it as part of a tough-on-immigration platform. In general, rights for any kind of foreigners – immigrants, ethnic minorities – are a hard sell, if only because non-citizens have no vote, and an even harder one against the headwind of “they are not us” rhetoric. Threats of the kind Rogers imagined are not the sort nations are in the habit of protecting against.

It isn’t much of a stretch to imagine all those invasive technologies being harnessed to build a detailed map of particular communities. From there, given affordable drones, you just need to develop enough malevolence to want to kill them off, and be the sort of country that doesn’t care if the rest of the world despises you for it.

Illustrations: British migrants to Australia in 1949 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Facts are scarified

The recent doctored Palace photo has done almost as much as the arrival of generative AI to raise fears that in future we will completely lose the ability to identify fakes. The royal photo was sloppily composited – no AI needed – for reasons unknown (though Private Eye has a suggestion). A lot of conspiracy theorizing could be avoided if the palace would release the untouched original(s), but as things are, the photograph is a perfect example of how to provide the fuel for spreading nonsense to 400 million people.

The most interesting thing about the incident was discovering the rules media apply to retouching photos. AP specified, for example, that it does not use altered or digitally manipulated images. It allows cropping and minor adjustments to color and tone where necessary, but bans more substantial changes, even retouching to remove red eye. As Holly Hunter’s character says, trying to uphold standards in the 1987 movie Broadcast News (written by James Brooks), “We are not here to stage the news.”

The desire to make a family photo as appealing as possible is understandable; the motives behind spraying the world with misinformation are less clear and more varied. For this reason, I’ve long argued here that combating misinformation and disinformation is similar to cybersecurity: both problems are complex, and the actors and agendas diverse. At last year’s Disinformation Summit in Cambridge, cybersecurity was, sadly, one of the missing communities.

Just a couple of weeks ago the BBC announced its adoption of C2PA, a standard for authenticating images developed by a group of technology and media companies including the BBC, the New York Times, Microsoft, and Adobe. The BBC says that many media organizations are beginning to adopt C2PA, and even Meta is considering it. Edits must be signed, creating a chain of provenance all the way back to the original photo. In 2022, the BBC and the Royal Society co-hosted a workshop on digital provenance, following a Royal Society report, at which C2PA featured prominently.

That’s potentially a valuable approach for publishing and broadcast, where the conduit to the public is controlled by one of a relatively small number of organizations. And you can see why those organizations would want it: they need, and in many cases are struggling to retain, public trust. It is, however, too complex a process for the hundreds of millions of people with smartphone cameras posting images to social media, and unworkable for citizen journalists capturing newsworthy events in real time. Ancillary issue: sophisticated phone cameras try so hard to normalize the shots we take that they falsify the image at source. In 2020, Californians attempting to capture the orange color of their smoke-filled sky were defeated by autocorrection that turned it grey. So, many images are *originally* false.

In a lengthy blog post, Neal Krawetz analyzes the difficulties with C2PA. He lists security flaws, but he also opposes the “appeal to authority” approach, which he dubs a “logical fallacy”. In the context of the Internet, it’s worse than that; we already know what happens when a tiny handful of commercial companies (in this case, chiefly Adobe) become the gatekeepers for billions of people.

All of this was why I was glad to hear about work in progress presented at a workshop last week by Mansoor Ahmed-Rengers, a PhD candidate studying system security: the Human-Oriented Proof Standard (HOPrS). The basic idea is to build an “Internet-wide, decentralised, creator-centric and scalable standard that allows creators to prove the veracity of their content and allows viewers to verify this with a simple ‘tick’.” Co-sponsoring the workshop was Open Origins, a project to distinguish between synthetic and human-created content.

It’s no accident that HOPrS’ mission statement echoes the ethos of the original Internet; as security researcher Jon Crowcroft explains, it’s part of long-running work on redecentralization. Among HOPrS’ goals, Ahmed-Rengers listed: minimal centralization; the ability for anyone to prove their content; Internet-wide scalability; open decision making; minimal disruption to workflow; and easy interpretability of proof/provenance. The project isn’t trying to cover all bases – that’s impossible. Given the variety of motivations for fakery, there will have to be a large ecosystem of approaches. Rather, HOPrS is focusing specifically on the threat model of an adversary determined to sow disinformation, giving journalists and citizens the tools they need to understand what they’re seeing.

Fakes are as old as humanity. In a brief digression, we were reminded that the early days of photography were full of fakery: the Cottingley Fairies, the Loch Ness monster, many dozens of spirit photographs. The Cottingley Fairies, cardboard cutouts photographed by Elsie Wright, 16, and Frances Griffiths, 9, were accepted as genuine by Sherlock Holmes creator Sir Arthur Conan Doyle, famously a believer in spiritualism. To today’s eyes, trained on millions of photographs, they instantly read as fake. Or take Ireland’s Knock apparitions: flat, unmoving, and, philosophy professor David Berman explained in 1979, magic lantern projections. Our generation, who’ve grown up with movies and TV, would I think have instantly recognized those as fake, too. Which I believe tells us something: yes, we need tools, but we ourselves will get better at detecting fakery, as unlikely as it seems right now. The speed with which the royal photo was dissected showed how much we’ve learned just since generative AI became available.

Illustrations: The first of the Cottingley Fairies photographs (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Borderlines

Think back to the year 2000. New York’s World Trade Center still stood. Personal digital assistants were a niche market. There were no smartphones (the iPhone arrived in 2007) or tablets (the iPad took until 2010). Social media was nascent; Facebook first opened in 2004. The Good Friday agreement was just two years old, and for many in Britain “terrorists” were still “Irish”. *That* was when the UK passed the Terrorism Act (2000).

Usually when someone says the law can’t keep up with technological change they mean that technology can preempt regulation at speed. What the documentary Phantom Parrot shows, however, is that technological change can profoundly alter the consequences of laws already on the books. The film’s worked example is Schedule 7 of the 2000 Terrorism Act, which empowers police to stop, question, search, and detain people passing through the UK’s borders. They do not need prior authority or suspicion, but may only stop and question people for the purpose of determining whether the individual may be or have been concerned in the commission, preparation, or instigation of acts of terrorism.

Today this law means that anyone arriving at the UK border may be compelled to unlock access to data charting their entire lives. The Hansard record of the debate on the bill shows clearly that lawmakers foresaw problems: the classification of protesters as terrorists, the uselessness of fighting terrorism by imprisoning the innocent (Jeremy Corbyn), the reversal of the presumption of innocence. But they could not foresee how far-reaching the powers the bill granted would become.

The film’s framing story begins in November 2016, when Muhammed Rabbani arrived at London’s Heathrow Airport from Doha and was stopped and questioned by police under Schedule 7. They took his phone and laptop and asked for his passwords. He refused to supply them. On previous occasions, when he had similarly refused, they’d let him go. This time, he was arrested. Under Schedule 7, the penalty for such a refusal can be up to three months in jail.

Rabbani is managing director of CAGE International, a human rights organization that began by focusing on prisoners seized under the war on terror and expanded its mission to cover “confronting other rule of law abuses taking place under UK counter-terrorism strategy”. Rabbani’s refusal to disclose his passwords was, he said later, because he was carrying 30,000 confidential documents relating to a client’s case. A lawyer can claim client confidentiality, but an NGO cannot. In 2018, the appeals court ruled the password demands were lawful.

In September 2017, Rabbani was convicted. He was given a 12-month conditional discharge and ordered to pay £620 in costs. As Rabbani says in the film, “The law made me a terrorist.” No one suspected him of being a terrorist or of placing anyone in danger, but the judge made clear she had no choice under the law, and so he was nonetheless convicted of a terrorism offense. On appeal in 2018, his conviction was upheld. We see him collect his returned devices – five years on from his original detention.

Britain is not the only country that regards him with suspicion. Citing his conviction, in 2023 France banned him, and, he claims, Poland deported him.

Unsurprisingly, CAGE is on the first list of groups that may be dubbed “extremist” under the new definition of extremism released last week by communities secretary Michael Gove. The direct consequence of this designation is a ban on participation in public life – chiefly, meetings with central and local government. The expansion of the meaning of “extremist”, however, is alarming activists on all sides.

Director Kate Stonehill tells the story of Rabbani’s detention partly through interviews and partly through a reenactment using wireframe-style graphics and a synthesized voice that reads out questions and answers from the interview transcripts. A cello of doom provides background ominousness. Laced through this narrative are others. A retired law enforcement officer teaches a class on using extraction and analysis tools, in which we see how extensive the information available to them really is. Ali Al-Marri and his lawyer review his six years of solitary detention as an enemy combatant in Charleston, South Carolina. Lastly, Stonehill calls on Ryan Gallagher’s reporting, which exposed the titular Phantom Parrot, the program to exploit the data retained under Schedule 7. There are no records of how many downloads have been taken.

The retired law enforcement officer’s class is practically satire. While saying that he himself doesn’t want to be tracked for safety reasons, he tells students to grab all the data they can when they have the opportunity. They are in Texas: “Consent’s not even a problem.” Start thinking outside of the box, he tells them.

What the film does not stress is this: rights are largely suspended at all borders. In 2022, the UK extended Schedule 7 powers to include migrants and refugees arriving in boats.

The movie’s view of the future is bleak. At the Chaos Computer Congress, a speaker warns that gait recognition, eye movement detection, speech analysis (accents, emotion), and other types of analysis will be much harder to escape and will enable watchers to do far more with the ever-vaster stores of data collected from and about each of us.

“These powers are capable of being misused,” said Douglas Hogg in the 1999 Commons debate. “Most powers that are capable of being misused will be misused.” The bill passed 210-1.

Illustrations: Still shot from the wireframe reenactment of Rabbani’s questioning in Phantom Parrot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Bill Gates Problem

The Bill Gates Problem: Reckoning with the Myth of the Good Billionaire
By Tim Schwab
Metropolitan Books
ISBN: 978-1-250-85009-6

Thirty years ago, the Federal Trade Commission began investigating one of the world’s largest technology companies on antitrust grounds. Was it leveraging its monopoly in one area to build dominance in others? Did it bully smaller competitors into disclosing their secrets, which it then copied? And so on. That company was Microsoft, whose Windows monopoly gave it leverage over office productivity software, web browsers, and media players, and whose leader was Bill Gates. In 1999, the courts ruled Microsoft a monopoly.

At the time, it was relatively commonplace for people to complain that Gates was insufficiently charitable. Why wasn’t he more philanthropic, given his vast and increasing wealth? (Our standards for billionaire wealth were lower back then.) Be careful what you wish for…

The transition from monopolist mogul to beneficent social entrepreneur is where Tim Schwab starts in The Bill Gates Problem: Reckoning with the Myth of the Good Billionaire. In Schwab’s view, the reason for it is well-executed PR, in which category he includes the many donations the foundation makes to journalism organizations.

I have heard complaints for years that the Bill and Melinda Gates Foundation’s approach to philanthropy favors expensive technological interventions over cheaper, well-established ones. In education that might mean laptops and edtech software rather than training teachers; in medicine, vaccine research rather than clean water. Schwab’s investigative work turns up dozens of such stories in the areas BMGF works in: family planning, education, health. Yet, Schwab writes, citing numerous sources for his figures, for all the billions BMGF has poured into these areas, it has failed to meet its stated objectives.

You can argue that case, but Schwab moves on from there to examine the damaging effects of depending on a billionaire, no matter how smart and well-intentioned, to finance services that might more properly be the business of the state. No one elected Gates, and no one has voted on the priorities he has chosen to set. The covid pandemic provides a particularly good example. One of the biggest concerns as efforts to produce vaccines got underway was ensuring that access would not be limited to rich countries. Many believed that the most efficient way of doing this was to refrain from patenting the vaccines, and help poorer countries build their own production facilities. Gates was one of those who opposed this approach, arguing that patents were necessary to reward pharmaceutical companies for the investment they poured into research, and also that few countries had the expertise to make the vaccines. Gates gave in to pressure and reversed his position in May 2021 to support a “narrow waiver”. Reading that BMGF is the biggest funder of the WHO and remembering his preference for technological interventions made me wonder: how much do we have Gates to thank for the emphasis on vaccines and the reluctance to push cheaper non-pharmaceutical interventions like masks, HEPA filters, and ventilation in countries like the UK?

Schwab goes into plenty of detail about all this. But his wider point is to lay out the power Gates’s massive wealth – both the foundation’s and his own – gives him over the charitable sector and, through public-private partnerships, many of the nations in which he operates. Schwab also calls Gates’s approach “philanthropic colonialism” because the bulk of his donations go to organizations based in the West, rather than directly to their counterparts elsewhere.

Pointing out the amount of taxpayer subsidy the foundation gets through the tax exemptions charities get, Schwab asks if we’re really getting value for our money. Wouldn’t we be better off collecting taxes and setting our own agendas? Is there really any such thing as a “good” billionaire?

Competitive instincts

This week – Wednesday, March 6 – saw the EU’s Digital Markets Act come into force. As The Verge reminds us, the law is intended to give users more choice and control by forcing technology’s six biggest “gatekeepers” to embrace interoperability and avoid preferencing their own offerings across 22 specified services. The six: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. Alphabet’s covered list is the longest: advertising, app store, search engine, maps, and shopping, plus Android, Chrome, and YouTube. For Apple, it’s the app store, operating system, and web browser. Meta’s list includes Facebook, WhatsApp, and Instagram, plus Messenger, Ads, and Facebook Marketplace. Amazon: third-party marketplace and advertising business. Microsoft: Windows and internal features. ByteDance just has TikTok.

The point is to enable greater competition by making it easier for us to pick a different web browser, uninstall unwanted features (like Cortana), or refuse the collection and use of data to target us with personalized ads. Some companies are haggling. Meta, for example, is trying to get Messenger and Marketplace off the list, while Apple has managed to get iMessage removed. More notably, though, the changes Apple is making to support third-party app stores have been widely criticized as undermining any hope of success for independents.

Americans visiting Europe are routinely astonished at the number of cookie consent banners that pop up as they browse the web. Comments on Mastodon this week have reminded us that these banners were the companies’ churlish choice: they implemented the 2009 Cookie Directive and 2018 General Data Protection Regulation in deliberately user-hostile ways. It remains to be seen how grown-up the technology companies will be in this new round of legal constraints. Punishing users won’t get the EU law changed.

***

The last couple of weeks have seen a few significant outages among Internet services. Two weeks ago, AT&T’s wireless service went down for many hours across the US after a failed software update. On Tuesday, while millions of Americans were voting in the presidential primaries, it was Meta’s turn: a “technical issue” took out both Facebook and Instagram (and with the latter, Threads) for a couple of hours. Concurrently but separately, users of Ad Manager had trouble logging in at Google, and users of Microsoft Teams and exTwitter also reported problems. Ironically, Meta’s outage could have been fixed faster if the engineers working on it hadn’t had trouble gaining remote access to the servers they needed – and couldn’t get into the physical building either, because their passes didn’t work.

Outages like these should serve as reminders not to put all your login eggs in one virtual container. If you use Facebook to log into other sites, besides the visibility you’re giving Meta into your activities elsewhere, those sites will be inaccessible any time Facebook goes down. In the case of AT&T, one reason this outage was so disturbing – the FCC is formally investigating it – is that the company has applied to get rid of its landlines in California. While lots of people no longer have landlines, they’re important in rural areas where cell service can be spotty; some services, such as home alarm systems and other equipment, depend on them; and they function in emergencies when electric power fails.

But they should also remind us that the infrastructure we’re deprecating in favor of “modern” Internet stuff was more robust than the new systems we’re increasingly relying on. A home with smart devices that cannot function without an uninterrupted Internet connection is far more fragile, with more points of failure, than one without them – just as a paper map still works when your phone is dead. At The Verge, Jennifer Pattison Tuohy tests a bunch of smart kitchen appliances, including a faucet you can operate via Alexa or Google voice assistants. As with digital microwave ovens, telling the faucet the exact temperature and flow rate you want seems unnecessarily detailed. “Connect with your water like never before,” the faucet manufacturer’s website says. Given the direction of travel of many companies today, I don’t want new points of failure between me and water.

***

It has – already! – been three years since Australia’s News Media Bargaining Code led to Facebook and Google signing three-year deals that have primarily benefited Rupert Murdoch’s News Corporation, owner of most of Australia’s press. A week ago, Meta announced it will not renew the agreement. At The Conversation, Rod Sims, who chaired the commission that formulated the law, argues it’s time to force Meta into the arbitration the code created. At ABC Science, however, James Purtill counters that the often “toxic” relationship between Facebook and news publishers means that forcing the issue won’t solve the core problem of how to pay for news, since advertising has found elsewheres it would rather be. (Separately, in Europe, 32 media organizations covering 17 countries have filed a €2.1 billion lawsuit against Google, matching a similar one filed last year in the UK, alleging that the company abused its dominant position to deprive them of advertising revenue.)

Purtill predicts, I think correctly, that attempting to force Meta to pay up will instead lead Facebook to ban news, as it did in Canada following the passage of a similar law. Facebook needed news once; it doesn’t now. But societies do. Suddenly, I’m glad to pay the BBC’s license fee.

Illustrations: Red deer (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Oracle

The Oracle
by Ari Juels
Talos Press
ISBN: 978-1-945863-85-1
Ebook ISBN: 978-1-945863-86-8

In 1994, a physicist named Timothy C. May posited the idea of an anonymous information market he called blacknet. With anonymity secured by cryptography, participants could trade government secrets. And, he wrote in 1988’s Crypto-Anarchist Manifesto, “An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion.” In May’s time, the big thing missing to enable such a market was a payment system. Then, in 2008, came bitcoin and the blockchain.

In 2015, Ari Juels, now the Weill Family Foundation and Joan and Sanford I. Weill Professor at Cornell Tech and previously chief scientist at the cryptography company RSA, saw blacknet potential in Ethereum’s adoption of “smart contracts”, an idea that had been floating around since the 1990s. Smart contracts are computer programs that automatically execute transactions when specified conditions are met, without the need for a trusted intermediary to provide guarantees. Among other possibilities, they can run on blockchains – the public, tamperproof, shared ledgers that record cryptocurrency transactions.
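The core idea can be sketched as a toy in Python – a simulation for illustration only, nothing like real Ethereum code; the class, names, and logic here are all invented:

```python
# Toy sketch of a smart contract: a conditional payment that executes
# automatically once a specified condition is met, with no trusted
# intermediary vouching for either side.
class EscrowContract:
    def __init__(self, payer, payee, amount, condition):
        self.payer, self.payee = payer, payee
        self.amount = amount
        self.condition = condition   # a function of observed state
        self.settled = False

    def check_and_execute(self, observed_state):
        """Anyone may trigger this; payment happens only if the
        condition holds, and only once."""
        if not self.settled and self.condition(observed_state):
            self.payer["balance"] -= self.amount
            self.payee["balance"] += self.amount
            self.settled = True
        return self.settled

alice = {"balance": 100}
bob = {"balance": 0}
# Pay Bob 10 units if a shipment is recorded as delivered.
contract = EscrowContract(alice, bob, 10,
                          lambda state: state.get("delivered", False))
contract.check_and_execute({"delivered": False})  # condition unmet: no payment
contract.check_and_execute({"delivered": True})   # condition met: pays out
```

On a real blockchain the “observed state” would have to come from somewhere the contract can trust – which is exactly the role of the oracles discussed below.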

In the resulting research paper on criminal smart contracts (PDF), Juels and co-authors Ahmed Kosba and Elaine Shi wrote: “We show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).”

It’s not often a research paper becomes the basis for a techno-thriller novel, but Juels has prior form. His 2009 novel Tetraktys imagined that members of an ancient Pythagorean cult had figured out how to factor the products of large primes, thereby busting the widely-used public-key cryptography on which security on the Internet depends. Juels’ hero in that book was uniquely suited to help the NSA track down the miscreants because he was both a cryptographer and the well-schooled son of an expert on the classical world. Juels could almost be describing himself: before turning to cryptography he studied classical literature at Amherst and Oxford.

Juels’ new book, The Oracle, has much in common with his earlier work. His alter-ego here is a cryptographer working on blockchains and smart contracts. Links to the classical world – in this case, a cult derived from the oracle at Delphi – are provided by an FBI agent and art crime investigator who enlists his help when a rogue smart contract is discovered that offers $10,000 to kill an archeology professor, soon followed by a second contract offering $700,000 for a list of seven targets. Soon afterwards, our protagonist discovers he’s first on that list, and he has only a few days to figure out who wrote the code and save his own life. That quest also includes helping the FBI agent track down some Delphian artifacts that we learn from flashbacks to classical times were removed from the oracle’s temple and hidden.

The Delphi oracle, Juels writes, “revealed divine truth in response to human questions”. The oracles his cryptographer is working on are “a source of truth for questions asked by smart contracts about the real world”. In Juels’ imagining, the rogue assassination contract is issued with trigger words that could be expected to appear in a death announcement. When someone tries to claim the bounty, the smart contract checks news sources for those words, only paying out if it finds them. Juels has worked hard to make the details of both classical and cryptographic worlds comprehensible. They remain stubbornly complex, but you can follow the story easily enough even if you panic at the thought of math.
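The mechanism Juels imagines can be sketched as a toy in Python. To be clear, the trigger words, function names, and payout logic below are invented for illustration; the novel’s (and the paper’s) actual constructions are far more involved:

```python
# Toy sketch of the rogue contract's payout logic: an "oracle" scans
# news text for trigger words fixed in advance, and the bounty is
# released only if all of them appear together in one article.
TRIGGER_WORDS = {"professor", "archeology", "died"}   # invented example

def oracle_reports_death(news_articles):
    """Return True if any article contains every trigger word."""
    for article in news_articles:
        if TRIGGER_WORDS <= set(article.lower().split()):
            return True
    return False

def claim_bounty(news_articles, bounty=10_000):
    """Pay out only when the oracle confirms the trigger condition."""
    return bounty if oracle_reports_death(news_articles) else 0

claim_bounty(["The archeology professor gave a lecture"])   # no payout
claim_bounty(["The archeology professor died yesterday"])   # pays out
```

Even this crude version shows why the oracle is the crux: the contract itself never observes the real world, only whatever “source of truth” it has been wired to consult.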

The tension is real, both within and without the novel. Juels’ idea is credible enough that it’s a relief when he says the contracts as described are not feasible with today’s technology, and may never become so (perhaps especially because the fictional criminal smart contract is written in flawless computer code). The related paper also notes that some details of their scheme have been left out so as not to enable others to create these rogue contracts for real. Whew. For now.

Anachronistics

“In my mind, computers and the Internet arrived at the same time,” my twenty-something companion said, delivering an entire mindset education in one sentence.

Just a minute or two earlier, she had asked in some surprise, “Did bulletin board systems predate the Internet?” Well, yes: BBSs were a software package running on a single back room computer with a modem users dialed into, whereas the Internet is this giant sprawling mess of millions of computers connected together…simple first, complex later.

Her confusion is understandable: from her perspective, computers and the Internet did arrive at the same time, since her first conscious encounters with them were simultaneous.

But still, speaking as someone who first programmed a (mainframe, with punch cards) computer in 1972 as a student, who got her first personal computer in 1982, and got online in 1991 by modem and 1999 by broadband and to whom the sequence of events is memorable: wow.

A 25-year-old today was born in 1999 (the year I got broadband). Her counterpart 15 years hence (born 2014, the year a smartphone replaced my personal digital assistant) may think smartphones and the Internet were simultaneous. And sometime around 2045 *her* counterpart born in 2020 (two years before ChatGPT was released) might think generative text and image systems were contemporaneous with the first computers.

I think this confusion must have something to do with the speed of change in a relatively narrow sector. I’m sure that even though they all entered my life simultaneously, by the time I was 25 I knew that radio preceded TV (because my parents grew up with radio), bicycles preceded cars, and that handwritten manuscripts predated printed books (because medieval manuscripts). But those transitions played out over multiple lifetimes, if not centuries, and all those memories were personal. Few of us reminisce about the mainframes of the 1960s because most of us didn’t have access to them.

And yet, misunderstanding the timeline of those earlier technologies probably mattered less than misunderstanding the sequence of events in information technology does now. Jumbling the arrival dates of the pieces of information technology means failing to understand dependencies. What currently passes for “AI” could not exist without the ability to train models on the giant piles of data that the Internet and the web made possible, and that took 20 years to build. Neural networks pioneer Geoff Hinton was developing the underlying ideas as long ago as the 1980s, but it took until the last decade for the technology to become workable, because that’s how long it took to build sufficiently powerful computers and to amass enough training data. How do you understand the ongoing battle between those who wish to protect privacy via data protection laws and those who want data to flow freely without hindrance if you do not understand why those masses of data matter?

This isn’t the only such issue. A surprising number of people who should know better seem to believe that the solution to all our ills with social media is to destroy Section 230, apparently reasoning that if S230 allowed Big Tech to get big, it must be wrong. The reality is that it also allows small sites to exist, and it is the legal framework that makes content moderation possible. Improve it by all means, but understand its true purpose first.

Reviewing movies and futurist projections such as Vannevar Bush’s 1945 essay As We May Think (PDF) and Alan Turing’s 1950 paper Computing Machinery and Intelligence (PDF) doesn’t really help, because so many ideas arrive long before they’re feasible. The crew in the original 1966 Star Trek series (to say nothing of secret agent Maxwell Smart in 1965) were talking over wireless personal communicators. A decade earlier, Arthur C. Clarke (in The Nine Billion Names of God) and Isaac Asimov (in The Last Question) were putting computers – albeit analog ones – in their stories. Asimov in particular imagined a sequence that now looks prescient, beginning with something like a mainframe, moving on to microcomputers, and finishing up with a vast, fully interconnected network that can only be held in hyperspace. (OK, it took trillions of years, starting in 2061, but still…) Those writings undoubtedly inspired the technologists of the last 50 years when they decided what to invent.

This all led us to fakes: as the technology to create fake videos, images, and texts continues to improve, she wondered if we will ever be able to keep up. Just about every journalism site is asking some version of that question; they’re all awash in stories about new levels of fakery. My 25-year-old discussant believes the fakes will always be improving faster than our methods of detection – an arms race like computer security, to which I’ve compared problems of misinformation / disinformation before.

I’m more optimistic. I bet even a few years from now today’s versions of generative “AI” will look as primitive to us as the special effects in a 1963 episode of Dr Who or the magic lantern used to create the Knock apparitions do to generations raised on movies, TV, and computer-generated imagery. Humans are adaptable; we will find ways to identify what is authentic that aren’t obvious in the shock of the new. We might even go back to arguing in pubs.

Illustrations: Secret agent Maxwell Smart (Don Adams) talking on his shoe phone (via Wikimedia).
