Core values

Follow the money; follow the incentives.

Cybersecurity is an intractable problem for many of the same reasons climate change is: often the people paying the cost are not the people who derive the benefits. The foundation of the Workshop on the Economics of Information Security is often traced to the 2001 paper Why Information Security is Hard, by the late Ross Anderson. There were earlier hints, most notably in the 1999 paper Users Are Not the Enemy by Angela Sasse and Anne Adams.

Anderson’s paper directly examined and highlighted the influence of incentives on security behavior. Sasse’s paper was ostensibly about password policies and the need to consider human factors in designing them. But hidden underneath was the fact that the company department that called her in was not the IT team or the help desk team but accounting. Help desk costs to support users who forgot their passwords were rising so fast they threatened to swamp the company.

At the 23rd WEIS, held this week in Dallas (see also 2020), papers studied questions like which values drive people’s decisions when hit by ransomware attacks (Zinaida Benenson); whether the psychological phenomenon of delay discounting could be used to understand the security choices people make (Einar Snekkenes); and whether a labeling scheme would help get people to pay for security (L Jean Camp).

The latter study found that if you keep the label simple, people will actually pay for security. It’s a seemingly small but important point: throughout the history of personal computing, security competes with so many other imperatives that it’s rarely a factor in purchasing decisions. Among those other imperatives: cost, convenience, compatibility with others, and ease of use. But also: it remains near-impossible to evaluate how secure a product or provider is. Only the largest companies are in a position to ask detailed questions of cloud providers, for example.

Or, in an example provided by Chitra Marti, rare is the patient who can choose a hospital based on the security arrangements it has in place to protect its data. Marti asked a question I haven’t seen before: what is the role of market concentration in cybersecurity? To get at this, Marti looked at the decade’s experience of electronic medical records in hospitals since the big post-2008 recession push to digitize. Since 2010, more than 150 million records have been breached.

Of course, monoculture is a known problem in cybersecurity as it is in agriculture: if every machine runs the same software, all machines are vulnerable to the same attacks. Similarly, the downsides of monopoly – poorer service, higher prices, lower quality – are well known. Marti’s study tying the two together found correlations in the software hospitals run and rarely change, even after a breach, though they do adopt new security measures. Hospitals choose software vendors for all sorts of reasons, such as popularity, widespread use in their locality, or market leadership. The decision to change may be made harder by the positive benefits of the existing choice, which would be lost and may outweigh the negatives.

These broader incentives help explain, as Richard Clayton set out, why distributed denial of service attacks remain so intractable. A key problem is “reflectors”, which amplify attacks by using spoofed IP addresses to send requests where the size of the response will dwarf the request. With this technique, a modest amount of outgoing traffic lands a flood on the chosen target (the one whose IP address has been spoofed). Fixing infrastructure to prevent these reflectors is tedious and only prevents damage to others. Plus, the provider involved may have to sacrifice the money they are paid to carry the traffic. For reasons like these, over the years the size of DDoS attacks has grown until only the largest anti-DDoS providers can cope with them. These realities are also why the early effort to push providers to fix their systems – RFC 2267 – failed. The incentives, in classic WEIS terms, are misaligned.
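The arithmetic of reflection is what makes these attacks so cheap for the attacker. A rough sketch of that arithmetic (the amplification factors here are approximate published estimates – for example from US-CERT advisories on UDP-based amplification – and real values vary widely with configuration):

```python
# Illustrative only: why a modest spoofed request stream lands a flood
# on the victim. Factors are rough published estimates, not measurements.
AMPLIFICATION = {
    "dns": 28,         # open resolver answering large queries
    "ntp": 556,        # the infamous monlist response
    "memcached": 10000, # exposed UDP memcached instances
}

def reflected_volume_mbps(request_mbps: float, protocol: str) -> float:
    """Traffic arriving at the spoofed victim for a given outgoing rate."""
    return request_mbps * AMPLIFICATION[protocol]

# A mere 10 Mbps of spoofed NTP requests becomes a multi-gigabit flood.
flood = reflected_volume_mbps(10, "ntp")  # 5560 Mbps at the victim
```

The asymmetry also explains the misaligned incentives: the cost of fixing a reflector falls on its operator, while the flood lands on someone else entirely.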

Clayton was able to use the traffic data he was already collecting to create a short list of the largest reflected amplified DDoS attacks each week and post it on a private Slack channel, so that providers could inspect their logs and trace the attacks back to their sources.

At this point a surprising thing happened: the effort made a difference. Reflected amplified attacks dropped noticeably. The reasons, he and Ben Collier argue in their paper, have to do with the social connections among network engineers, the most senior of whom helped connect the early Internet and have decades-old personal relationships with their peers that have been sustained through forums such as NANOG and M3AAWG. This social capital and shared set of values kicked in when Clayton’s action lists moved the problem from abuse teams into the purview of network engineers. Individual engineers began racing ahead; Amazon recently highlighted AWS engineer Tom Scholl’s work tracing back traffic and getting attacks stopped.

Clayton concluded by proposing “infrastructural capital” to cover the mix of human relationships and the position in the infrastructure that makes them matter. It’s a reminder that underneath those giant technology companies there still lurks the older ethos on which the Internet was founded, and humans whose incentives are entirely different from profit-making. And also: that sometimes intractable problems can be made less intractable.

Illustrations: WEIS waits for the eclipse.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The second greatest show on earth

There is this to be said for seeing your second total eclipse of the sun: if the first one went well, you can be more relaxed about what you get to see. In 2017, sitting in Centennial Park in Nashville, we saw everything. So in Dallas in 2024, I could tell myself, “It will be interesting even if we can’t see the sun.”

As it happened, we had cloud with lots of breaks. The cloud obscured such phenomena as Baily’s beads and the diamond ring – but the play of light on the broken clouds as the sun popped back out was amazing all by itself. The corona-surrounded sun playing peek-a-boo with us was stunningly beautiful. And all too soon it was over. It seemed shorter than 2017, even though totality was nearly twice as long – 3:52 compared to about two minutes.

One thing we had in Nashville but definitely missed in Dallas was a phenomenon that’s less often discussed: the 360-degree sunset all around the horizon. Sitting in Dallas surrounded by buildings, we couldn’t see the horizon as we could in that Nashville park.

On Sunday, April 7, it seemed like half the country was moving into position for today in a process that involved placing a bet on the local weather. I had friends scattered in Vermont, Montreal, and several locations in upstate New York. Our intermittent cloud compared favorably with at least one of the New York locations. Daytime darkness and watching and listening to animals’ reactions is still interesting…but it remains frustrating to know that the Big Show is going on without you.

The hundreds of photos on show hide the real thrill of seeing totality: the sense of connection to humanity past, present, and future, and across the animal kingdom. The strangers around you become part of your life, however briefly. The inexorable movements of earth, sun, and moon put us all in our place.

The toast bubble

To The Big Bang Theory (“The Russian Rocket Reaction”, S5e05):

Howard: Someone has to go up with the telescope as a payload specialist, and guess who that someone is!
Sheldon: Muhammed Li.
Howard: Who’s Muhammed Li?
Sheldon: Muhammed is the most common first name in the world, Li the most common surname, and as I didn’t know the answer I thought that gave me a mathematical edge.

Experts tell me that exchange doesn’t perfectly explain how generative AI works; it’s too simplistic. Generative AI – or a Sheldon made more nuanced by his writers – takes into account contextual information to calculate the probable next word. So it wouldn’t pick from all the first names and surnames in the world. It might, however, pick from the names of all the payload specialists or some other group it correlated, or confect one.
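To make the contrast concrete, here is a toy sketch (the corpus and counts are invented for illustration, and a bigram count is of course vastly cruder than a real language model): Sheldon’s strategy is a base-rate guess over all words, while even minimal context-conditioning changes the answer.

```python
# Toy contrast: base-rate guessing vs. conditioning on context.
from collections import Counter, defaultdict

corpus = ("the payload specialist is howard . "
          "the payload specialist is a trained astronaut . "
          "the most common name in the world is muhammed .").split()

# Sheldon's method: ignore context, pick the globally most common word.
base_rate = Counter(corpus).most_common(1)[0][0]  # "the"

# Context-aware: condition on the previous word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Most likely next word given the preceding one."""
    return bigrams[prev].most_common(1)[0][0]

# Given "payload", the context-aware guess is "specialist", not "the".
```

Real generative models condition on far more context than one word, but the principle is the same: the distribution narrows as context accumulates.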

More than a year on, I still can’t find a use for generative “AI”, which remains unreliable and inscrutable. At Exponential View, Azeem Azhar has written about the “answer engine” Perplexity.ai. While it’s helpful that Perplexity provides references for its answers, it was producing misinformation by the third question I asked it, and offered no improvement when challenged. Wikipedia spent many years being accused of unreliability, too, but at least there you can read the talk page and understand how the editors arrived at the text they present.

On The Daily Show this week, Jon Stewart ranted about AI and interviewed FTC chair Lina Khan. Well-chosen video clips showed AI company heads’ true colors, telling the public AI is an assistant for humans while telling money people and each other that AI will enable greater productivity with fewer workers and help eliminate “the people tax”.

More interesting, however, was Khan’s note that the FTC is investigating the investments and partnerships in AI to understand if they’re giving current technology giants undue influence in the marketplace. If, in her example, all the competitors in a market outsource their pricing decisions to the same algorithm they may be guilty of price fixing even if they’re not actively colluding. And these markets are consolidating at an ever-earlier stage. Instagram and WhatsApp had millions of users by the time Facebook thought it prudent to buy them rather than let them become dangerous competitors. AI is pre-consolidating: the usual suspects have been buying up AI startups and models at pace.

“More profound than fire or electricity,” Google CEO Sundar Pichai tells a camera at one point, speaking about AI. The last time I heard this level of hyperbole it was about the Internet in the 1990s, shortly before the bust. A friend’s answer to this sort of thing has never varied: “I’d rather have indoor plumbing.”

***

Last week the Federal District Court in Manhattan sentenced FTX CEO Sam Bankman-Fried to 25 years in prison for stealing $8 billion. In the end, you didn’t have to understand anything complicated about cryptocurrencies; it was just good old embezzlement.

And then the price of bitcoin went *up*. At the Guardian, Molly White explains that this is because cryptoevangelists are pushing the idea that the sector can reach its full potential, now that Bankman-Fried and other bad apples have been purged. But, as she says, nothing has really changed. No new use case has come along to make cryptocurrencies more useful, more valuable, or more trustworthy.

Both cryptocurrencies and generative AI are bubbles. The difference is that the AI bubble will likely leave behind it some technologies and knowledge that are genuinely useful; it will be like the Internet, which boomed and busted before settling in to change the world. Cryptocurrencies are more like the Dutch tulips. Unfortunately, in the meantime both these bubbles are consuming energy at an insane rate. How many wildfires is bitcoin worth?

**

I’ve seen a report suggesting that the last known professional words of the late Ross Anderson may have been, “Do they take us for fools?”

He was referring to the plans, debated in the House of Commons on March 25, to amend the Investigatory Powers Act to allow the government to pre-approve (or disapprove) new security features technology firms want to introduce. The government is of course saying it’s all perfectly innocent, intended to keep the country safe. But recent clashes in the decades-old conflict over strong encryption have seen the technology companies roll out features like end-to-end encryption (Meta) and decide not to implement others, like client-side scanning (Apple). The latest in a long line of UK governments that want access to encrypted text was hardly going to take that quietly. So here we are, debating this yet again. Yet the laws of mathematics still haven’t changed: there is no such thing as a security hole that only “good guys” can use.

***

Returning to AI, it appears that costs may lead Google to charge for access to its AI-enhanced search, as Alex Hern reports at the Guardian. Hern thinks this is good news for its AI-focused startup competitors, which already charge for top-tier tools and who are at risk of being undercut by Google. I think it’s good for users by making it easy to avoid the AI “enhancement”. Of course, DuckDuckGo already does this without all the tracking and monopoly mishegoss.

Illustrations: Jon Stewart uninspired by Mark Zuckerberg’s demonstration of AI making toast.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

RIP Ross J. Anderson

RIP Ross J. Anderson, who died on March 28 at 67, leaving behind a giant smoking crater in the fields of security engineering and digital rights activism. For the former, he was a professor of security engineering at Cambridge University and Edinburgh University, a Fellow of the Royal Society, and a recipient of the BCS Lovelace medal. His giant textbook Security Engineering is a classic. In digital rights activism, he founded the Foundation for Information Policy Research (see also tenth anniversary and 15th anniversary) and the UK Crypto mailing list, and, understanding that the important technology laws were being made at the EU level, he pushed for the formation of European Digital Rights to act as an umbrella organization for the national digital rights organizations springing up in many countries. He was also one of the pioneers of security economics, and founded the annual Workshop on the Economics of Information Security, convening on April 8 for the 23rd time.

One reason Anderson was so effective in the area of digital rights is that he had the ability to look forward and see the next challenge while it was still forming. Even more important, he had an extraordinary ability to explain complex concepts in an understandable manner. You can experience this for yourself at the YouTube channel where he posted a series of lectures on security engineering or by reading any of the massive list of papers available at his home page.

He had a passionate and deep-seated sense of injustice. In the 1980s and 1990s, when customers complained about phantom ATM withdrawals and the banks tried to claim their software was infallible, he not only conducted a detailed study but adopted fraud in payment systems as an ongoing research interest.

He was a crucial figure in the fight over encryption policy, opposing key escrow in the 1990s and “lawful access” in the 2020s, for the same reasons: the laws of mathematics say that there is no such thing as a hole only good guys can exploit. His name is on many key research papers in this area.

In the days since his death, numerous former students and activists have come forward with stories of his generosity and wit, his eternal curiosity to learn new things, and the breadth and depth of his knowledge. And also: the forthright manner that made him cantankerous.

I think I first encountered Ross at the 1990s Scrambling for Safety events organized by Privacy International. He was slow to trust journalists, shall we say, and it was ten years before I felt he’d accepted me. The turning point was a conference where we both arrived at the lunch counter at the same time. I happened to be out of cash, and he bought me a sandwich.

Privately, in those early days I sometimes referred to him as the “mad cryptographer” because interviews with him often led to what seemed like off-the-wall digressions. One such led to an improbable story about the US Embassy in Moscow being targeted with microwaves in an attempt at espionage. This, I found later, was true. Still, I felt it best not to quote it in the interests of getting people to listen to what he was saying about crypto policy.

My favorite Ross memory, though, is this: one night shortly before Christmas maybe ten years ago – by this time we were friends – when I interviewed him over Skype for yet another piece. It was late in Britain, and I’m not sure he was fully sober. Before he would talk about security, knowing of my interest in folk music, he insisted on playing several tunes on the Scottish chamber pipes. He played well. Pipe music was another of his consuming interests, and he brought to it as much intensity and scholarship as he did to all his other passions.

Ross J. Anderson, b. September 15, 1956, d. March 28, 2024.

Alabama never got the bomb

There is this to be said for nuclear weapons: they haven’t scaled. Since 1965, when Tom Lehrer warned about proliferation (“We’ll try to stay serene and calm | When Alabama gets the bomb”), a world of treaties, regulation, and deterrents has helped, but even if it hadn’t, building and updating nuclear weapons remains stubbornly expensive. (That said, the current situation is scary enough.)

The same will not be true of drones, James Patton Rogers explained in a recent talk at King’s College London about his new book, Precision: A History of American Warfare. Already, he says, drones are within reach for non-governmental actors such as Mexican drug cartels. At the BBC, Jonathan Marcus estimated in February 2022 that more than 100 nations and non-state actors already have combat drones and that these systems are proliferating rapidly. The brief moment in which the US and Israel had an exclusive edge is already gone; Rogers says Iran and Turkey are “drone powers”. Back to the BBC in 2022: Marcus writes that some terrorist groups had already been able to build attack drone systems using commercial components for a few hundred dollars. Rogers put the number of countries with drone capability in 2023 at 113, plus 65 armed groups. He also called them one of the “greatest threats to state security”, noting the speed and abruptness with which they’ve flipped from protective to threatening, and their potential for “assassinations, strikes, saturation attacks”.

Rogers, who calls his book an “intellectual history”, traces the beginnings of precision to the end of the long, muddy, casualty-filled conflict of World War I. Never again: instead, remote attacks on military-industrial targets that limit troops on the ground and loss of life. The arrival of the atomic bomb and Russia’s development of same changed focus to the Dr Strangelove-style desire for the technology to mount massive retaliation. John F. Kennedy successfully campaigned on the missile gap. (In this part of Rogers’ presentation, it was impossible not to imagine how effective this amount of energy could have been if directed toward climate change…)

The 1990s and the Gulf War brought a revival of precision in the form of the first cruise missiles and the first drones. But as long ago as 1988 there were warnings that the US could not monopolize drones and they would become a threat. “We need an international accord to control drone proliferation,” Rogers said.

But the threat to state security was not Rogers’ answer when an audience member asked him, “What keeps you awake at night?”

“Drone mass killings targeting ethnic diasporas in cities.”

Authoritarian governments have long reached out to control opposition outside their borders. In 1974, I rented an apartment from the Greek owner of a local highly-regarded restaurant. A day later, a friend reacted in horror: didn’t I know that restaurateur was persona-non-patronize because he had reported Greek student protesters in Ithaca, New York to the military junta then in power and there had been consequences for their families back home? No, I did not.

As an informant, however, my landlord’s powers were limited. He could go to and photograph protests; if he couldn’t identify the students he could still send their pictures. But he couldn’t amass comprehensive location data tracking their daily lives, operate a facial recognition system, or monitor them on social media and infer their social graphs. A modern authoritarian government equipped with Internet connections can do all of that and more, and the data it can’t gather itself it can obtain by purchase, contract, theft, hacking, or compulsion.

In Canada, opponents of Chinese Communist Party policies report harassment and intimidation. Freedom House reports that China’s transnational repression also includes spyware, digital threats, physical assault, and cooption of other countries, all escalating since 2014. There’s no reason for this sort of thing to be limited to the Chinese (and Russians); Citizen Lab has myriad examples of governments’ use of spyware to target journalists, political opponents, and activists, inside or outside the countries where they’re active.

Today, even in democratic countries there is an ongoing trend toward increased and more militaristic surveillance of migrants and borders. In 2021, Statewatch reported on the militarization of the EU’s borders along the Mediterranean, including a collaboration between Airbus and two Israeli companies to use drones to intercept migrant vessels. Another workshop that same year made plain the way migrants are being dataveilled by both governments and the aid agencies they rely on for help. In 2022, the courts ordered the UK government to stop seizing the smartphones belonging to migrants arriving in small boats.

Most people remain unaware of this unless some politician boasts about it as part of a tough-on-immigration platform. In general, rights for any kind of foreigners – immigrants, ethnic minorities – are a hard sell, if only because non-citizens have no vote, and an even harder one against the headwind of “they are not us” rhetoric. Threats of the kind Rogers imagined are not the sort nations are in the habit of protecting against.

It isn’t much of a stretch to imagine all those invasive technologies being harnessed to build a detailed map of particular communities. From there, given affordable drones, you just need to develop enough malevolence to want to kill them off, and be the sort of country that doesn’t care if the rest of the world despises you for it.

Illustrations: British migrants to Australia in 1949 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Facts are scarified

The recent doctored Palace photo has done almost as much as the arrival of generative AI to raise fears that in future we will completely lose the ability to identify fakes. The royal photo was sloppily composited – no AI needed – for reasons unknown (though Private Eye has a suggestion). A lot of conspiracy theorizing could be avoided if the palace would release the untouched original(s), but as things are, the photograph is a perfect example of how to provide the fuel for spreading nonsense to 400 million people.

The most interesting thing about the incident was discovering the rules media apply to retouching photos. AP specified, for example, that it does not use altered or digitally manipulated images. It allows cropping and minor adjustments to color and tone where necessary, but bans more substantial changes, even retouching to remove red eye. As Holly Hunter’s character says, trying to uphold standards in the 1987 movie Broadcast News (written by James Brooks), “We are not here to stage the news.”

The desire to make a family photo as appealing as possible is understandable; the motives behind spraying the world with misinformation are less clear and more varied. I’ve long argued here that this is why combating misinformation and disinformation resembles cybersecurity: both involve a complex problem and a diversity of actors and agendas. At last year’s Disinformation Summit in Cambridge, cybersecurity was, sadly, one of the missing communities.

Just a couple of weeks ago the BBC announced its adoption of C2PA, a standard for authenticating images developed by a group of technology and media companies including the BBC, the New York Times, Microsoft, and Adobe. The BBC says that many media organizations are beginning to adopt C2PA, and even Meta is considering it. Edits must be signed, and they create a chain of provenance all the way back to the original photo. In 2022, the BBC and the Royal Society co-hosted a workshop on digital provenance, following a Royal Society report, at which C2PA featured prominently.
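The signed-edit chain is, at heart, a tamper-evident provenance log. This toy sketch is my own illustration, not the actual C2PA manifest format (which uses X.509 certificates and COSE signatures rather than a shared-key HMAC): each edit records a hash of the resulting image plus the signature of the previous entry, so altering any step breaks the chain back to the original capture.

```python
import hashlib
import hmac
import json

def sign(entry: dict, key: bytes) -> str:
    # Stand-in for a real cryptographic signature.
    payload = json.dumps(entry, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def add_edit(chain: list, action: str, image_bytes: bytes, key: bytes) -> list:
    """Append a signed entry linking back to the previous one."""
    entry = {
        "action": action,
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "prev": chain[-1]["sig"] if chain else None,
    }
    entry["sig"] = sign(entry, key)
    return chain + [entry]

def verify(chain: list, key: bytes) -> bool:
    """Walk the chain from capture onward, checking every link."""
    prev = None
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "sig"}
        if body["prev"] != prev or sign(body, key) != entry["sig"]:
            return False
        prev = entry["sig"]
    return True
```

A verifier can thus confirm not just who signed an image, but the full sequence of edits between capture and publication – which is exactly the property a newsroom wants and a casual smartphone poster will never bother with.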

That’s potentially a valuable approach for publishing and broadcast, where the conduit to the public is controlled by one of a relatively small number of organizations. And you can see why those organizations would want it: they need, and in many cases are struggling to retain, public trust. It is, however, too complex a process for the hundreds of millions of people with smartphone cameras posting images to social media, and unworkable for citizen journalists capturing newsworthy events in real time. Ancillary issue: sophisticated phone cameras try so hard to normalize the shots we take that they falsify the image at source. In 2020, Californians attempting to capture the orange color of their smoke-filled sky were defeated by autocorrection that turned it grey. So, many images are *originally* false.

In a lengthy blog post, Neal Krawetz analyzes the difficulties with C2PA. He lists security flaws, and he also opposes the “appeal to authority” approach, which he dubs a “logical fallacy”. In the context of the Internet, it’s worse than that; we already know what happens when a tiny handful of commercial companies (in this case, chiefly Adobe) become the gatekeepers for billions of people.

All of this was why I was glad to hear about work in progress at a workshop last week, led by Mansoor Ahmed-Rengers, a PhD candidate studying system security: Human-Oriented Proof Standard (HOPrS). The basic idea is to build an “Internet-wide, decentralised, creator-centric and scalable standard that allows creators to prove the veracity of their content and allows viewers to verify this with a simple ‘tick’.” Co-sponsoring the workshop was Open Origins, a project to distinguish between synthetic and human-created content.

It’s no accident that HOPrS’ mission statement echoes the ethos of the original Internet; as security researcher Jon Crowcroft explains, it’s part of long-running work on redecentralization. Among HOPrS’ goals, Ahmed-Rengers listed: minimal centralization; the ability for anyone to prove their content; Internet-wide scalability; open decision making; minimal disruption to workflow; and easy interpretability of proof/provenance. The project isn’t trying to cover all bases – that’s impossible. Given the variety of motivations for fakery, there will have to be a large ecosystem of approaches. Rather, HOPrS is focusing specifically on the threat model of an adversary determined to sow disinformation, giving journalists and citizens the tools they need to understand what they’re seeing.

Fakes are as old as humanity. In a brief digression, we were reminded that the early days of photography were full of fakery: the Cottingley Fairies, the Loch Ness monster, many dozens of spirit photographs. The Cottingley Fairies, cardboard cutouts photographed by Elsie Wright, 16, and Frances Griffiths, 9, were accepted as genuine by Sherlock Holmes creator Sir Arthur Conan Doyle, famously a believer in spiritualism. To today’s eyes, trained on millions of photographs, they instantly read as fake. Or take Ireland’s Knock apparitions, flat, unmoving, and, philosophy professor David Berman explained in 1979, magic lantern projections. Our generation, who’ve grown up with movies and TV, would I think have instantly recognized that as fake, too. Which I believe tells us something: yes, we need tools, but we ourselves will get better at detecting fakery, as unlikely as it seems right now. The speed with which the royal photo was dissected showed how much we’ve learned just since generative AI became available.

Illustrations: The first of the Cottingley Fairies photographs (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Borderlines

Think back to the year 2000. New York’s World Trade Center still stood. Personal digital assistants were a niche market. There were no smartphones (the iPhone arrived in 2007) or tablets (the iPad took until 2010). Social media was nascent; Facebook first opened in 2004. The Good Friday agreement was just two years old, and for many in Britain “terrorists” were still “Irish”. *That* was when the UK passed the Terrorism Act (2000).

Usually when someone says the law can’t keep up with technological change they mean that technology can preempt regulation at speed. What the documentary Phantom Parrot shows, however, is that technological change can profoundly alter the consequences of laws already on the books. The film’s worked example is Schedule 7 of the 2000 Terrorism Act, which empowers police to stop, question, search, and detain people passing through the UK’s borders. They do not need prior authority or suspicion, but may only stop and question people for the purpose of determining whether the individual may be or have been concerned in the commission, preparation, or instigation of acts of terrorism.

Today this law means that anyone arriving at the UK border may be compelled to unlock access to data charting their entire lives. The Hansard record of the debate on the bill shows clearly that lawmakers foresaw problems: the classification of protesters as terrorists, the uselessness of fighting terrorism by imprisoning the innocent (Jeremy Corbyn), the reversal of the presumption of innocence. But they could not foresee how far-reaching the powers the bill granted would become.

The film’s framing story begins in November 2016, when Muhammed Rabbani arrived at London’s Heathrow Airport from Doha and was stopped and questioned by police under Schedule 7. They took his phone and laptop and asked for his passwords. He refused to supply them. On previous occasions, when he had similarly refused, they’d let him go. This time, he was arrested. Under Schedule 7, the penalty for such a refusal can be up to three months in jail.

Rabbani is managing director of CAGE International, a human rights organization that began by focusing on prisoners seized under the war on terror and expanded its mission to cover “confronting other rule of law abuses taking place under UK counter-terrorism strategy”. Rabbani’s refusal to disclose his passwords was, he said later, because he was carrying 30,000 confidential documents relating to a client’s case. A lawyer can claim client confidentiality; an NGO cannot. In 2018, the appeals court ruled the password demands were lawful.

In September 2017, Rabbani was convicted. He was given a 12-month conditional discharge and ordered to pay £620 in costs. As Rabbani says in the film, “The law made me a terrorist.” No one suspected him of being a terrorist or of placing anyone in danger, but the judge made clear she had no choice under the law, and so he was convicted of a terrorism offense. On appeal in 2018, his conviction was upheld. We see him collect his returned devices – five years on from his original detention.

Britain is not the only country that regards him with suspicion. Citing his conviction, in 2023 France banned him, and, he claims, Poland deported him.

Unsurprisingly, CAGE is on the first list of groups that may be dubbed “extremist” under the new definition of extremism released last week by communities secretary Michael Gove. The direct consequence of this designation is a ban on participation in public life – chiefly, meetings with central and local government. The expansion of the meaning of “extremist”, however, is alarming activists on all sides.

Director Kate Stonehill tells the story of Rabbani’s detention partly through interviews and partly through a reenactment using wireframe-style graphics and a synthesized voice that reads out questions and answers from the interview transcripts. A cello of doom provides background ominance. Laced through this narrative are others. A retired law enforcement officer teaches a class in using extraction and analysis tools, in which we see how extensive the information available to them really is. Ali Al-Marri and his lawyer review his six years of solitary detention as an enemy combatant in Charleston, South Carolina. Lastly, Stonehill draws on Ryan Gallagher’s reporting, which exposed the titular Phantom Parrot, the program to exploit the data retained under Schedule 7. There are no records of how many downloads have been taken.

The retired law enforcement officer’s class is practically satire. While saying that he himself doesn’t want to be tracked for safety reasons, he tells students to grab all the data they can when they have the opportunity. They are in Texas: “Consent’s not even a problem.” Start thinking outside of the box, he tells them.

What the film does not stress is this: rights are largely suspended at all borders. In 2022, the UK extended Schedule 7 powers to include migrants and refugees arriving in boats.

The movie’s future is bleak. At the Chaos Computer Congress, a speaker warns that gait recognition, eye movement detection, speech analysis (accents, emotion), and other types of analysis will be much harder to escape and will enable watchers to do far more with the ever-vaster stores of data collected from and about each of us.

“These powers are capable of being misused,” said Douglas Hogg in the 1999 Commons debate. “Most powers that are capable of being misused will be misused.” The bill passed 210-1.

Illustrations: Still shot from the wireframe reenactment of Rabbani’s questioning in Phantom Parrot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: The Bill Gates Problem

The Bill Gates Problem: Reckoning with the Myth of the Good Billionaire
By Tim Schwab
Metropolitan Books
ISBN: 978-1-250-85009-6

Thirty years ago, the Federal Trade Commission began investigating one of the world’s largest technology companies on antitrust grounds. Was it leveraging its monopoly in one area to build dominance in others? Did it bully smaller competitors into disclosing their secrets, which it then copied? And so on. That company was Microsoft; Windows gave it leverage over office productivity software, web browsers, and media players; and its leader was Bill Gates. In 1999, the courts ruled Microsoft a monopoly.

At the time, it was relatively commonplace for people to complain that Gates was insufficiently charitable. Why wasn’t he more philanthropic, given his vast and increasing wealth? (Our standards for billionaire wealth were lower back then.) Be careful what you wish for…

The transition from monopolist mogul to beneficent social entrepreneur is where Tim Schwab starts in The Bill Gates Problem: Reckoning with the Myth of the Good Billionaire. In Schwab’s view, the transformation owes much to well-executed PR, in which category he includes the many donations the foundation makes to journalism organizations.

I have heard complaints for years that the Bill and Melinda Gates Foundation’s approach to philanthropy favors expensive technological interventions over cheaper, well-established ones. In education that might mean laptops and edtech software rather than training teachers; in medicine that might mean vaccine research rather than clean water. Schwab’s investigative work turns up dozens of such stories in the areas BMGF works in: family planning, education, health. Yet, Schwab writes, citing numerous sources for his figures, for all the billions BMGF has poured into these areas, it has failed to meet its stated objectives.

You can argue that case, but Schwab moves on from there to examine the damaging effects of depending on a billionaire, no matter how smart and well-intentioned, to finance services that might more properly be the business of the state. No one elected Gates, and no one has voted on the priorities he has chosen to set. The covid pandemic provides a particularly good example. One of the biggest concerns as efforts to produce vaccines got underway was ensuring that access would not be limited to rich countries. Many believed that the most efficient way of doing this was to refrain from patenting the vaccines, and help poorer countries build their own production facilities. Gates was one of those who opposed this approach, arguing that patents were necessary to reward pharmaceutical companies for the investment they poured into research, and also that few countries had the expertise to make the vaccines. Gates gave in to pressure and reversed his position in May 2021 to support a “narrow waiver”. Reading that BMGF is the biggest funder of the WHO and remembering his preference for technological interventions made me wonder: how much do we have Gates to thank for the emphasis on vaccines and the reluctance to push cheaper non-pharmaceutical interventions like masks, HEPA filters, and ventilation in countries like the UK?

Schwab goes into plenty of detail about all this. But his wider point is to lay out the power Gates’s massive wealth – both the foundation’s and his own – gives him over the charitable sector and, through public-private partnerships, many of the nations in which he operates. Schwab also calls Gates’s approach “philanthropic colonialism” because the bulk of his donations go to organizations based in the West, rather than directly to their counterparts elsewhere.

Pointing out the amount of taxpayer subsidy the foundation gets through the tax exemptions charities get, Schwab asks if we’re really getting value for our money. Wouldn’t we be better off collecting taxes and setting our own agendas? Is there really any such thing as a “good” billionaire?

Competitive instincts

This week – Wednesday, March 6 – saw the EU’s Digital Markets Act come into force. As The Verge reminds us, the law is intended to give users more choice and control by forcing technology’s six biggest “gatekeepers” to embrace interoperability and avoid preferencing their own offerings across 22 specified services. The six: Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft. Alphabet’s covered list is the longest: advertising, app store, search engine, maps, and shopping, plus Android, Chrome, and YouTube. For Apple, it’s the app store, operating system, and web browser. Meta’s list includes Facebook, WhatsApp, and Instagram, plus Messenger, Ads, and Facebook Marketplace. Amazon: third-party marketplace and advertising business. Microsoft: Windows and internal features. ByteDance just has TikTok.

The point is to enable greater competition by making it easier for us to pick a different web browser, uninstall unwanted features (like Cortana), or refuse the collection and use of data to target us with personalized ads. Some companies are haggling. Meta, for example, is trying to get Messenger and Marketplace off the list, while Apple has managed to get iMessage removed from the list. More notably, though, the changes Apple is making to support third-party app stores have been widely criticized as undermining any hope of success for independents.

Americans visiting Europe are routinely astonished at the number of cookie consent banners that pop up as they browse the web. Comments on Mastodon this week have been a reminder that those banners were the websites’ churlish choice: they implemented the 2009 Cookie Directive and 2018 General Data Protection Regulation in user-hostile ways. It remains to be seen how grown-up the technology companies will be in this new round of legal constraints. Punishing users won’t get the EU law changed.

***

The last couple of weeks have seen a few significant outages among Internet services. Two weeks ago, AT&T’s wireless service went down for many hours across the US after a failed software update. On Tuesday, while millions of Americans were voting in the presidential primaries, it was Meta’s turn, when a “technical issue” took out both Facebook and Instagram (and with the latter, Threads) for a couple of hours. Concurrently but separately, users of Ad Manager had trouble logging in at Google, and users of Microsoft Teams and exTwitter also reported some problems. Ironically, Meta’s outage could have been fixed faster if the engineers trying to repair it hadn’t had trouble gaining remote access to the necessary servers (and couldn’t get into the physical building because their passes didn’t work either).

Outages like these should serve as reminders not to put all your login eggs in one virtual container. If you use Facebook to log into other sites, besides the visibility you’re giving Meta into your activities elsewhere, those sites will be inaccessible any time Facebook goes down. In the case of AT&T, one reason this outage was so disturbing – the FCC is formally investigating it – is that the company has applied to get rid of its landlines in California. While lots of people no longer have landlines, they’re important in rural areas where cell service can be spotty, some services such as home alarm systems and other equipment depend on them, and they function in emergencies when electric power fails.

But they should also remind us that the infrastructure we’re deprecating in favor of “modern” Internet stuff was more robust than the new systems we’re increasingly relying on. A home with smart devices that cannot function without an uninterrupted Internet connection is far more fragile, with more points of failure, than one without them, just as a paper map still works when your phone is dead. At The Verge, Jennifer Pattison Tuohy tests a bunch of smart kitchen appliances including a faucet you can operate via Alexa or Google voice assistants. As in digital microwave ovens, telling the faucet the exact temperature and flow rate you want…seems unnecessarily detailed. “Connect with your water like never before,” the faucet manufacturer’s website says. Given the direction of travel of many companies today, I don’t want new points of failure between me and water.

***

It has – already! – been three years since Australia’s News Media Bargaining Code led to Facebook and Google signing three-year deals that have primarily benefited Rupert Murdoch’s News Corporation, owner of most of Australia’s press. A week ago, Meta announced it will not renew the agreement. At The Conversation, Rod Sims, who chaired the commission that formulated the law, argues it’s time to force Meta into the arbitration the code created. At ABC Science, however, James Purtill counters that the often “toxic” relationship between Facebook and news publishers means that forcing the issue won’t solve the core problem of how to pay for news, since advertising has found elsewheres it would rather be. (Separately, in Europe, 32 media organizations covering 17 countries have filed a €2.1 billion lawsuit against Google, matching a similar one filed last year in the UK, alleging that the company abused its dominant position to deprive them of advertising revenue.)

Purtill predicts, I think correctly, that attempting to force Meta to pay up will instead lead Facebook to ban news, as it did in Canada following the passage of a similar law. Facebook needed news once; it doesn’t now. But societies do. Suddenly, I’m glad to pay the BBC’s license fee.

Illustrations: Red deer (via Wikimedia.)


Review: The Oracle

The Oracle
by Ari Juels
Talos Press
ISBN: 978-1-945863-85-1
Ebook ISBN: 978-1-945863-86-8

In 1994, a physicist named Timothy C. May posited the idea of an anonymous information market he called blacknet. With anonymity secured by cryptography, participants could trade government secrets. And, he wrote in 1988’s Crypto-Anarchist Manifesto, “An anonymous computerized market will even make possible abhorrent markets for assassinations and extortion.” In May’s time, the big thing missing to enable such a market was a payment system. Then, in 2008, came bitcoin and the blockchain.

In 2015, Ari Juels, now the Weill Family Foundation and Joan and Sanford I. Weill Professor at Cornell Tech but previously chief scientist at the cryptography company RSA, saw blacknet potential in Ethereum’s adoption of “smart contracts”, an idea that had been floating around since the 1990s. Smart contracts are computer programs that automatically execute transactions when specified conditions are met, without the need for a trusted intermediary to provide guarantees. Among other possibilities, they can run on blockchains – the public, tamperproof, shared ledger that records cryptocurrency transactions.
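To make the core idea concrete – code that holds funds and releases them only when a stated condition is met, with no intermediary deciding – here is a toy sketch in Python. Real smart contracts run as chain code (on Ethereum, typically in Solidity), and all the names here are invented for illustration:

```python
# Toy illustration of a smart contract: funds locked by code, released
# automatically when a condition over observed facts holds. A sketch
# only; nothing here resembles actual on-chain execution.

class EscrowContract:
    def __init__(self, payer, payee, amount, condition):
        self.payer = payer
        self.payee = payee
        self.amount = amount
        self.condition = condition  # a function of observed facts
        self.paid = False

    def settle(self, facts):
        """Anyone may call this; payment occurs only if the condition holds."""
        if not self.paid and self.condition(facts):
            self.paid = True
            return (self.payee, self.amount)  # funds released, once
        return None

# Example: pay out only once a shipment is confirmed delivered.
contract = EscrowContract("alice", "bob", 100,
                          condition=lambda facts: facts.get("delivered", False))
assert contract.settle({"delivered": False}) is None
assert contract.settle({"delivered": True}) == ("bob", 100)
assert contract.settle({"delivered": True}) is None  # pays only once
```

The point of the design is in the `settle` method: neither party, nor any third party, exercises judgment – the code alone decides, which is exactly what makes the criminal variants below hard to stop.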

In the resulting research paper on criminal smart contracts (PDF), Juels and co-authors Ahmed Kosba and Elaine Shi wrote: “We show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).”

It’s not often a research paper becomes the basis for a techno-thriller novel, but Juels has prior form. His 2009 novel Tetraktys imagined that members of an ancient Pythagorean cult had figured out how to factor the products of large prime numbers, thereby busting the widely-used public key cryptography on which security on the Internet depends. Juels’ hero in that book was uniquely suited to help the NSA track down the miscreants because he was both a cryptographer and the well-schooled son of an expert on the classical world. Juels could almost be describing himself: before turning to cryptography he studied classical literature at Amherst and Oxford.

Juels’ new book, The Oracle, has much in common with his earlier work. His alter-ego here is a cryptographer working on blockchains and smart contracts. Links to the classical world – in this case, a cult derived from the oracle at Delphi – are provided by an FBI agent and art crime investigator who enlists his help when a rogue smart contract is discovered that offers $10,000 to kill an archeology professor, soon followed by a second contract offering $700,000 for a list of seven targets. Soon afterwards, our protagonist discovers he’s first on that list, and he has only a few days to figure out who wrote the code and save his own life. That quest also includes helping the FBI agent track down some Delphian artifacts that we learn from flashbacks to classical times were removed from the oracle’s temple and hidden.

The Delphi oracle, Juels writes, “revealed divine truth in response to human questions”. The oracles his cryptographer is working on are “a source of truth for questions asked by smart contracts about the real world”. In Juels’ imagining, the rogue assassination contract is issued with trigger words that could be expected to appear in a death announcement. When someone tries to claim the bounty, the smart contract checks news sources for those words, only paying out if it finds them. Juels has worked hard to make the details of both classical and cryptographic worlds comprehensible. They remain stubbornly complex, but you can follow the story easily enough even if you panic at the thought of math.
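The mechanism Juels describes can be sketched in a few lines of Python. This is my own hedged reconstruction of the book’s premise, not code from the novel or the paper, and the trigger words are invented: an oracle reduces a news feed to facts, and the contract pays only when all its trigger words have appeared.

```python
# Toy sketch of the novel's rogue-contract mechanism: a smart contract
# consults an "oracle" (here, just a list of headlines) and pays a
# claimant only if its pre-set trigger words all appear in the news.

TRIGGER_WORDS = {"professor", "archeologist", "died"}

def oracle_reports(headlines):
    """The oracle's answer about the real world: words seen in the news."""
    words = set()
    for h in headlines:
        words |= {w.strip(".,!?").lower() for w in h.split()}
    return words

def contract_pays_out(headlines, triggers=TRIGGER_WORDS):
    # Pay only if every trigger word has been reported.
    return triggers <= oracle_reports(headlines)

assert not contract_pays_out(["Markets rally on tech earnings"])
assert contract_pays_out(["Beloved archeologist and professor died at 62"])
```

The sketch also shows why the oracle is the interesting part: the contract itself never sees the world, only the oracle’s digest of it, so whoever controls or corrupts the oracle controls the payout.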

The tension is real, both within and without the novel. Juels’ idea is credible enough that it’s a relief when he says the contracts as described are not feasible with today’s technology, and may never become so (perhaps especially because the fictional criminal smart contract is written in flawless computer code). The related paper also notes that some details of their scheme have been left out so as not to enable others to create these rogue contracts for real. Whew. For now.