A hole is a hole

We told you so.

By “we” I mean the thousands of privacy advocates, human rights activists, technical experts, and information security journalists.

By “so”, I mean: we all said repeatedly over decades that there is no such thing as a magic hole that only “good guys” can use. If you build a supposedly secure system but put in a hole to give the “authorities” access to communications, that hole can and will be exploited by “bad guys” you didn’t want spying on you.

The particular hole Chinese hackers used to spy on the US is the Communications Assistance for Law Enforcement Act (1994). CALEA mandates that telecommunications providers design their equipment so that they can wiretap any customer when law enforcement presents a warrant. At TechCrunch, Zack Whittaker recaps much of the history, tracing the technology giants’ new emphasis on end-to-end encryption to the 2013 Snowden revelations of the government’s spying on US citizens.

The mid-1990s were a time of profound change for telecommunications: the Internet was arriving, exchanges were converting from analog to digital, and deregulation was providing new competition for legacy telcos. In those pre-broadband years, hundreds of ISPs offered dial-up Internet access. Law enforcement could no longer just call up a single central office to place a wiretap. When CALEA was introduced, critics were vocal and prolific; for an in-depth history see Whitfield Diffie and Susan Landau’s book, Privacy on the Line (originally published 1998, second edition 2007). The net.wars archive includes a compilation of years of related arguments, and at Techdirt, Mike Masnick reviews the decades of law enforcement insistence that they need access to encrypted text. “Lawful access” is the latest term of art.

In the immediate post-9/11 shock, some of those who had insisted on the 1990s version of today’s “lawful access” – key escrow – took the opportunity to tell their opponents (us) that the attacks proved we’d been wrong. One such was the just-departed Jack Straw, home secretary from 1997 to (June) 2001, who told BBC Radio Four he blamed “…large parts of the industry, backed by some people who I think will now recognise they were very naive in retrospect”. That comment sparked the first net.wars column. We could now say, “Right back atcha.”

Whatever you call an encryption backdoor, building a hole into communications security was, is, and will always be a dangerous idea, as the Dutch government recently told the EU. Now, we have hard evidence.

***

The time is long gone when people used to be snobbish about Internet addresses (see net.wars-the-book, chapter three). Most of us are therefore unlikely to have thought much about the geekishly popular “.io”. It could be a new-fangled generic top-level domain – but it’s not. We have been reading linguistic meaning into what is in fact a country code. Which is all well and good, except that the country it belongs to is the Chagos Islands, also known as the British Indian Ocean Territory, which I had never heard of until the British government announced recently that it will hand the islands back to Mauritius (instead of asking the Chagos Islanders what they want…). Gareth Edwards made the connection: when that transfer happens, .io will cease to exist (h/t Charles Arthur’s The Overspill).

Edwards goes on to discuss the messy history of orphaned country code domains such as Yugoslavia’s and the Soviet Union’s. As a result, ICANN, the naming authority, now has strict rules mandating termination in such cases. This time, there’s a lot at stake: .io is a favorite among gamers, crypto companies, and many others, some of them substantial businesses. Perhaps a solution – such as setting .io up anew as a gTLD with its existing domains intact – will be found. But meantime, it’s worth noting that the widely used .tv (Tuvalu), .fm (Federated States of Micronesia), and .ai (Anguilla) are *also* country code domains.

***

The story of what’s going on with Automattic, the owner of the blogging platform WordPress.com, and WP Engine, which provides hosting and other services for businesses using WordPress, is hella confusing. It’s also worrying: WordPress, open source content management software overseen by the WordPress Foundation, powers a little over 40% of the Internet’s top ten million websites and more than 60% of the sites that use a content management system (including this one).

At Heise Online, Kornelius Kindermann offers one of the clearer explanations: Automattic, whose CEO, Matthew Mullenweg, is also a director of the WordPress Foundation and a co-creator of the software, wants WP Engine, which has been taken over by the investment company Silver Lake, to pay “trademark royalties” of 8% to the WordPress Foundation to support the software. WP Engine doesn’t wanna. Kindermann estimates the sum involved at $35 million. After the news of all that broke, 159 employees announced they were leaving Automattic.

The more important point is that, like the users of the encrypted services governments want to compromise, the owners of .io domains, or, ultimately, the Chagos Islanders themselves, WP Engine’s customers, some of them businesses worth millions, are hostages to the uncertainty surrounding other people’s decisions. Open source software is supposed to give users greater control. But as always, complexity brings experts and financial opportunities, and once there’s money everyone wants some of it.

Illustrations: View of the Chagos Archipelago taken during ISS Expedition 60 (NASA, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Blown

“This is a public place. Everyone has the right to be left in peace,” Jane (Vanessa Redgrave) tells Thomas (David Hemmings), whom she’s just spotted photographing her with her lover in the 1966 film Blow-Up, by Michelangelo Antonioni. The movie, set in London, proceeds as a mystery in which Thomas’s only tangible evidence is a grainy, blown-up shot of a blob that may be a murdered body.

Today, Thomas would probably be wielding a latest-model smartphone instead of a single lens reflex film camera. He would not bother to hide behind a tree. And Jane would probably never notice, much less challenge Thomas to explain his clearly-not-illegal, though creepy, behavior. Phones and cameras are everywhere. If you want to meet a lover and be sure no one’s photographing you, you don’t go to a public park, even one as empty as the film finds Maryon Park. Today’s 20-somethings grew up with that reality, and learned early to agree some gatherings are no-photography zones.

Even in the 1960s individuals had cameras, but taking high-quality images at a distance was the province of a small minority of experts; Antonioni’s photographer was a professional with his own darkroom and enlarging equipment. The first CCTV cameras went up in the 1960s; their proliferation became a public policy issue in the 1980s, and was propagandized as “for your safety” without much thought in the post-9/11 2000s. In the late 2010s, CCTV surveillance became democratized: my neighbor’s Ring camera means no one can leave an anonymous gift on their doorstep – or (without my consent) mine.

I suspect one reason we became largely complacent about ubiquitous cameras is that the images mostly remained unidentifiable, or at least unidentified. Facial recognition – especially the live variant police seem to feel they have the right to set up at will – is changing all that. Which all leads to this week, when Joseph Cox at 404 Media reports ($) (and Ars Technica summarizes) that two Harvard students have mashed up a pair of unremarkable $300 Meta Ray-Bans with the reverse image search service PimEyes and a large language model to produce I-XRAY, an app that identifies in near-real time most of the people they pass on the street, including their name, home address, and phone number.

The students – AnhPhu Nguyen and Caine Ardayfio – are smart enough to realize the implications, imagining for Cox the scenario of a random male spotting a young woman and following her home. This news is breaking the same week that the San Francisco Standard and others are reporting that two men in San Francisco stood in front of a driverless Waymo taxi to block it from proceeding while demanding that the female passenger inside give them her phone number (we used to give such males the local phone number for time and temperature).

Nguyen and Ardayfio aren’t releasing the code they’ve written, but what two people can do, others with fewer ethics can recreate independently, as 30 years of Black Hat and Def Con have proved. This is a new level of democratized surveillance. Today, giant databases like Clearview AI are largely accessible only to governments and law enforcement. But the data in them has been scraped from the web, like LLMs’ training data, and merged with commercial sources.

This latest prospective threat to privacy has been created by the marriage of three technologies that were developed separately by different actors without regard to one another and, more important, without imagining how one might magnify the privacy risks of the others. A connected car with cameras could also run I-XRAY.

The San Francisco story is a good argument against allowing cars on the roads without steering wheels, pedals, and other controls or *something* to allow a passenger to take charge to protect their own safety. In Manhattan cars waiting at certain traffic lights often used to be approached by people who would wash the windshield and demand payment. Experienced drivers knew to hang back at red lights so they could roll forward past the oncoming would-be washer. How would you do this in a driverless car with no controls?

We’ve long known that people will prank autonomous cars. Coverage focused on the safety of the *cars* and the people and vehicles surrounding them, not the passengers. Calling a remote technical support line for help is never going to get a good enough response.

What ties these two cases together – besides (potentially) providing new ways to harass women – is the collision between new technologies and human nature. Plus, the merger of three decades’ worth of piled-up data and software that can make things happen in the physical world.

Arguably, we should have seen this coming, but the manufacturers of new technology have never been good at predicting what weird things their users will find to do with it. This mattered less when the worst outcome was using spreadsheet software to write letters. Today, that sort of imaginative failure is happening at scale in software that controls physical objects and penetrates the physical world. The risks are vastly greater and far more unsettling. It’s not that we can’t see the forest for the trees; it’s that we can’t see the potential for trees to aggregate into a forest.

Illustrations: Jane (Vanessa Redgrave) and her lover, being photographed by Thomas (David Hemmings) in Michelangelo Antonioni’s 1966 film, Blow-Up.


Crowdstricken

This time two weeks ago the media were filled with images from airports clogged with travelers unable to depart because of…a software failure. Not a cyberattack, and not, as in 2017, limited to a single airline’s IT systems failure.

The outage wasn’t just in airports: NHS hospitals couldn’t book appointments, the London Stock Exchange news service and UK TV channel Sky News stopped functioning, and much more. It was the biggest computer system outage not caused by an attack to date, a watershed moment like 1988’s Internet worm.

Experienced technology observers quickly predicted: “bungled software update”. There are prior examples aplenty. In February, an AT&T outage lasted more than 12 hours, spanned 50 US states, Puerto Rico, and the US Virgin Islands, and blocked an estimated 25,000 attempted calls to the 911 emergency service. Last week, the Federal Communications Commission attributed the outage to an employee’s addition of a “misconfigured network element” to expand capacity without following the established procedure of peer review. The resulting cascade of failures was an automated response designed to prevent the misconfigured device from propagating. AT&T has put new preventative controls in place, and FCC chair Jessica Rosenworcel said the agency is considering how to increase accountability for failures to follow best practice.

Much of this history is recorded in Peter G. Neumann’s ongoing RISKS Forum mailing list. In 2014, an update Apple issued to fix a flaw in a health app blocked users of its then-new iPhone 6 from connecting. In 2004, a failed modem upgrade knocked Cox Communications subscribers offline. My first direct experience was in the 1990s, when for a day CompuServe UK subscribers had to dial Germany to pick up our email.

In these previous cases, though, the individuals affected had a direct relationship with the company that screwed up. What’s exceptional about CrowdStrike is that the directly affected “users” were its 29,000 customer businesses, many of them huge. It was those companies’ resulting failures that turned millions of us into hostages to technological misfortune.

What’s more, in those earlier outages only one company and its direct customers were involved, and understanding the problem was relatively simple. In the case of CrowdStrike, it was hard at first to pinpoint the source of the problem because the direct effects were scattered (only the Windows PCs awake to receive CrowdStrike updates) and the indirect effects were widespread.

The technical explanation of what happened, simplified, goes like this: CrowdStrike issued an update to its Falcon security software to block malware it had spotted exploiting a vulnerability in Windows. The updated Falcon software sparked system crashes as PCs reacted to protect themselves against potential low-level damage (like a circuit breaker in your house tripping to protect your wiring from overload). CrowdStrike realized the error and pushed out a corrected update 79 minutes later. That fixed machines that hadn’t yet installed the faulty update. The machines that had updated in those 79 minutes, however, were stuck in a doom loop, crashing every time they restarted. Hence the need for manual intervention to delete the faulty files so each machine could reboot successfully.
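The timing dependence is the crux: a machine’s fate turned entirely on when it happened to fetch the update. As a toy model (the 79-minute figure comes from the reporting above; the function and its messages are purely illustrative, not CrowdStrike’s actual behavior):

```python
FIX_SHIPPED_AT = 79  # minutes after the faulty update went out

def machine_state(update_fetched_at: int) -> str:
    """Toy model: a machine's fate depends on when it fetched the update.

    A machine that pulled the faulty file before the fix shipped crashes
    early in boot, before its updater can run, so it can never fetch the
    corrected file on its own: the doom loop.
    """
    if update_fetched_at < FIX_SHIPPED_AT:
        return "doom loop: needs manual removal of the faulty file"
    return "fine: received only the corrected update"
```

In the real event, “manual removal” meant booting each affected machine into a recovery mode by hand, which is why cleanup took days rather than the 79 minutes the fix itself took.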

Microsoft initially estimated that 8.5 million PCs were affected – but that’s probably a wild underestimate as the only machines it could count were those that had crash reporting turned on.

The root cause is still unclear. CrowdStrike has said it found a hole in its Content Validator Tool, which should have caught the flaw. Microsoft is complaining that a 2009 interoperability agreement forced on it by the EU required it to allow CrowdStrike’s software to operate at the low level in Windows at which the crashes occurred. It’s wrong, however, to blame companies for enabling automated updates; security protection has to respond to new threats in real time.

The first financial estimates are emerging. Delta Air Lines estimates the outage, which borked its crew tracking system for a week, cost it $500 million. CEO Ed Bastian told CNN, “They haven’t offered us anything.” Delta has hired the lawyer David Boies, whose high-profile history includes leading the successful 1990s US government prosecution of Microsoft, to file its lawsuit.

Delta will need to take a number. Massachusetts-based Plymouth County Retirement Association has already filed a class action suit on behalf of CrowdStrike shareholders in Texas federal court, where CrowdStrike is headquartered, for misrepresenting its software and its capabilities. CrowdStrike says the case lacks merit.

Lawsuits are likely the only way companies will get recompense unless they have insurance that covers supplier-caused system failures. Like all software manufacturers, CrowdStrike has disclaimed all liability in its terms of use.

In a social media post, Federal Trade Commission chair Lina Khan said that “These incidents reveal how concentration can create fragile systems.”

Well, yes. Technology experts have long warned of the dangers of monocultures that make our world more brittle. The thing is, we’re stuck with them because of scale. There were good reasons why the dozens of early network and operating systems consolidated: it’s simpler and cheaper for hiring, maintenance, and even security. Making our world less brittle will require holding companies – especially those that become significant points of failure – to higher standards of professionalism, including product liability for software, and requiring their customers to boost their resilience.

As for CrowdStrike, it is doomed to become that worst of all things for a company: a case study at business schools everywhere.

Illustrations: XKCD’s Dependency comic, altered by Mary Branscombe to reflect CrowdStrike’s reality.


Microsoft can remember it for you wholesale

A new theory: somewhere in the Silicon Valley universe there’s a cadre of techies who have eidetic memories and they’re feeling them start to slip. Panic time.

That’s my best explanation for Microsoft’s latest wheeze: Recall, a new feature for its Copilot+ PCs that will take what’s variously called a “snapshot” or a “screenshot” of your computer (all three monitors?) every five seconds and store it for future reference. Microsoft hasn’t explained much about Recall’s inner technical workings, but according to the announcement, the data will be stored locally and will be searchable via semantic associations and some sort of “AI”. Microsoft also says the data will not be used to train AI models.

The general anger and dismay at this plan brings back, almost nostalgically, memories of the 1990s, when Microsoft was near-universally hated as the evil monopolist dominating computing. In 2008, when Google was ten years old, a BBC presenter asked me if I thought Google would ever be hated as much as Microsoft was (not then, no). In 2012, veteran journalist Charles Arthur published the book Digital Wars about how Microsoft had stagnated and lost its lead. And then suddenly, in the last few years, it’s back on top.

Possibilities occur that Microsoft doesn’t mention. For example: could software be embedded into Windows to draw inferences from the data Recall saves? And could those inferences be forwarded to the company or used to target you with ads? That seems like a far more efficient way to invade users’ privacy than copying the data itself, if that’s what the company ultimately wants to do.

Lots of things on our computers already retain a “memory” of what we’ve been doing. Operating systems generate logs to help debug problems. Word processors retain a changelog, which powers the ability to undo mistakes. Web browsers have user-configurable histories; email software has archives; media players retain playlists. All of those are useful – but part of that usefulness is that they are contextual, limited, and either easily terminated by closing the relevant application or relatively easily edited to remove items that shouldn’t be kept.

It’s hard for almost everyone who isn’t Microsoft to understand the point of keeping everything by default. It seems like a feature only developers could love. I certainly would like Windows to be better at searching for stored files or my (Firefox) browser to be better at reloading that article I was reading yesterday. I have even longed for a personal version of Vannevar Bush’s Memex. As part of that, I might welcome a feature that let me hit a button to record the last five useful minutes of a meeting, or save a social media post to a local archive. But the key to that sort of memory expansion is curation, not remembering everything promiscuously. For most people, selective forgetting is how we survive the torrents of irrelevance hurled at us every day.

What Recall sounds most like is the lifelog science fiction writer Charlie Stross imagined in 2007 might be our future. Plummeting storage costs and expanding capacity, he reasoned, would make it possible to store *everything* in your pocket. Even then, there were (a very few) people doing that sort of thing, most notably Steve Mann, a University of Toronto professor who started wearing devices to comprehensively capture his life as a 1990s graduate student. Over the years, Mann has shrunk his personal gadget array from a laptop and peripherals to glasses and pocket devices. Many more people capture their surroundings now – but they do it on their phones. If Apple or Google were proposing a Recall feature for iOS or Android, the idea would seem a lot less weird.

The real issue is that there are many people who would like to be able to know what someone *else* has been doing on their computer at all times. Helicopter parents. Schools and teachers under government compulsion (see for example Prevent (PDF)). Employers. Border guards. Corporate spies. The Department of Work and Pensions. Authoritarian governments. Law enforcement and security agencies. Criminals. Domestic abusers… So developing any feature like this must include considering how to protect it against these threats. This does not appear to have happened.

Many others have written about the privacy issues in all this – the UK’s Information Commissioner’s Office is already investigating. At The Register, Richard Speed does a particularly good job of looking at some of the fine details. On Mastodon, Kevin Beaumont says inspection of the Copilot+ software suggests that Recall stores the text it extracts from all those snapshots in an easily copiable SQLite database.
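Beaumont’s point about the database is easy to illustrate in general terms: SQLite files carry no password protection by default, so any code running as the same user can read or copy one with a few lines of stock library calls. (The table and column names below are invented for this sketch; Recall’s actual schema is undocumented.)

```python
import sqlite3

# Stand-in for a local capture database, using an invented schema.
con = sqlite3.connect("recall_demo.db")
con.execute("CREATE TABLE IF NOT EXISTS captures (ts INTEGER, app TEXT, text TEXT)")
con.execute("INSERT INTO captures VALUES (1728000000, 'browser', 'sort code 01-02-03')")
con.commit()
con.close()

# "Attacker" side: a plain read by any same-user process; no credentials needed.
stolen = sqlite3.connect("recall_demo.db").execute(
    "SELECT app, text FROM captures"
).fetchall()
print(stolen)  # the capture table's contents, in plaintext
```

Whatever Recall’s real schema turns out to be, the filesystem is the only lock on the door.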

But there’s still more. The kind of archive Recall appears to construct can teach an attacker how the target thinks: not just what passwords they choose but how they devise them. Those patterns can be highly valuable. Granted, few targets are worth that level of attention, but it happens, as Peter Davies, a technical director at Thales, has often warned.

Recall is not the only move – see also flawed-AI-with-everything – that suggests that the computer industry, like some politicians and governments, is badly losing touch with the public. Increasingly, what they want to do seems unrelated to what the rest of us want. If they think things like Recall are a good idea they need to read more Philip K. Dick. And then don’t invent the Torment Nexus.

Illustrations: Arnold Schwarzenegger seeking better memories in the 1990 film Total Recall.


Irrevocable

One of the biggest advances in computing in my lifetime is the “Undo” button. Younger people will have no idea of this, but at one time if you accidentally deleted the piece you’d spent hours typing into your computer, it was just…gone forever.

This week, UK media reported on what seems to be an unusual but not unique case: a solicitor accidentally opened the wrong client’s divorce case on her computer screen and went on to apply for a final decree for the couple concerned. The court granted the divorce in a standard, automated 21 minutes, even though the specified couple had not yet agreed on a financial settlement. Despite acknowledging the error, the court now refuses to overturn the decree. UK lawyers of my acquaintance say this obvious unfairness may be because granting the final decree sets in motion other processes that are difficult to reverse.

That triggers a memory of the time I accidentally clicked on “cancel” instead of “check in” on a flight reservation, and casually, routinely, clicked again to confirm. I then watched in horror as the airline website canceled the flight. The undo button in this case was to phone customer service. Minutes later, they reinstated the reservation and thereafter I checked in without incident. Undone!

Until the next day, when I arrived in the US and my name wasn’t on the manifest. The one time I couldn’t find my boarding pass… After a not-long wait that seemed endless in a secondary holding area (which I used to text people to tell them where I was, just in case), I explained the rogue cancellation and was let go. Whew! (And yes, I know: citizen, white, female privilege.)

“Ease of use” should include making it hard to make irrecoverable mistakes. And maybe a grace period before automated processes cascade.

The Guardian quotes family court division head Sir Andrew McFarlane explaining that the solicitor’s error was not easy to make: “Like many similar online processes, an operator may only get to the final screen where the final click of the mouse is made after traveling through a series of earlier screens.” Huh? If you think you have opened the right case, then those are the screens you would expect to see. Why wouldn’t you go ahead?

At the Law Gazette, John Hyde reports that the well-known law firm in question, Vardag, is backing the young lawyer who made the error, describing it as a “slip up with the drop down menu” on “the new divorce portal”, noting that similar errors had happened “a few times” and that it felt like a design error.

“Design errors” can do a lot of damage. Take paying a business or person via online banking. In the UK, until recently, you entered the account name, number, and sort code, and confirmed to send. If you made a mistake, tough. If the account details came from a scammer rather than the intended recipient, tough. It was only in 2020 that most banks began participating in “Confirmation of Payee”, which verifies the account with the receiving bank and checks with you that the name is correct. In 2020, Which? estimated that confirming payees could have saved £320 million in bank transfer fraud since 2017.
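The check itself is conceptually simple, which makes its late arrival the more striking. A bank-side version might look something like this toy sketch (the real Confirmation of Payee scheme is specified by Pay.UK; the matching threshold and wording here are invented for illustration):

```python
from difflib import SequenceMatcher

def confirm_payee(entered_name: str, name_on_account: str) -> str:
    """Toy payee check: exact match, close match, or no match."""
    a = entered_name.casefold().strip()
    b = name_on_account.casefold().strip()
    if a == b:
        return "match"
    # 0.8 is an invented threshold for illustration, not the scheme's rule.
    if SequenceMatcher(None, a, b).ratio() > 0.8:
        return f"close match: did you mean '{name_on_account}'?"
    return "no match: check the details before sending"
```

Even a crude check like this would flag a one-character slip before the money left the account, which is the whole point.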

Similarly, while many more important factors caused the Horizon scandal, software design played its part: subpostmasters could not review past transactions as they could on paper.

Many computerized processes are blocked unless precursor requirements have been completed and checked for compliance. A system that grants legally binding decrees similarly ought to incorporate checks to ensure that all the necessary steps have been completed.

Arguably, software design is failing users. In ecommerce, user-hostile design takes the form of deceptive, or “dark”, patterns: user interfaces built deliberately to manipulate users into buying/spending more than they intended. The clutter that makes Amazon unusable directs shoppers to its house brands.

User interface design is where I began writing about computers, circa 1990. Windows 3 was new, and the industry was just discovering that continued growth depended on reaching past those who *liked* software to be difficult. I vividly recall a usability specialist at then-market leader Lotus describing the first time her company’s programmers watched ordinary people using their software. The first one fails to complete the task: “Well, that’s a stupid person.” The second one: “Well, that’s a stupid person, too.” The third: “Where do you find these people?” But after watching a couple more, they got it.

In the law firm’s case, the designers likely said, “This system is just for expert users.” True, but what they’re expert in is law, not software. Hopefully the software will now be redesigned to reflect the rule that it should be as easy as possible to do the work but as hard as possible to make unrecoverable mistakes (the tolerance principle). It’s a simple idea that goes all the way back to Donald Norman’s classic 1988 book, The Design of Everyday Things.

At a guess, if today’s “AI” automation systems become part of standard office work, making mistakes will become easier rather than harder, partly because they make systems more inscrutable. In addition, the systems being digitized are increasingly complex, with more significant consequences reaching deep into people’s lives, and intended to serve the commissioning corporations’ short-term desires. It will not be paranoid to believe the world is stacked against us.

Illustrations: Cary Grant and Rosalind Russell as temporarily divorced newspapermen in His Girl Friday (1940).


Infallible

It’s a peculiarity of the software industry that no one accepts product liability. If your word processor gibbers your manuscript, if your calculator can’t subtract, if your phone’s security hole results in your bank account’s being drained, if a chatbot produces entirely false results… it’s your problem, not the software company’s. As software starts driving cars, running electrical grids, and deciding who gets state benefits, the lack of liability will matter in new and dangerous ways. In his 2006 paper, The Economics of Information Security, Ross Anderson writes about the “moral-hazard effect” that connects liability and fraud: if you are not liable, you become lazy and careless. Hold that thought.

To that, add: in the British courts there is a legal presumption that computers are reliable. Suggestions that this law should be changed go back at least 15 years, but this week they gained new force. The presumption sounds absurd when applied to today’s complex computer systems, but the law was framed with smaller mechanical devices, such as watches and Breathalyzers, in mind. It means, however, that someone – say, a subpostmaster – accused of theft has to find a way to show that the accounting computer system was not operating correctly.

Put those two factors together and you get the beginnings of the Post Office Horizon scandal, which currently occupies just about all of Britain following ITV’s New Year’s airing of the four-part drama Mr Bates vs the Post Office.

For those elsewhere: the Horizon case is thought to be one of the worst miscarriages of justice in British history. The vast majority of the country’s post offices are run by subpostmasters, each of whom runs their own business under a lengthy and detailed contract. Many, as I learned in 2004, operate their post office counters inside other businesses; most are newsagents, but some share premises with hairdressers or occupy old police stations.

In 1999, the Post Office began rolling out the “Horizon” computer accounting system, which was developed by ICL, formerly a British company but by then owned by Fujitsu. Subpostmasters soon began complaining that the new system reported shortfalls where none existed. Under their contract, subpostmasters bore all liability for discrepancies. The Post Office accordingly demanded payment and prosecuted those from whom it was not forthcoming. Many lost their businesses, their reputations, their homes, and much of their lives, and some were criminally convicted.

In May 2009, Computer Weekly’s Karl Flinders published the first of dozens of articles on the growing scandal. Perhaps most important: she located seven subpostmasters who were willing to be identified. Soon afterwards, Welsh former subpostmaster Alan Bates convened the Justice for Subpostmasters Alliance, which continues to press for exoneration and compensation for the many hundreds of victims.

Pieces of this saga were known, particularly after a 2015 BBC Panorama documentary. Following the drama’s airing, the UK government is planning legislation to exonerate all the Horizon victims and fast-track compensation. The program has also drawn new attention to the ongoing public inquiry, which…makes the Post Office look so much worse, as do the Panorama team’s revelations of its attempts to suppress the evidence they uncovered. The Metropolitan Police is investigating the Post Office for fraud.

Two elements stand out in this horrifying saga. First: each subpostmaster calling the help line for assistance was told they were the only one having trouble with the system. They were further isolated by being required to sign NDAs. Second: the Post Office insisted that the system was “robust” – that is, “doesn’t make mistakes”. The defendants were doubly screwed; only their accuser had access to the data that could prove their claim that the computer was flawed, and they had no view of the systemic pattern.

It’s extraordinary that the presumption of reliability has persisted this long, since “infallibility” is the claim the banks made when customers began reporting phantom withdrawals years ago, as Ross Anderson discussed in his 1993 paper Why Cryptosystems Fail (PDF). Thirty years later, no one should be trusting any computer system so blindly. Granted, in many cases, doing what the computer says is how you keep your job, but that shouldn’t apply to judges. Or CEOs.

At the Guardian, Alex Hern reports that legal and computer experts have been urging the government to update the law to remove the legal presumption of reliability, especially given the rise of machine learning systems whose probabilistic nature means they don’t behave predictably. We are not yet seeing calls for the imposition of software liability, though the Guardian reports there are suggestions that if the ongoing public inquiry finds Fujitsu culpable for producing a faulty system the company should be required to repay the money it was paid for it. The point, experts tell me, is not that product liability would make these companies more willing to admit their mistakes, but that liability would make them and their suppliers more careful to ensure up front the quality of the systems they build and deploy.

The Post Office saga is a perfect example of Anderson’s moral hazard. The Post Office laid off its liability onto the subpostmasters but retained the right to conduct investigations and prosecutions. When the deck is so stacked, you have to expect a collapsed house of cards. And, as Chris Grey writes, the government’s refusal to give UK-resident EU citizens physical proof of status means it’s happening again.

Illustrations: Local post office.
