Microsoft can remember it for you wholesale

A new theory: somewhere in the Silicon Valley universe there’s a cadre of techies who have eidetic memories and they’re feeling them start to slip. Panic time.

That’s my best explanation for Microsoft’s latest wheeze, Recall, a new feature for its Copilot assistant that will take what’s variously called a “snapshot” or a “screenshot” of your computer (all three monitors?) every five seconds and store it for future reference. Microsoft hasn’t explained much about Recall’s inner technical workings, but according to the announcement, the data will be stored locally and will be searchable via semantic associations and some sort of “AI”. Microsoft also says the data will not be used to train AI models.

The general anger and dismay at this plan brings back, almost nostalgically, memories of the 1990s, when Microsoft was near-universally hated as the evil monopolist dominating computing. In 2008, when Google was ten years old, a BBC presenter asked me if I thought Google would ever be hated as much as Microsoft was (not then, no). In 2012, veteran journalist Charles Arthur published the book Digital Wars about how Microsoft had stagnated and lost its lead. And then suddenly, in the last few years, it’s back on top.

Possibilities occur that Microsoft doesn’t mention. For example: could software be embedded into Windows to draw inferences from the data Recall saves? And could those inferences be forwarded to the company or used to target you with ads? That seems like a far more efficient way to invade users’ privacy than copying the data itself, if that’s what the company ultimately wants to do.

Lots of things on our computers already retain a “memory” of what we’ve been doing. Operating systems generate logs to help debug problems. Word processors retain a changelog, which powers the ability to undo mistakes. Web browsers have user-configurable histories; email software has archives; media players retain playlists. All of those are useful – but part of that usefulness is that they are contextual, limited, and either easily terminated by closing the relevant application or relatively easily edited to remove items that shouldn’t be kept.

It’s hard for almost everyone who isn’t Microsoft to understand the point of keeping everything by default. It seems like a feature only developers could love. I certainly would like Windows to be better at searching for stored files or my (Firefox) browser to be better at reloading that article I was reading yesterday. I have even longed for a personal version of Vannevar Bush’s Memex. As part of that, I might welcome a feature that let me hit a button to record the last five useful minutes of a meeting, or save a social media post to a local archive. But the key to that sort of memory expansion is curation, not remembering everything promiscuously. For most people, selective forgetting is how we survive the torrents of irrelevance hurled at us every day.

What Recall sounds most like is the lifelog science fiction writer Charlie Stross imagined in 2007 might be our future. Plummeting storage costs and expanding capacity, he reasoned, would make it possible to store *everything* in your pocket. Even then, there were (a very few) people doing that sort of thing, most notably Steve Mann, a University of Toronto professor who started wearing devices to comprehensively capture his life as a 1990s graduate student. Over the years, Mann has shrunk his personal gadget array from a laptop and peripherals to glasses and pocket devices. Many more people capture their surroundings now – but they do it on their phones. If Apple or Google were proposing a Recall feature for iOS or Android, the idea would seem a lot less weird.

The real issue is that there are many people who would like to be able to know what someone *else* has been doing on their computer at all times. Helicopter parents. Schools and teachers under government compulsion (see for example Prevent (PDF)). Employers. Border guards. Corporate spies. The Department for Work and Pensions. Authoritarian governments. Law enforcement and security agencies. Criminals. Domestic abusers… So developing any feature like this must include considering how to protect the data it creates against these threats. This does not appear to have happened.

Many others have written about the privacy issues in all this – the UK’s Information Commissioner’s Office is already investigating. At The Register, Richard Speed does a particularly good job of looking at some of the fine details. On Mastodon, Kevin Beaumont says inspection of the Copilot+ software suggests that Recall stores the text it extracts from all those snapshots in an easily copied SQLite database.
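
To make the risk concrete, here is a minimal sketch – the path, file name, and table schema are entirely hypothetical, not Recall’s actual layout – of how little code it would take for anything running under your account to search such a plaintext store:

```python
# A minimal sketch of why a plaintext SQLite store is a worry: anything that
# runs as the logged-in user could read it with the standard library alone.
# The path and schema below are hypothetical, not Recall's actual layout.
import sqlite3

DB_PATH = r"C:\Users\me\AppData\Local\Recall\snapshots.db"  # hypothetical path

conn = sqlite3.connect(DB_PATH)
query = (
    "SELECT captured_at, extracted_text FROM snapshots "
    "WHERE extracted_text LIKE ?"
)
for captured_at, text in conn.execute(query, ("%password%",)):
    print(captured_at, text[:80])
conn.close()
```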

But there’s still more. The kind of archive Recall appears to construct can teach an attacker how the target thinks: not just what passwords they choose but how they devise them. Those patterns can be highly valuable. Granted, few targets are worth that level of attention, but it happens, as Peter Davies, a technical director at Thales e-Security, has often warned.

Recall is not the only move – see also flawed-AI-with-everything – that suggests that the computer industry, like some politicians and governments, is badly losing touch with the public. Increasingly, what they want to do seems unrelated to what the rest of us want. If they think things like Recall are a good idea, they need to read more Philip K. Dick, and then resist the urge to invent the Torment Nexus.

Illustrations: Arnold Schwarzenegger seeking better memories in the 1990 film Total Recall.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Irrevocable

One of the biggest advances in computing in my lifetime is the “Undo” button. Younger people will have no idea of this, but at one time if you accidentally deleted the piece you’d spent hours typing into your computer, it was just…gone forever.

This week, UK media reported on what seems to be an unusual but not unique case: a solicitor accidentally opened the wrong client’s divorce case on her computer screen and went on to apply for a final decree for the couple concerned. The court’s automated process granted the divorce in the standard 21 minutes, even though that couple had not yet agreed on a financial settlement. Despite acknowledging the error, the court now refuses to overturn the decree. UK lawyers of my acquaintance say that this obvious unfairness may be because granting the final decree sets in motion other processes that are difficult to reverse.

That triggers a memory of the time I accidentally clicked on “cancel” instead of “check in” on a flight reservation, and casually, routinely, clicked again to confirm. I then watched in horror as the airline website canceled the flight. The undo button in this case was to phone customer service. Minutes later, they reinstated the reservation and thereafter I checked in without incident. Undone!

Until the next day, when I arrived in the US and my name wasn’t on the manifest. The one time I couldn’t find my boarding pass… After a not-long wait that seemed endless in a secondary holding area (time I used to text people to tell them where I was, just in case), I explained the rogue cancellation and was let go. Whew! (And yes, I know: citizen, white, female privilege.)

“Ease of use” should include making it hard to make irrecoverable mistakes. And maybe a grace period before automated processes cascade.
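
For what it’s worth, a grace period is not hard to build. A minimal sketch follows, with hypothetical names throughout rather than any real system’s API: queue the irreversible action, wait, and let a human cancel it before anything cascades.

```python
# A minimal sketch of a grace period: irreversible actions are queued and only
# executed after a delay, so an accidental click can still be taken back.
# All names here are hypothetical, not any real system's API.
import threading

class GracePeriodQueue:
    def __init__(self, delay_seconds: float):
        self.delay = delay_seconds
        self.pending = {}  # action_id -> timer

    def schedule(self, action_id, action):
        timer = threading.Timer(self.delay, self._run, args=(action_id, action))
        self.pending[action_id] = timer
        timer.start()

    def cancel(self, action_id) -> bool:
        timer = self.pending.pop(action_id, None)
        if timer is None:
            return False  # too late: the action has already run
        timer.cancel()
        return True

    def _run(self, action_id, action):
        self.pending.pop(action_id, None)
        action()

# The cancellation (or final decree) only becomes final after the delay.
queue = GracePeriodQueue(delay_seconds=3600)
queue.schedule("booking-123", lambda: print("flight cancelled for good"))
# ...the operator realises the mistake within the hour...
print(queue.cancel("booking-123"))  # True: nothing irreversible happened
```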

The Guardian quotes family court division head Sir Andrew McFarlane explaining that the solicitor’s error was not easy to make: “Like many similar online processes, an operator may only get to the final screen where the final click of the mouse is made after traveling through a series of earlier screens.” Huh? If you think you have opened the right case, then those are the screens you would expect to see. Why wouldn’t you go ahead?

At the Law Gazette, John Hyde reports that the well-known law firm in question, Vardag, is backing the young lawyer who made the error, describing it as a “slip up with the drop down menu” on “the new divorce portal”, noting that similar errors had happened “a few times” and saying it felt like a design error.

“Design errors” can do a lot of damage. Take paying a business or person via online banking. In the UK, until recently, you entered the account name, number, and sort code, and confirmed to send. If you made a mistake, tough. If the account details came from a scammer rather than from the recipient you thought you were paying, tough. It was only in 2020 that most banks began participating in “Confirmation of Payee”, which checks the name you entered against the receiving bank’s records and warns you if it doesn’t match. In 2020, Which? estimated that confirming payee could have saved £320 million in bank transfer fraud since 2017.
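
The check itself is conceptually simple. Here is a minimal sketch of a Confirmation of Payee-style name match; the function, threshold, and matching rule are my own illustrative assumptions, not the scheme’s actual specification.

```python
# A minimal sketch of a Confirmation of Payee-style check: before money moves,
# the name the payer typed is compared with the name the receiving bank holds
# for that account number and sort code, and anything short of a clear match
# produces a warning. The matching rule and thresholds are illustrative only.
from difflib import SequenceMatcher

def check_payee(entered_name: str, registered_name: str) -> str:
    score = SequenceMatcher(
        None, entered_name.casefold(), registered_name.casefold()
    ).ratio()
    if score > 0.95:
        return "match"        # safe to proceed
    if score > 0.70:
        return "close match"  # show the registered name, ask the payer to confirm
    return "no match"         # warn strongly before allowing the payment

print(check_payee("W M Grossman", "W M Grossman"))      # match
print(check_payee("W M Grossman", "Wendy M Grossman"))  # close match
print(check_payee("W M Grossman", "Acme Gutters Ltd"))  # no match
```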

Similarly, while many more important factors caused the Horizon scandal, software design played its part: subpostmasters could not review past transactions as they could on paper.

Many computerized processes are blocked unless precursor requirements have been completed and checked for compliance. A legally binding system like this one similarly ought to incorporate checks that all the necessary prior steps have been completed.
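
A minimal sketch of that kind of gate, with case fields and rules I have invented purely for illustration: the final, irreversible step simply refuses to run while anything on its checklist is outstanding.

```python
# A minimal sketch of precondition gating: the irreversible final step refuses
# to run until every prerequisite is satisfied. The fields and rules here are
# hypothetical, invented for illustration, not the real divorce portal's logic.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    conditional_order_granted: bool
    financial_settlement_agreed: bool
    waiting_period_elapsed: bool

def apply_for_final_order(case: Case) -> None:
    outstanding = [
        name
        for name, done in [
            ("conditional order", case.conditional_order_granted),
            ("financial settlement", case.financial_settlement_agreed),
            ("waiting period", case.waiting_period_elapsed),
        ]
        if not done
    ]
    if outstanding:
        raise ValueError(f"{case.case_id}: cannot finalise, outstanding: {outstanding}")
    print(f"{case.case_id}: final order issued")

try:
    apply_for_final_order(Case("case-1234", True, False, True))
except ValueError as err:
    print(err)  # blocked, instead of being granted in 21 minutes
```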

Arguably, software design is failing users. In ecommerce, user-hostile design takes the form of deceptive, or “dark”, patterns: user interfaces built deliberately to manipulate users into buying or spending more than they intended. The clutter that makes Amazon unusable directs shoppers to its house brands.

User interface design is where I began writing about computers, circa 1990. Windows 3 was new, and the industry was just discovering that continued growth depended on reaching past those who *liked* software to be difficult. I vividly recall a usability person at then-market leader Lotus telling me about the first time her company’s programmers watched ordinary people using their software. The first one fails to complete the task. “Well, that’s a stupid person.” The second one. “Well, that’s a stupid person, too.” The third one. “Where do you find these people?” But after watching a couple more, they got it.

In the case of the divorce portal, the designers likely said, “This system is just for expert users”. True, but what those users are expert in is law, not software. Hopefully the software will now be redesigned to reflect the rule that it should be as easy as possible to do the work but as hard as possible to make unrecoverable mistakes (the tolerance principle). It’s a simple idea that goes all the way back to Donald Norman’s classic 1988 book The Design of Everyday Things.

At a guess, if today’s “AI” automation systems become part of standard office work, making mistakes will become easier rather than harder, partly because they make systems more inscrutable. In addition, the systems being digitized are increasingly complex, carry consequences that reach deep into people’s lives, and are built to serve the commissioning corporations’ short-term desires. It will not be paranoid to believe the world is stacked against us.

Illustrations: Cary Grant and Rosalind Russell as temporarily divorced newspapermen in His Girl Friday (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Infallible

It’s a peculiarity of the software industry that no one accepts product liability. If your word processor garbles your manuscript, if your calculator can’t subtract, if your phone’s security hole results in your bank account’s being drained, if a chatbot produces entirely false results… it’s your problem, not the software company’s. As software starts driving cars, running electrical grids, and deciding who gets state benefits, the lack of liability will matter in new and dangerous ways. In his 2006 paper, The Economics of Information Security, Ross Anderson writes about the “moral-hazard effect” that connects liability and fraud: if you are not liable, you become lazy and careless. Hold that thought.

To that, add: in the British courts, there is a legal presumption that computers are reliable. Suggestions that this law should be changed go back at least 15 years, but this week they gained new force. It sounds absurd if applied to today’s complex computer systems, but the law was framed with simpler devices such as watches and Breathalyzers in mind. It means, however, that someone – say a subpostmaster – accused of theft has to find a way to show that the accounting computer system was not operating correctly.

Put those two factors together and you get the beginnings of the Post Office Horizon scandal, which currently occupies just about all of Britain following ITV’s New Year’s airing of the four-part drama Mr Bates vs the Post Office.

For those elsewhere: the Post Office Horizon case is thought to be one of the worst miscarriages of justice in British history. The vast majority of the country’s post offices are run by subpostmasters, each of whom runs their own business under a lengthy and detailed contract. Many, as I learned in 2004, operate their post office counters inside other businesses; most are newsagents, but some share premises with hairdressers or occupy old police stations.

In 1999, the Post Office began rolling out the “Horizon” computer accounting system, which was developed by ICL, formerly a British company but by then owned by Fujitsu. Subpostmasters soon began complaining that the new system reported shortfalls where none existed. Under their contract, subpostmasters bore all liability for discrepancies. The Post Office accordingly demanded payment and prosecuted those from whom it was not forthcoming. Many lost their businesses, their reputations, their homes, and much of their lives, and some were criminally convicted.

In May 2009, Computer Weekly’s Karl Flinders published the first of dozens of articles on the growing scandal. Perhaps most important: she located seven subpostmasters who were willing to be identified. Soon afterwards, Welsh former subpostmaster Alan Bates convened the Justice for Subpostmasters Alliance, which continues to press for exoneration and compensation for the many hundreds of victims.

Pieces of this saga were known, particularly after a 2015 BBC Panorama documentary. Following the drama’s airing, the UK government is planning legislation to exonerate all the Horizon victims and fast-track compensation. The program has also drawn new attention to the ongoing public inquiry, which…makes the Post Office look so much worse, as do the Panorama team’s revelations of its attempts to suppress the evidence they uncovered. The Metropolitan Police is investigating the Post Office for fraud.

Two elements stand out in this horrifying saga. First: each subpostmaster calling the help line for assistance was told they were the only one having trouble with the system. They were further isolated by being required to sign NDAs. Second: the Post Office insisted that the system was “robust” – that is, “doesn’t make mistakes”. The defendants were doubly screwed; only their accuser had access to the data that could prove their claim that the computer was flawed, and they had no view of the systemic pattern.

It’s extraordinary that the presumption of reliability has persisted this long, since “infallibility” is the claim the banks made when customers began reporting phantom withdrawals years ago, as Ross Anderson discussed in his 1993 paper Why Cryptosystems Fail (PDF). Thirty years later, no one should be trusting any computer system so blindly. Granted, in many cases, doing what the computer says is how you keep your job, but that shouldn’t apply to judges. Or CEOs.

At the Guardian, Alex Hern reports that legal and computer experts have been urging the government to update the law to remove the legal presumption of reliability, especially given the rise of machine learning systems whose probabilistic nature means they don’t behave predictably. We are not yet seeing calls for the imposition of software liability, though the Guardian reports suggestions that if the ongoing public inquiry finds Fujitsu culpable for producing a faulty system, the company should be required to repay the money it was paid for it. The point, experts tell me, is not that product liability would make these companies more willing to admit their mistakes, but that liability would make them and their suppliers more careful to ensure up front the quality of the systems they build and deploy.

The Post Office saga is a perfect example of Anderson’s moral hazard. The Post Office laid off its liability onto the subpostmasters but retained the right to conduct investigations and prosecutions. When the deck is so stacked, you have to expect a collapsed house of cards. And, as Chris Grey writes, the government’s refusal to give UK-resident EU citizens physical proof of status means it’s happening again.

Illustrations: Local post office.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.