Pass the password

So this week I had to provide a new password for an online banking account. This is always fraught, even if you use a password manager. Half the time whatever you pick fails the site’s (often hidden) requirements – you didn’t stand on your head and spit a nickel, or they don’t allow llamas, or some other damn thing. This time, the specified failures startled me: it was too long, and special characters and spaces were not allowed. This is their *new* site, created in 2022.

They would have done well to wait for the just-released proposed password guidelines from the US National Institute of Standards and Technology. Most of these rules should have been standard long ago – like requiring users to change passwords only if there’s a breach. We can, however, hope they will be adopted now and set a consistent standard. To date, the fact that every site has a different set of restrictions means that no secure strategy is universally valid – neither the strong passwords software generates nor the three random words the UK’s National Cyber Security Centre has long recommended.

The banking site we began with fails at least four of the nine proposed new rules: its minimum password length (six) is too short – it should be eight, preferably 15; all Unicode characters and printing ASCII characters, including spaces, should be acceptable (the site bars them); the maximum length should be at least 64 characters (the site caps it at 16); and there should be no composition rules mandating upper and lower case, numerals, and so on (the site has them). At least the site’s explicit rules mean it won’t invisibly truncate the password, leaving you to guess how much of it was actually recorded. On the plus side, a little surprisingly, the site did require me to write my own password hint question. The fact that most sites use the same limited set of questions opens the way for answers reused across the web, effectively undoing the good of not reusing passwords in the first place.
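For developers, the proposed rules boil down to a remarkably short check. Here is a minimal sketch in Python – the length limits follow the draft guidance as described above, the breach-corpus test reflects NIST’s existing advice to screen candidates against known leaks, and the function and variable names are my own invention:

```python
MIN_LEN = 8    # proposed minimum; 15 characters is the recommendation
MAX_LEN = 64   # verifiers should accept at least this many

def acceptable(password: str, breach_corpus: set[str]) -> tuple[bool, str]:
    """Check a password the way the draft NIST guidelines describe:
    length limits only, every Unicode character and space allowed,
    no composition rules, but reject known-breached passwords."""
    if len(password) < MIN_LEN:
        return False, f"shorter than {MIN_LEN} characters"
    if len(password) > MAX_LEN:
        return False, f"longer than {MAX_LEN} characters"
    if password in breach_corpus:
        return False, "appears in a corpus of leaked passwords"
    return True, "ok"  # note the absence of 'must contain a digit' rules

print(acceptable("correct horse battery staple", {"password123"}))
```

What matters is as much what the function does not do: no demands for special characters, no forbidden spaces, no forced expiry.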

This is another of those cases where the future was better in the past: for at least 20 years passwords have been about to be superseded by…something – smart cards, dongles, biometrics, picklists, typing patterns, images, and, lately, passkeys and authenticator apps. All have problems that limit their reach. Single sign-ons are still offered by Facebook, Google, and others, but the privacy risks are (I hope) widely understood. The “this is the next password” crowd has always underestimated the complexity of replacing deeply embedded habits. We all hate passwords – but they are simple to understand, they work across multiple devices, and we’re used to them. Encryption didn’t succeed with mainstream users until it became invisibly embedded inside popular services. I’ll guess that password replacements will only succeed if they are similarly invisible. Most are too complicated.

In all these years, despite some improvements, passwords have only become more frustrating. Most rules are still devised with desktop and laptop computers in mind – and yield passwords that are miserable to type on phones. No one takes a broad view of the many contexts in which we might have to enter passwords. Outmoded threat models are everywhere: decades after the cybercafe era, sites, operating systems, and other software are still written as if shoulder surfing were a primary threat. So we end up with sillinesses like struggling to type masked wifi passwords while sitting in coffee shops where they’re prominently displayed, and masked one-time codes sent for two-factor authentication. So you fire up a text editor to check your typing, and then copy and paste…

Meanwhile, the number of passwords we have to manage continues to escalate. In a recent extended Mastodon thread, digital services expert Jonathan Kamens documented his adventures updating the email addresses attached to 1,200-plus accounts. He explained: using a password manager makes it easy to create and forget new accounts by the hundred.

In a 2017 presentation, Columbia professor Steve Bellovin offered a rethink of all this, much of it drawn from his 2016 book Thinking Security. Most of what we think we know about good password security, he argues, is based on outmoded threat models and assumptions – outmoded because the computing world has changed, but also because attackers have adapted to those security practices. Even the focus on creating *strong* passwords is outdated: password strength is entirely irrelevant to modern phishing attacks, to compromised servers and client hosts, and to attackers who can intercept communications or control a machine at either end (for example, via a keylogger). A lot depends on whether you’re a high-risk individual likely to be targeted personally by a well-resourced attacker, a political target, or, like most of us, a random target for ordinary criminals.

The most important thing, Bellovin says, is not to reuse passwords, so that if a site is cracked the damage is contained to that one site. Securing the email address used to reset passwords is also crucial. To that end, it can be helpful to use a separate and unpublished email address for the most sensitive sites and services – anything to do with money or health, for example.

So NIST’s new rules make some real sense, and if organizations can be persuaded to adopt them consistently all our lives might be a little bit less fraught. NIST is accepting comments through October 7.

Illustrations: XKCD on password strength.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Microsoft can remember it for you wholesale

A new theory: somewhere in the Silicon Valley universe there’s a cadre of techies who have eidetic memories and they’re feeling them start to slip. Panic time.

That’s my best explanation for Microsoft’s latest wheeze: Recall, a new feature for its Copilot assistant that will take what’s variously called a “snapshot” or a “screenshot” of your computer (all three monitors?) every five seconds and store it for future reference. Microsoft hasn’t explained much about Recall’s inner technical workings, but according to the announcement, the data will be stored locally and will be searchable via semantic associations and some sort of “AI”. Microsoft also says the data will not be used to train AI models.

The general anger and dismay at this plan brings back, almost nostalgically, memories of the 1990s, when Microsoft was near-universally hated as the evil monopolist dominating computing. In 2008, when Google was ten years old, a BBC presenter asked me if I thought Google would ever be hated as much as Microsoft was (not then, no). In 2012, veteran journalist Charles Arthur published the book Digital Wars about how Microsoft had stagnated and lost its lead. And then suddenly, in the last few years, it’s back on top.

Possibilities occur that Microsoft doesn’t mention. For example: could software be embedded into Windows to draw inferences from the data Recall saves? And could those inferences be forwarded to the company or used to target you with ads? That seems like a far more efficient way to invade users’ privacy than copying the data itself, if that’s what the company ultimately wants to do.

Lots of things on our computers already retain a “memory” of what we’ve been doing. Operating systems generate logs to help debug problems. Word processors retain a changelog, which powers the ability to undo mistakes. Web browsers have user-configurable histories; email software has archives; media players retain playlists. All of those are useful – but part of that usefulness is that they are contextual, limited, and either easily terminated by closing the relevant application or relatively easily edited to remove items that shouldn’t be kept.

It’s hard for almost everyone who isn’t Microsoft to understand the point of keeping everything by default. It seems like a feature only developers could love. I certainly would like Windows to be better at searching for stored files or my (Firefox) browser to be better at reloading that article I was reading yesterday. I have even longed for a personal version of Vannevar Bush’s Memex. As part of that, I might welcome a feature that let me hit a button to record the last five useful minutes of a meeting, or save a social media post to a local archive. But the key to that sort of memory expansion is curation, not remembering everything promiscuously. For most people, selective forgetting is how we survive the torrents of irrelevance hurled at us every day.

What Recall sounds most like is the lifelog science fiction writer Charlie Stross imagined in 2007 might be our future. Plummeting storage costs and expanding capacity, he reasoned, would make it possible to store *everything* in your pocket. Even then, there were (a very few) people doing that sort of thing, most notably Steve Mann, a University of Toronto professor who started wearing devices to comprehensively capture his life as a 1990s graduate student. Over the years, Mann has shrunk his personal gadget array from a laptop and peripherals to glasses and pocket devices. Many more people capture their surroundings now – but they do it on their phones. If Apple or Google were proposing a Recall feature for iOS or Android, the idea would seem a lot less weird.

The real issue is that there are many people who would like to be able to know what someone *else* has been doing on their computer at all times. Helicopter parents. Schools and teachers under government compulsion (see for example Prevent (PDF)). Employers. Border guards. Corporate spies. The Department for Work and Pensions. Authoritarian governments. Law enforcement and security agencies. Criminals. Domestic abusers… So developing any feature like this must include considering how to protect it against these threats. This does not appear to have happened.

Many others have written about the privacy issues in all this – the UK’s Information Commissioner’s Office is already investigating. At The Register, Richard Speed does a particularly good job of looking at some of the fine details. On Mastodon, Kevin Beaumont says inspection of the Copilot+ software suggests that Recall stores the text it extracts from all those snapshots in an easily copiable SQLite database.
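To see why that matters, consider how little code it would take to rummage through such a file. A purely illustrative sketch in Python – the database path, table, and column names are invented, since Microsoft has not published a schema; Beaumont’s point is simply that the extracted text would sit in an ordinary file that any process running as the logged-in user could copy and query:

```python
import sqlite3

def dump_snippets(db_path: str, term: str) -> None:
    """Search a (hypothetical) Recall-style text archive. The
    'snapshots' table and its columns are invented for illustration."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT captured_at, text FROM snapshots WHERE text LIKE ?",
            (f"%{term}%",),
        )
        for captured_at, text in rows:
            print(captured_at, text[:80])
    finally:
        con.close()

dump_snippets("recall.db", "password")  # hypothetical file name
```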

But there’s still more. The kind of archive Recall appears to construct can teach an attacker how the target thinks: not just what passwords they choose but how they devise them. Those patterns can be highly valuable. Granted, few targets are worth that level of attention, but it happens, as Peter Davies, a technical director at Thales e-Security, has often warned.

Recall is not the only move – see also flawed-AI-with-everything – that suggests that the computer industry, like some politicians and governments, is badly losing touch with the public. Increasingly, what they want to do seems unrelated to what the rest of us want. If they think things like Recall are a good idea they need to read more Philip K. Dick. And then don’t invent the Torment Nexus.

Illustrations: Arnold Schwarzenegger seeking better memories in the 1990 film Total Recall.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Core values

Follow the money; follow the incentives.

Cybersecurity is an intractable problem for many of the same reasons climate change is: often the people paying the cost are not the people who derive the benefits. The foundation of the Workshop on the Economics of Information Security is often traced to the 2001 paper Why Information Security is Hard, by the late Ross Anderson. There were earlier hints, most notably in the 1999 paper Users Are Not the Enemy by Angela Sasse and Anne Adams.

Anderson’s paper directly examined and highlighted the influence of incentives on security behavior. Sasse’s paper was ostensibly about password policies and the need to consider human factors in designing them. But hidden underneath was the fact that the company department that called her in was not the IT team or the help desk team but accounting. Help desk costs to support users who forgot their passwords were rising so fast they threatened to swamp the company.

At the 23rd WEIS, held this week in Dallas (see also 2020), papers studied questions like which values drive people’s decisions when hit by ransomware attacks (Zinaida Benenson); whether the psychological phenomenon of delay discounting could be used to understand the security choices people make (Einar Snekkenes); and whether a labeling scheme would help get people to pay for security (L Jean Camp).

The latter study found that if you keep the label simple, people will actually pay for security. It’s a seemingly small but important point: throughout the history of personal computing, security competes with so many other imperatives that it’s rarely a factor in purchasing decisions. Among those other imperatives: cost, convenience, compatibility with others, and ease of use. But also: it remains near-impossible to evaluate how secure a product or provider is. Only the largest companies are in a position to ask detailed questions of cloud providers, for example.

Or, in an example provided by Chitra Marti, rare is the patient who can choose a hospital based on the security arrangements it has in place to protect its data. Marti asked a question I haven’t seen before: what is the role of market concentration in cybersecurity? To get at this, Marti looked at the decade’s experience of electronic medical records in hospitals since the big post-2008 recession push to digitize. Since 2010, more than 150 million records have been breached.

Of course, monoculture is a known problem in cybersecurity as it is in agriculture: if every machine runs the same software, all machines are vulnerable to the same attacks. Similarly, the downsides of monopoly – poorer service, higher prices, lower quality – are well known. Marti’s study tying the two together found correlations in the software hospitals run and rarely change, even after a breach – though they do adopt new security measures. Hospitals choose software vendors for all sorts of reasons, such as popularity, widespread use in their locality, or market leadership. The difficulty of deciding to change may be exacerbated because the benefits of the existing choice would be lost in a switch and are felt to outweigh the negatives.

These broader incentives help explain, as Richard Clayton set out, why distributed denial of service attacks remain so intractable. A key problem is “reflectors”: services that amplify attacks when requests sent from spoofed IP addresses elicit responses that dwarf the requests. With this technique, a modest amount of outgoing traffic lands a flood on the chosen target (the machine whose IP address has been spoofed). Fixing infrastructure to prevent these reflectors is tedious and only prevents damage to others. Plus, the provider involved may have to sacrifice the money it is paid to carry the traffic. For reasons like these, over the years the size of DDoS attacks has grown until only the largest anti-DDoS providers can cope with them. These realities are also why the early effort to push providers to fix their systems – RFC 2267 – failed. The incentives, in classic WEIS terms, are misaligned.
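The arithmetic explains the attraction. A back-of-the-envelope sketch in Python, with illustrative figures (US-CERT has published bandwidth amplification factors of up to roughly 54x for DNS and far higher for NTP):

```python
def reflected_flood(request_bytes: int, response_bytes: int,
                    attacker_bps: float) -> tuple[float, float]:
    """Return the bandwidth amplification factor and the flood that
    lands on the victim whose IP address was spoofed."""
    factor = response_bytes / request_bytes
    return factor, attacker_bps * factor

# Illustrative numbers: a ~60-byte DNS query can elicit a response
# of ~3,000 bytes when the answer is large.
factor, flood = reflected_flood(60, 3000, 100e6)  # 100 Mbit/s outgoing
print(f"{factor:.0f}x amplification -> {flood / 1e9:.1f} Gbit/s at the victim")
```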

Clayton was able to use the traffic data he was already collecting to create a short list of the largest reflected amplified DDoS attacks each week and post it on a private Slack channel so providers could inspect their logs and trace attacks back to the source.

At this point a surprising thing happened: the effort made a difference. Reflected amplified attacks dropped noticeably. The reasons, he and Ben Collier argue in their paper, have to do with the social connections among network engineers, the most senior of whom helped connect the early Internet and have decades-old personal relationships with their peers, sustained through forums such as NANOG and M3AAWG. This social capital and shared set of values kicked in when Clayton’s action lists moved the problem from abuse teams into the purview of network engineers. Individual engineers began racing ahead; Amazon recently highlighted AWS engineer Tom Scholl’s work tracing back traffic and getting attacks stopped.

Clayton concluded by proposing “infrastructural capital” to cover the mix of human relationships and the position in the infrastructure that makes them matter. It’s a reminder that underneath those giant technology companies there still lurks the older ethos on which the Internet was founded, and humans whose incentives are entirely different from profit-making. And also: that sometimes intractable problems can be made less intractable.

Illustrations: WEIS waits for the eclipse.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

RIP Ross J. Anderson

RIP Ross J. Anderson, who died on March 28 at the age of 67 and leaves behind a smoking giant crater in the fields of security engineering and digital rights activism. For the former, he was a professor of security engineering at Cambridge University and Edinburgh University, a Fellow of the Royal Society, and a recipient of the BCS Lovelace medal. His giant textbook Security Engineering is a classic. In digital rights activism, he founded the Foundation for Information Policy Research (see also tenth anniversary and 15th anniversary) and the UK Crypto mailing list, and, understanding that the important technology laws were being made at the EU level, he pushed for the formation of European Digital Rights as an umbrella organization for the national digital rights organizations springing up in many countries. He was also one of the pioneers of security economics, founding the annual Workshop on the Economics of Information Security, which convenes on April 8 for the 23rd time.

One reason Anderson was so effective in the area of digital rights is that he had the ability to look forward and see the next challenge while it was still forming. Even more important, he had an extraordinary ability to explain complex concepts in an understandable manner. You can experience this for yourself at the YouTube channel where he posted a series of lectures on security engineering or by reading any of the massive list of papers available at his home page.

He had a passionate and deep-seated sense of injustice. In the 1980s and 1990s, when customers complained about phantom ATM withdrawals and the banks tried to claim their software was infallible, he not only conducted a detailed study but adopted fraud in payment systems as an ongoing research interest.

He was a crucial figure in the fight over encryption policy, opposing key escrow in the 1990s and “lawful access” in the 2020s, for the same reasons: the laws of mathematics say that there is no such thing as a hole only good guys can exploit. His name is on many key research papers in this area.

In the days since his death, numerous former students and activists have come forward with stories of his generosity and wit, his eternal curiosity to learn new things, and the breadth and depth of his knowledge. And also: the forthright manner that made him cantankerous.

I think I first encountered Ross at the 1990s Scrambling for Safety events organized by Privacy International. He was slow to trust journalists, shall we say, and it was ten years before I felt he’d accepted me. The turning point was a conference where we both arrived at the lunch counter at the same time. I happened to be out of cash, and he bought me a sandwich.

Privately, in those early days I sometimes referred to him as the “mad cryptographer” because interviews with him often led to what seemed like off-the-wall digressions. One such led to an improbable story about the US Embassy in Moscow being targeted with microwaves in an attempt at espionage. This, I found later, was true. Still, I felt it best not to quote it in the interests of getting people to listen to what he was saying about crypto policy.

My favorite Ross memory, though, is this: one night shortly before Christmas maybe ten years ago – by this time we were friends – when I interviewed him over Skype for yet another piece. It was late in Britain, and I’m not sure he was fully sober. Before he would talk about security, knowing of my interest in folk music, he insisted on playing several tunes on the Scottish chamber pipes. He played well. Pipe music was another of his consuming interests, and he brought to it as much intensity and scholarship as he did to all his other passions.

Ross J. Anderson, b. September 15, 1956, d. March 28, 2024.

Anachronistics

“In my mind, computers and the Internet arrived at the same time,” my twenty-something companion said, delivering an entire mindset education in one sentence.

Just a minute or two earlier, she had asked in some surprise, “Did bulletin board systems predate the Internet?” Well, yes: BBSs were a software package running on a single back room computer with a modem users dialed into, whereas the Internet is this giant sprawling mess of millions of computers connected together…simple first, complex later.

Her confusion is understandable: from her perspective, computers and the Internet did arrive at the same time, since her first conscious encounters with them were simultaneous.

But still, speaking as someone who first programmed a (mainframe, with punch cards) computer in 1972 as a student, who got her first personal computer in 1982, and got online in 1991 by modem and 1999 by broadband and to whom the sequence of events is memorable: wow.

A 25-year-old today was born in 1999 (the year I got broadband). Her counterpart 15 years hence (born 2014, the year a smartphone replaced my personal digital assistant) may think smartphones and the Internet were simultaneous. And sometime around 2045 *her* counterpart, born in 2020 (two years before ChatGPT was released), might think generative text and image systems were contemporaneous with the first computers.

I think this confusion must have something to do with the speed of change in a relatively narrow sector. I’m sure that even though they all entered my life simultaneously, by the time I was 25 I knew that radio preceded TV (because my parents grew up with radio), that bicycles preceded cars, and that handwritten manuscripts predated printed books (because medieval manuscripts). But those transitions played out over multiple lifetimes, if not centuries, and not all those memories were personal. Few of us reminisce about the mainframes of the 1960s because most of us didn’t have access to them.

And yet, misremembering the timeline of those earlier technologies probably matters less than misunderstanding the sequence of events in information technology. Jumbling the arrival dates of its pieces means failing to understand the dependencies. What currently passes for “AI” could not exist without the ability to train models on the giant piles of data that the Internet and the web made possible – and those took 20 years to build. Pioneers like Geoff Hinton worked out key ideas behind today’s neural networks as long ago as the 1980s, but it took until the last decade for them to become workable, because that’s how long it took to build sufficiently powerful computers and amass enough training data. How do you understand the ongoing battle between those who wish to protect privacy via data protection laws and those who want data to flow freely without hindrance if you do not understand what those masses of data are important for?

This isn’t the only such issue. A surprising number of people who should know better seem to believe that the solution to all our ills with social media is to destroy Section 230, apparently reasoning that if S230 allowed Big Tech to get big, it must be wrong. In reality, it also allows small sites to exist, and it is the legal framework that makes content moderation possible. Improve it by all means, but understand its true purpose first.

Reviewing movies and futurist projections such as Vannevar Bush’s 1945 essay As We May Think (PDF) and Alan Turing’s 1950 paper Computing Machinery and Intelligence (PDF) doesn’t really help, because so many ideas arrive long before they’re feasible. The crew of the original 1966 Star Trek series (to say nothing of secret agent Maxwell Smart in 1965) talked over wireless personal communicators. A decade earlier, Arthur C. Clarke (in The Nine Billion Names of God) and Isaac Asimov (in The Last Question) were putting computers – albeit analog ones – in their stories. Asimov in particular imagined a sequence that now looks prescient, beginning with something like a mainframe, moving on to microcomputers, and finishing with a vast, fully interconnected network that can only be held in hyperspace. (OK, it took trillions of years, starting in 2061, but still…) Those writings undoubtedly inspired the technologists of the last 50 years when they decided what to invent.

This all led us to fakes: as the technology to create fake videos, images, and texts continues to improve, she wondered if we will ever be able to keep up. Just about every journalism site is asking some version of that question; they’re all awash in stories about new levels of fakery. My 25-year-old discussant believes the fakes will always be improving faster than our methods of detection – an arms race like computer security, to which I’ve compared problems of misinformation / disinformation before.

I’m more optimistic. I bet that even a few years from now today’s versions of generative “AI” will look as primitive to us as the special effects in a 1963 episode of Doctor Who or the magic lantern used to create the Knock apparitions do to generations raised on movies, TV, and computer-generated imagery. Humans are adaptable; we will find ways to identify what is authentic that aren’t obvious in the shock of the new. We might even go back to arguing in pubs.

Illustrations: Secret agent Maxwell Smart (Don Adams) talking on his shoe phone (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Nefarious

Torrentfreak is reporting that OCLC, owner of the WorldCat database of bibliographic records, is suing the “shadow library” search engine Anna’s Archive. The claim: that Anna’s Archive hacked into WorldCat, copied 2.2TB of records, and posted them publicly.

Shadow libraries are the text version of “pirate” sites. The best-known is probably Sci-Hub, which provides free access to hundreds of thousands of articles from (typically expensive) scientific journals. Others such as Library Genesis and sites on the dark web offer ebooks. Anna’s Archive indexes as many of these collections as it can find; it was set up in November 2022, shortly after the web domains belonging to the then-largest of these book libraries, Z-Library, were seized by the US Department of Justice. Z-Library has since been rebuilt on the dark web, though it remains under attack by publishers and law enforcement.

Anna’s Archive also includes some links to the unquestionably legal and long-running Gutenberg Project, which publishes titles in the public domain in a wide variety of formats.

The OCLC-Anna’s Archive case has a number of familiar elements, variants of long-running themes, open versus gatekept being the most prominent. Like many such sites (post-Napster), Anna’s Archive does not host files itself. That’s no protection from the law; authorities in various countries have nonetheless blocked or seized the domains belonging to such sites. But OCLC is not a publisher or rights holder, although it takes large swipes at Anna’s Archive for lawlessness and copyright infringement. Instead, it complains, first, that Anna’s Archive hacked WorldCat, violating its terms and conditions, disrupting its business arrangements, and costing it $1.4 million and 10,000 employee hours in system remediation; and second, that Anna’s Archive has posted the data in the aggregate for public download and is “actively encouraging nefarious use of the data”. Other than the use of “nefarious”, there seems little to dispute about either claim; Anna’s Archive published the details in an October 2023 blog posting.

Anna’s Archive describes this process as “preserving” the world’s books for public access. OCLC describes it as “tortious interference” with its business. It wants the court to issue injunctive relief to make the scraping and use of the data stop, compensatory damages in excess of $75,000, punitive damages, costs, and whatever else the court sees fit. The sole named defendant is a US citizen, María A. Matienzo, thought to be resident near Seattle. If the identification and location are correct, that’s a high-risk situation to be in.

In the blog posting, Anna’s Archive writes that its initial goal was to answer the question of what percentage of the world’s published books are held in shadow libraries, and to create a to-do list of gaps to fill. To answer these questions, they began by scraping ISBNdb, the database of publications with ISBNs, which only came into use in 1970. When the overlap with the Internet Archive’s Open Library and the seized Z-Library was less than they hoped, they turned to WorldCat. At that point, they openly say, security flaws in the fortuitously redesigned WorldCat website allowed them to grab more or less the comprehensive set of records. While scraping can be legal, exploiting security flaws to gain unauthorized access to a computer system likely violates the widely criticized Computer Fraud and Abuse Act (1986), which could make it a felony. OCLC has, however, brought a civil case.

Anna’s Archive also searches the Internet Archive’s Open Library, founded in 2006. In 2009, co-creator Aaron Swartz told me that he believed the creation of Open Library pushed OCLC into opening up greater public access to the basic tier of its bibliographic data. The Open Library currently has its own legal troubles; it lost in court in August 2023 after Hachette sued it for copyright infringement. The Internet Archive is appealing; in the meantime it is required to remove, on request of any member of the Association of American Publishers, any book commercially available in electronic format.

OCLC began life as the Ohio College Library Center; its WorldCat database is a collaboration between it and its member libraries to create a shared database of bibliographic records and enable online cataloguing. The last time I wrote about it, in 2009, critics were complaining that libraries in general were failing to bring book data onto the open web. Things have gotten a lot better in the years since: many local libraries are now searchable online and enable their card holders to borrow ebooks from their holdings over the web.

The fact that it’s now often possible to borrow ebooks from libraries should mean there’s less reason to use unauthorized sites. Nonetheless, these still appeal: they have the largest catalogues, the most convenient access, DRM-free files, and no time limits, so you can read them at your leisure using the full-featured reader you prefer.

In my 2009 piece, an OCLC spokesperson fretted about “over-exploitation”, arguing there would be no good way to maintain or update countless unknown scattered pockets of data – seemingly a solvable problem.

OCLC and its member libraries are all non-profit organizations ultimately funded by taxpayers. The data they collect has one overriding purpose: to facilitate public access to libraries’ holdings by showing who holds what books in which editions. What are “nefarious” uses? Arguably, the data they collect should be public by right. But that’s not the question the courts will decide.

Illustrations: The New York Public Library, built 1911 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Trust busted

It’s hard not to be agog at the ongoing troubles of Boeing. The covid-19 pandemic was the first break in our faith in the ready availability of travel; this is the second. As one of only two global manufacturers of commercial airplanes, Boeing’s problems are the industry’s problems.

I’ve often heard cybersecurity researchers talk with envy of the aviation industry. Usually, it’s because access to data is a perennial problem; many companies don’t want to admit they’ve been hacked or talk about it when they do. By contrast, they’ve said, the aviation industry recognized early that convincing people flying was safe was crucial to everyone’s success and every crash hurt everyone’s prospects, not just those of the competitor whose plane went down. The result was industry-wide adoption of strategies designed to maximize collaboration across the industry to improve safety: data sharing, no-fault reporting, and so on. That hard-won public trust has, we now see, allowed modern industry players to coast on their past reputations. With this added irony: while airlines and governments everywhere have focused on deterring terrorists, the risks are coming from within the industry.

I’m the right age to have rarely worried about aviation safety – young enough to have missed the crashes of the early years, old enough that my first flights came in childhood. Isaac Asimov, born in 1920, said he refused to fly because passengers didn’t have a “sporting” chance of survival in a crash; he was actually wrong, as the survival rate for airplane crashes is over 90%. Many people feel safer when they feel in control. Yet, as Bruce Schneier has frequently said, you’re at greater risk on the drive to the airport than you are on the plane.

In fact, it’s an extraordinary privilege that most of us worry more about delays, lost luggage, bad food, and cramped seating than whether our flight will land safely. The 2018 crash of a Boeing 737 MAX 8 did little to dislodge this general sense of safety, even though 189 people died, and the same was true following the 2019 crash of the same model, which killed another 157 people. Boeing tried to sell the idea that the cause was inadequately trained pilots working for substandard (read: not American or European) airlines, but the reality quickly became plain: the company had skimped on testing and training, and its famed safety-first, engineering-led culture had disintegrated under pressure to reward shareholders and executives.

We were able to tell ourselves that it was one model of plane, and that changes followed, as Bloomberg investigative reporter Peter Robison documents in Flying Blind: The 737 MAX Tragedy and the Fall of Boeing. In particular, in 2020 the US Congress undid the legal change that had let Boeing self-certify and restored the Federal Aviation Administration’s obligation of direct oversight, some executives were replaced, and a test pilot was criminally charged. However, Robison wrote for publication in 2021, many inside the industry, not just at Boeing, thought the FAA’s 20-month grounding of the MAX was “an overreaction”. You might think – as I did – that the airlines themselves would be strongly motivated not to fly planes that could put their safety record at risk, but Robison’s reporting is not comforting about that: the MAX, he writes, is “a moneymaker” for the airlines in that it saves 15% on fuel costs per flight.

Still, the problem seemed to be confined to one model of plane. Until, on January 5, the door plug blew out of a 737 MAX 9. A day later, the FAA grounded all planes of that model for safety inspections.

On January 13, a crack was found in a cockpit window of a 737-800 in Japan. On January 19, a cargo 747-8 caught fire leaving Miami. On January 24, Alaska Airlines reported finding many loose bolts during its fleetwide inspection of 737 MAX 9s. The same week, the nose wheel fell off a 757 departing Atlanta. Near-simultaneously, the Seattle Times reported that Boeing itself, not its supplier Spirit AeroSystems, had installed the door plug that blew out. The online booking agent and price comparison site Kayak announced that increasing use of its aircraft-specific filter had led it to add separate options to avoid 737 MAX 8s and 9s.

The consensus that formed about the source of the troubles that led to the 2018-2019 crashes is holding: blame focuses on the change in company culture brought by the 1997 merger with McDonnell Douglas, valuing profits and shareholder payouts over engineering. Boeing is in for a period of self-reinvention in which its output will be greatly slowed. As airlines’ current fleets age, this will have to mean reduced capacity; there are only two major aircraft manufacturers in the world, and the other one – Airbus – is fully booked.

As Cory Doctorow writes, that’s only one constraint going forward, at least in the US: there aren’t enough pilots, air traffic controllers, or engine manufacturers. Anti-monopolist Matt Stoller proposes to nationalize and then break up Boeing, arguing that its size and importance mean only the state can backstop its failures. Ten years ago, when the US’s four big legacy airlines consolidated to three, it was easy to think passengers would pay in fees and lost comfort; now we know safety was on the line, too.

Illustrations: The Wright Brothers’ first heavier-than-air flight, in 1903 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Relativity

“Status: closed,” the website read. It gave the time as 10:30 p.m.

Except it wasn’t. It was 5:30 p.m., and the store was very much open. The website, instead of consulting the time zone of the store – I mean, of the store’s particular branch whose hours and address I had looked up – was taking the time from my laptop. Which I hadn’t bothered to switch from Britain to the US east coast, because I can subtract five hours in my head, so why bother?
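Getting this right is not hard: the branch, not the visitor’s laptop, owns the clock that matters. A minimal sketch in Python, with invented opening hours and an assumed US east coast branch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

def store_is_open(open_hour: int = 9, close_hour: int = 21,
                  store_tz: str = "America/New_York") -> bool:
    """Decide open/closed in the store's own time zone, ignoring
    whatever the visitor's device clock happens to say."""
    now_at_store = datetime.now(ZoneInfo(store_tz))
    return open_hour <= now_at_store.hour < close_hour

# A laptop still set to London time gets the same, correct answer.
print("open now?", store_is_open())
```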

Years ago, I remember writing a rant (which I now cannot find) about the “myness” of modern computers: My Computer, My Documents. My account. And so on, like a demented two-year-old who needed to learn to share. The notion that the time on my laptop determined whether or not the store was open had something of the same feel: the computational universe I inhabit is designed to revolve around me, and any dispute with reality is someone else’s problem.

Modern social media have hardened this approach. I say “modern” because back in the days of bulletin board systems, information services, and Usenet, postings were stamped with the date and time they were sent, time zone included. Now, every post is labelled “2m” or “30s” or “1d”, so the actual date and time hide behind their relationship to “now”. It’s like those maps that rotate along with you so that wherever you’re pointed physically is at the top. I guess it works for some people, but I find it disorienting; instead of the map orienting itself to me, I want to orient myself to the map. This seems to me my proper (infinitesimal) place in the universe.
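The relative labels are not even simpler to produce; they take more code than printing the stamp itself. A small sketch of the two styles, with cutoffs of my own choosing:

```python
from datetime import datetime, timezone

def relative_age(posted: datetime) -> str:
    """The '2m'/'30s'/'1d' style label that hides the actual time."""
    secs = int((datetime.now(timezone.utc) - posted).total_seconds())
    for unit, size in (("d", 86400), ("h", 3600), ("m", 60), ("s", 1)):
        if secs >= size:
            return f"{secs // size}{unit}"
    return "now"

posted = datetime(2024, 1, 5, 22, 30, tzinfo=timezone.utc)
print(relative_age(posted))   # e.g. '152d' -- the reader does the math
print(posted.isoformat())     # 2024-01-05T22:30:00+00:00 -- self-contained
```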

All of this leads up to the revival of software agents. This was a Big Idea in the late 1990s/early 2000s, when it was commonplace to think that the era of having to make appointments and book train tickets was almost over. Instead, software agents configured with your preferences would do the negotiating for you. Discussions of this sort of thing died away as the technology never arrived. Generative AI has brought this idea back, at least to some extent, particularly in the financial area, where smart contracts can be used to set rules and then run automatically. I think only people who never have to worry about being able to afford anything will like this. But they may be the only ones the “market” cares about.

Somewhere during the time when software agents were originally mooted, I happened to sit at a conference dinner with the University of Maryland human-computer interaction expert Ben Shneiderman. There are, he said, two distinct schools of thought in software. In one, software is meant to adapt to the human using it – think of predictive text and smartphones as an example. In the other, software is consistent, and while using it may be repetitive, you always know that x command or action will produce y result. If I remember correctly, both Shneiderman and I were of the “want consistency” school.

Philosophically, though, these twin approaches have something in common with seeing the universe as if the sun went around the earth as against the earth going around the sun. The first of those makes our planet and, by extension, us far more important in the universe than we really are. The second cuts us down to size. No surprise, then, if the techbros who build these things, like the Catholic church in Galileo’s day, prefer the former.

***

Politico has started the year by warning that the UK is seeking to expand its surveillance regime even further by amending the 2016 Investigatory Powers Act. In the run-up to Christmas, largely unnoticed, the industry body techUK sent a letter to “express our concerns”. The short version: the bill expands the definition of “telecommunications operator” to include non-UK providers when operating outside the UK; it allows the Home Office to require companies to seek permission before making changes to a privately and uniquely specified list of services; and the government wants to whip it through Parliament as fast as possible.

No, no, Politico reports the Home Office told the House of Lords, it supports innovation and isn’t threatening encryption. These are minor technical changes. But: “public safety”. With the ink barely dry on the Online Safety Act, here we go again.

***

As data breaches go, the one recently reported by 23andMe is alarming. By using passwords exposed in previous breaches (“credential stuffing”) to break into 14,000 accounts, attackers gained access to 6.9 million account profiles. The mechanism is reminiscent of the Cambridge Analytica scandal, in which access to a few hundred thousand Facebook accounts was leveraged to obtain the data of millions: people had turned on “DNA Relatives” to allow themselves to be found by those searching for genetic relatives. The company, which afterwards began requiring two-factor authentication, is fending off dozens of lawsuits by blaming the users for reusing passwords. According to Gizmodo, the legal messiness is considerable, as the company recently changed its terms and conditions to make arbitration more difficult and litigation almost impossible.
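Credential stuffing works because breached passwords get reused, and the standard countermeasure is to check candidates against breach corpora at signup and login. Have I Been Pwned’s Pwned Passwords service exposes exactly this through a k-anonymity range API; a sketch, with error handling omitted:

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Look up a password in Have I Been Pwned's Pwned Passwords
    corpus. Only the first five hex characters of the SHA-1 hash
    leave the machine (k-anonymity), never the password itself."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_pwned("password123"))  # a reused password scores very high
```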

There’s nothing good to say about a data breach like this or a company that handles such sensitive data with such disdain. But it’s yet one more reason why putting yourself at the center of the universe is bad hoodoo.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Surveillance machines on wheels

After much wrangling, and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that, given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask: “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect – especially the cars’ telematics – with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox’s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Snowden at ten

As almost every media outlet has headlined this week, it is now ten years since Edward Snowden alerted the world to the real capabilities of the spy agencies, chiefly but not solely the US National Security Agency. What is the state of surveillance now? most of the stories ask.

Some samples: at the Open Rights Group executive director Jim Killock summarizes what Snowden revealed; Snowden is interviewed; the Guardian’s editor at the time, Alan Rusbridger, recounts events at the Guardian, which co-published Snowden’s discoveries with the Washington Post; journalist Heather Brooke warns of the increasing sneakiness of government surveillance; and Jessica Lyons Hardcastle outlines the impact. Finally, at The Atlantic, Ewen MacAskill, one of the Guardian journalists who worked on the Snowden stories, says only about 1% of Snowden’s documents were ever published.

As has been noted here recently, it seems as though everywhere you look surveillance is on the rise: at work, on privately controlled public streets, and everywhere online by both government and commercial actors. As Brooke writes and the Open Rights Group has frequently warned, surveillance that undermines the technical protections we rely on puts us all in danger.

The UK went on to pass the Investigatory Powers Act, which basically legalized what the security services were doing, but at least did add some oversight. US courts found that the NSA had acted illegally and in 2015 Congress made bulk collection of Americans’ phone records illegal. But, as Bruce Schneier has noted, Snowden’s cache of documents was aging even in 2013; now they’re just old. We have no idea what the secret services are doing now.

The impact in Europe was significant: in 2016 the EU adopted the General Data Protection Regulation. Until Snowden, data protection reform looked like it might wind up watering down data protection law in response to an unprecedented amount of lobbying by the technology companies. Snowden’s revelations raised the level of distrust and also gave Max Schrems some additional fuel in bringing his legal actions against EU-US data deals and US corporate practices that leave EU citizens open to NSA snooping.

The really interesting question is this: what have we done *technically* in the last decade to limit government’s ability to spy on us at will?

Work on this started almost immediately. In early 2014, the World Wide Web Consortium and the Internet Engineering Task Force teamed up on a workshop called Strengthening the Internet Against Pervasive Monitoring (STRINT). Observing the proceedings led me to compare the size of the task ahead to boiling the ocean. The mood of the workshop was united: the NSA’s actions as outlined by Snowden constituted an attack on the Internet and everyone’s privacy, a view codified in RFC 7258, which outlined the plan to mitigate pervasive monitoring. The workshop also published an official report.

Digression for non-techies: “RFC” stands for “Request for Comments”. The thousands of RFCs since 1969 include technical specifications for Internet protocols, applications, services, and policies. The title conveys the process: they are published first as drafts and incorporate comments before being finalized.

The crucial point is that the discussion was about *passive* monitoring, the automatic, ubiquitous, and suspicionless collection of Internet data “just in case”. As has been said so many times about backdoors in encryption, the consequence of poking holes in security is to make everyone much more vulnerable to attacks by criminals and other bad actors.

So a lot of that workshop was about finding ways to make passive monitoring harder. Obviously, one method is to eliminate vulnerabilities, especially those the NSA planted. But it’s equally effective to make monitoring more expensive. Given the truly large numbers involved, even a tiny extra cost per user adds up to unaffordable friction for the watcher. They called it a ten-year project, which takes us to…almost now.

Some things have definitely improved, largely through the expanded use of encryption to protect data in transit. On the web, Let’s Encrypt, now ten years old, makes it easy and cheap to obtain a certificate for any website. Search engines contribute by favoring encrypted (that is, HTTPS) web links over unencrypted ones (HTTP). Traffic between email servers has gone from being transmitted in cleartext to being almost all encrypted. Mainstream services like WhatsApp have added end-to-end encryption to the messaging used by billions. Other efforts have sought to reduce the use of fixed long-term identifiers such as MAC addresses that can make tracking individuals easier.
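Some of that progress is visible from any desktop. A short Python sketch that opens a TLS connection and reports who issued the site’s certificate – for a large share of the web today the answer is Let’s Encrypt (the host name here is just an example):

```python
import socket
import ssl

def cert_issuer(host: str, port: int = 443) -> str:
    """Open a validated TLS connection and report the certificate
    issuer's organization name."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issuer = dict(pair[0] for pair in cert["issuer"])
    return issuer.get("organizationName", "unknown")

print(cert_issuer("example.com"))
```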

At the same time, even where there are data protection laws, corporate surveillance has expanded dramatically. And, as has long been obvious, governments, especially democratic governments, have little motivation to stop it. Data collection by corporate third parties does not appear in the public budget, does not expose the government to public outrage, and is available via subpoena any time government officials want. If you are a law enforcement or security service person, this is all win-win; the only data you can’t get is the data that isn’t collected.

In a draft essay reporting on the results of the work STRINT began – part of the ten-year assessment now circulating – STRINT convenor Stephen Farrell writes: “So while we got a lot right in our reaction to Snowden’s revelations, currently, we have a ‘worse’ Internet.”

Illustrations: Edward Snowden, speaking to Glenn Greenwald in a screenshot from Laura Poitras’ film Prism from Praxis Films (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.