Pass the password

So this week I had to provide a new password for an online banking account. This is always fraught, even if you use a password manager. Half the time whatever you pick fails the site’s (often hidden) requirements – you didn’t stand on your head and spit a nickel, or they don’t allow llamas, or some other damn thing. This time, the specified failures startled me: it was too long, and special characters and spaces were not allowed. This is their *new* site, created in 2022.

They would have done well to wait for the just-released proposed password guidelines from the US National Institute of Standards and Technology (NIST). Most of these rules should have been standard long ago – like only requiring users to change passwords if there’s a breach. We can, however, hope they will be adopted now and set a consistent standard. To date, the fact that everyone has a different set of restrictions means that no secure strategy is universally valid – neither the strong passwords software generates nor the three random words the UK’s National Cyber Security Centre has long recommended.

The banking site we began with fails at least four of the nine proposed new rules: the minimum password length (six) is too short – it should be eight, preferably 15; all Unicode characters and printing ASCII characters should be acceptable, including spaces; the maximum length should be at least 64 characters (the site says 16); and there should be no composition rules mandating upper and lower case, numerals, and so on (which the site does). At least the site’s explicit limit means it won’t invisibly truncate the password, a practice that makes it impossible to tell how much of what you typed was actually recorded. On the plus side, a little surprisingly, the site did require me to choose my own password hint question. The fact that most sites use the same limited set of questions opens the way for reused answers across the web, effectively undoing the good of not reusing passwords in the first place.
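
Purely to make those rules concrete, here is a minimal sketch of what a password check following them might look like. The function name, the exact bounds, and the character test are my reading of the proposals as summarized above, not anything NIST publishes.

```python
# A minimal sketch of a password check following the proposed NIST rules as
# summarized above. The bounds and names here are assumptions for illustration,
# not official NIST reference code.

def check_password(password: str) -> list[str]:
    """Return the reasons a password fails; an empty list means it passes."""
    problems = []

    # Minimum length: 8 characters required, 15 recommended.
    if len(password) < 8:
        problems.append("shorter than 8 characters")

    # Maximum length: at least 64 characters must be accepted,
    # and the password must never be silently truncated.
    if len(password) > 64:
        problems.append("longer than the 64 characters this sketch accepts")

    # All printing ASCII and Unicode characters, including spaces, are fine;
    # the only thing to reject is non-printing control characters.
    if any(not ch.isprintable() and ch != " " for ch in password):
        problems.append("contains non-printing control characters")

    # Deliberately absent: composition rules (mandatory mixed case, digits,
    # special characters) and forced periodic password changes.
    return problems


if __name__ == "__main__":
    print(check_password("correct horse battery staple"))  # [] – passes
    print(check_password("llama7"))                        # too short
```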

This is another of those cases where the future was better in the past: for at least 20 years passwords have been about to be superseded by…something – smart cards, dongles, biometrics, picklists and typing patterns, images, and, lately, passkeys and authenticator apps. All have problems that limit their reach. Single sign-ons are still offered by Facebook, Google, and others, but the privacy risks are (I hope) widely understood. The “this is the next password” crowd have always underestimated the complexity of replacing deeply embedded habits. We all hate passwords – but they are simple to understand and work across multiple devices. And we’re used to them. Encryption didn’t succeed with mainstream users until it became invisibly embedded inside popular services. I’ll guess that password replacements will only succeed if they are similarly invisible. Most are too complicated.

In all these years, despite some improvements, passwords have only gotten more frustrating. Most rules are still devised with desktop/laptop computers in mind – and yield passwords that are impossible to type on phones. No one takes a broad view of the many contexts in which we might have to enter passwords, and outmoded threat models are everywhere: decades after the cybercafe era, sites, operating systems, and other software are still coded as if shoulder surfing were a primary danger. So we end up with sillinesses like struggling to type masked wifi passwords while sitting in coffee shops where they’re prominently displayed, and masked one-time codes sent for two-factor authentication. So you fire up a text editor to check your typing and then copy and paste…

Meanwhile, the number of passwords we have to manage continues to escalate. In a recent extended Mastodon thread, digital services expert Jonathan Kamens documented his adventures updating the email addresses attached to 1,200-plus accounts. He explained: using a password manager makes it easy to create and forget new accounts by the hundred.

In a 2017 presentation, Columbia professor Steve Bellovin provides a rethink of all this, much of it drawn from his 2016 book Thinking Security. Most of what we think we know about good password security, he argues, is based on outmoded threat models and assumptions – outmoded because the computing world has changed, but also because attackers have adapted to those security practices. Even the focus on creating *strong* passwords is outdated: password strength is entirely irrelevant to modern phishing attacks, to compromised servers and client hosts, and to attackers who can intercept communications or control a machine at either end (for example, via a keylogger). A lot depends on whether you’re a high-risk individual likely to be targeted personally by a well-resourced attacker, a political target, or, like most of us, a random target for ordinary criminals.

The most important thing, Bellovin says, is not to reuse passwords, so that if a site is cracked the damage is contained to that one site. Securing the email address used to reset passwords is also crucial. To that end, it can be helpful to use a separate and unpublished email address for the most sensitive sites and services – anything to do with money or health, for example.

So NIST’s new rules make some real sense, and if organizations can be persuaded to adopt them consistently all our lives might be a little bit less fraught. NIST is accepting comments through October 7.

Illustrations: XKCD on password strength.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

This perfect day

To anyone remembering the excitement over DNA testing just a few years ago, this week’s news about 23andMe comes as a surprise. At CNN, Allison Morrow reports that all seven board members have resigned to protest CEO Anne Wojcicki’s plan to take the company private by buying up all the shares she doesn’t already own at 40 cents each (closing price yesterday was $0.3301). The board wanted her to find a buyer offering a better price.

In January, Rolfe Winkler reported at the Wall Street Journal ($) that 23andMe is likely to run out of cash by next year. Its market cap has dropped from $6 billion to under $200 million. He and Morrow catalogue the company’s problems: it’s never made a profit nor had a sustainable business model.

The reasons are fairly simple: few repeat customers. With DNA testing, as Winkler writes, “Customers only need to take the test once, and few test-takers get life-altering health results.” 23andMe’s mooted revolution in health care turned out instead to be a fad. Now, the company is pivoting to sell subscriptions to weight loss drugs.

This strikes me as an extraordinarily dangerous moment: the struggling company’s sole unique asset is a pile of more than 10 million DNA samples whose owners have agreed they can be used for research. Many were alarmed when, in December 2023, hackers broke into 1.7 million accounts and gained access to 6.9 million customer profiles. The company said the hacked data did not include DNA records but did include family trees and other links. We don’t think of 23andMe as a social network. But the same affordances that enabled Cambridge Analytica to leverage a relatively small number of user profiles into a mass of data derived from a much larger number of their Friends worked on 23andMe. Given the way genetics works, this risk should have been obvious.

In 2004, the year of Facebook’s birth, the Australian privacy campaigner Roger Clarke warned in Very Black “Little Black Books” that social networks had no business model other than to abuse their users’ data. 23andMe’s terms and conditions promise to protect user privacy. But in a sale what happens to the data?

The same might be asked about the data that would accrue from Oracle CEO Larry Ellison‘s surveillance-embracing proposals this week. Inevitably, commentators invoked George Orwell’s 1984. At Business Insider, Kenneth Niemeyer was first to report: “[Ellison] said AI will usher in a new era of surveillance that he gleefully said will ensure ‘citizens will be on their best behavior.’”

The all-AI-surveillance all-the-time idea could only be embraced “gleefully” by someone who doesn’t believe it will affect him.

Niemeyer:

“Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

“We’re going to have supervision,” Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”

Ellison is twenty-six years behind science fiction author David Brin, who proposed radical transparency in his 1998 non-fiction outing, The Transparent Society. But Brin saw reciprocity as an essential feature, believing it would protect privacy by making surveillance visible. Ellison is claiming that *inscrutable* surveillance will guarantee good behavior.

At 404 Media, Jason Koebler debunks Ellison point by point. Research and other evidence shows that securing schools is unlikely to make them safer; body cameras don’t appear to improve police behavior; and all the technologies Ellison talks about have problems with accuracy and false positives. Indeed, the mayor of Chicago wants to end the city’s contract with ShotSpotter (now SoundThinking), saying it’s expensive and doesn’t cut crime; some research says it slows police response to 911 calls. Also worth noting: Simon Spichak at Brain Facts finds that humans make worse decisions when working with AI tools. So…not a good idea for police.

More disturbing is Koebler’s main point: most of the technology Ellison calls “future” is already here and failing to lower crime rates or solve its causes – while being very expensive. Ellison is already out of date.

The book Ellison’s fantasy evokes for me is the less-known This Perfect Day, by Ira Levin, written in 1970. The novel’s world is run by a massive computer (“Unicomp”) that decides all aspects of individuals’ lives: their job, spouse, how many children they can have. Enforcing all this are human counselors and permanent electronic bracelets individuals touch to ubiquitous scanners for permission.

Homogeneity rules: everyone is mixed race, there are only four boys’ names and four girls’ names, and they eat “totalcakes”, drink cokes, and wear identical clothing. For the rest, regularly administered drugs keep everyone healthy and docile. “Fight” is an abominable curse word. The controlled world over which Unicomp presides is therefore almost entirely benign: there is no war, no crime, and little disease. It rains only at night.

Naturally, the novel’s hero rebels, joins a group of outcasts (“the Incurables”), and finds his way to the secret underground luxury bunker where a few “Programmers” help Unicomp’s inventor, Wei Li Chun, run the world to his specification. So to me, Ellison’s plan is all about installing himself as world ruler. Which, I mean, who could object except other billionaires?

Illustrations: The CCTV camera on George Orwell’s Portobello Road house.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The brittle state

We’re now almost a year on from Rishi Sunak’s AI Summit, which failed to accomplish any of its most likely goals: cement his position as the UK’s prime minister; establish the UK as a world leader in AI fearmongering; or get him the new life in Silicon Valley some commentators seemed to think he wanted.

Arguably, however, it has reinforced the belief that computer systems are “intelligent” – that is, that they understand what they’re calculating. The chatbots based on large language models make that worse, because, as James Boyle cleverly wrote, for the first time in human history, “sentences do not imply sentience”. Mix in paranoia over the state of the world and you get some truly terrifying systems being put into situations where they can catastrophically damage people’s lives. We should know better by now.

The Open Rights Group (I’m still on its advisory council) is campaigning against the Home Office’s planned eVisa scheme. In the previouslies: between 1948 and 1971, people from Caribbean countries, many of whom had fought for Britain in World War II, were encouraged to help the UK rebuild the economy post-war. They are known as the “Windrush generation” after the first ship that brought them. As Commonwealth citizens, they didn’t need visas or documentation; they and their children had the automatic right to live and work here.

Until 1973, when the law changed; later arrivals needed visas. The snag was that earlier arrivals had no idea they had any reason to worry…until the day they discovered, when challenged, that they had no way to prove they were living here legally. That day came in 2017, when then-prime minister Theresa May (who this week joined the House of Lords) introduced the hostile environment. Intended to push illegal immigrants to go home, this law moves the “border” deep into British life by requiring landlords, banks, and others to conduct status checks. The result was that some of the Windrush group – again, legal residents – were refused medical care, denied housing, or deported.

When Brexit became real, millions of Europeans resident in the UK were shoved into the same position: arrived legally, needing no documentation, but in future required to prove their status. This time, the UK issued them documents confirming their status as permanently settled.

Until December 31, 2024, when all those documents with no expiration date will abruptly expire because the Home Office has a new system that is entirely online. As ORG and the3million explain it, come January 1, 2025, about 4 million people will need online accounts to access the new system, which generates a code to give the bank or landlord temporary access to their status. The new system will apparently hit a variety of databases in real time to perform live checks.

Now, I get that the UK government doesn’t want anyone to be in the country for one second longer than they’re entitled to. But we don’t even have to say, “What could possibly go wrong?” because we already *know* what *has* gone wrong for the Windrush generation. Anyone who has to prove their status off the cuff in time-sensitive situations really needs proof they can show when the system fails.

A proposal like this can only come from an irrational belief in the perfection – or at least, perfectibility – of computer systems. It assumes that Internet connections won’t be interrupted, that databases will return accurate information, and that everyone involved will have the necessary devices and digital literacy to operate it. Even without ORG’s and the3million’s analysis, these are bonkers things to believe – and they are made worse by a helpline that is only available during the UK work day.

There is a lot of this kind of credulity about, most of it connected with “AI”. AP News reports that US police departments are beginning to use chatbots to generate crime reports based on the audio from their body cams. And, says Ars Technica, the US state of Nevada will let AI decide unemployment benefit claims, potentially producing denials that can’t be undone by a court. BrainFacts reports that decision makers using “AI” systems are prone to automation bias – that is, they trust the machine to be right. Of course, that’s just good job security: you won’t be fired for following the machine, but you might for overriding it.

The underlying risk with all these systems, as a security expert might say, is complexity: the more complex a system is, the more susceptible it is to inexplicable failures. There is very little to go wrong with a piece of paper that plainly states your status, for values of “paper” including paper, QR codes downloaded to phones, or PDFs saved to a desktop/laptop. Much can go wrong with the system underlying that “paper”, but, crucially, when a static confirmation is saved offline, managing that underlying complexity can take place when the need is not urgent.

It ought to go without saying that computer systems with a profound impact on people’s lives should be backed up by redundant systems that can be used when they fail. Yet the world the powers that be apparently want to build is one that underlines their power to cause enormous stress for everyone else. Systems like eVisas are as brittle as just-in-time supply chains. And we saw what happened to those during the emergency phase of the covid pandemic.

Illustrations: Empty supermarket shelves in March 2020 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Beware the duck

Once upon a time, “convergence” was a buzzword. That was back in the days when audio was on stereo systems, television was on a TV, and “communications” happened on phones that weren’t computers. The word has disappeared back into its former usage pattern, but it could easily be revived to describe what’s happening to content as humans dive into using generative tools.

Said another way: roughly this time last year, the annual technology/law/pop culture conference Gikii was awash in (generative) AI. That bubble is deflating, but in the experiments that nonetheless continue, a new topic more worthy of attention is emerging: artificial content. It’s striking because what happens at this gathering, which mines all types of popular culture for cues for serious ideas, is often a good guide to what’s coming next in futurelaw.

That no one dared guess which of Zachary Cooper‘s pair of near-identical audio clips was AI-generated and which human-performed was only a starting point. One had more static? Cooper’s main point: “If you can’t tell which clip is real, then you can’t decide which one gets copyright.” Right, because only human creations are eligible (although fake bands can still scam Spotify).

Cooper’s brief, wild tour of the “generative music underground” included using AI tools to create songs whose content is at odds with their genre, whole generated albums built by a human producer making thousands of tiny choices, and the new genre “gencore”, which exploits the qualities of generated sound (Cher and Autotune on steroids). Voice cloning, instrument cloning, audio production plugins, “throw in a bass and some drums”….

Ultimately, Cooper said, “The use of generative AI reveals nothing about the creative relationship to work; it destabilizes the international market by having different authorship thresholds; and there’s no means of auditing any of it.” Instead of uselessly trying to enforce different rights predicated on the use or non-use of a specific set of technologies, he said, we should tackle directly the challenges new modes of production pose to copyright. Precursor: the battles over sampling.

Soon afterwards, Michael Veale was showing us Civitai, an Idaho-based site offering open source generative AI tools, including fine-tuned models. “Civitai exists to democratize AI media creation,” the site explains. “Everything has a valid legal purpose,” Veale said, but the way capabilities can be retrained and chained together to create composites makes it hard to tell which tools, if any, should be taken down, even for creators (see also the puzzlement as Redditors try to work this out). Even environmental regulation can’t help, as one attendee suggested: unlike large language models, these smaller, fine-tuned models (as Jon Crowcroft and I surmised last year would be the future) are efficient; they can run on a phone.

Even without adding artificial content, there is always an inherent conflict when digital meets an analog spectrum. This is why, Andy Phippen said, the threshold of 18 for buying alcohol and cigarettes turns into a real threshold of 25 at retail checkouts. Both software and humans fail at determining over- or under-18, and retailers fear liability. Online age verification as promoted in the Online Safety Act will not work.

If these blurred lines strain the limits of current legal approaches, others expose gaps in the law. Andrea Matwyshyn, for example, has been studying parallels I’ve also noticed between early 20th century company towns and today’s tech behemoths’ anti-union, surveillance-happy working practices. As a result, she believes that regulatory authorities need to start considering closely the impact of data aggregation when companies merge and looking for company town-like dynamics.

Andelka Phillips parodied the overreach of app contracts by imagining the EULA attached to a “ThoughtReader” app. A sample clause: “ThoughtReader may turn on its service at any time. By accepting this agreement, you are deemed to accept all monitoring of your thoughts.” Well, OK, then. (I also had a go at this here, 19 years ago.)

Emily Roach toured the history of fan fiction and the law to end up at Archive of Our Own, a “fan-created, fan-run, nonprofit, noncommercial archive for transformative fanworks, like fanfiction, fanart, fan videos, and podfic”, the idea being to ensure that the work fans pour their hearts into has a permanent home where it can’t be arbitrarily deleted by corporate owners. The rules are strict: not so much as a “buy me a coffee” tip link that could lead to a court-acceptable claim of commercial use.

History, the science fiction writer Charles Stross has said, is the science fiction writer’s secret weapon. Also at Gikii: Miranda Mowbray unearthed the 18th century “Digesting Duck” automaton built by Jacques de Vaucanson. It was a marvel that appeared to ingest grain and defecate waste, and in its day it inspired much speculation about the boundaries between real and mechanical life. Like the amazing ancient Greek automata before it, it was, of course, a purely mechanical fake – it stored the grain in one small compartment and released pellets from a different one – but today’s humans, confused into thinking that sentences mean sentience, could relate.

Illustrations: One onlooker’s rendering of his (incorrect) idea of the interior of Jacques de Vaucanson’s Digesting Duck (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Service Model

Service Model
By Adrian Tchaikovsky
Tor Publishing Group
ISBN: 978-1-250-29028-1

Charles is a highly sophisticated robot having a bad day. As a robot, “he” would not express it that way. Instead, he would say that he progresses through each item on his task list and notes its ongoing pointlessness. He checks his master’s travel schedule and finds no plans. Nonetheless, he completes his next tasks, laying out today’s travel clothes, dusting off yesterday’s unused set, and placing them back in the wardrobe, as he has every day for the 2,230 days since his master last left the house.

He goes on to ask House, the manor’s major-domo system, to check with the lady of the house’s maidservant for travel schedules, planned clothing, and other aspects of life. There has been no lady of the house, and therefore no maidservant, for 17 years and 12 days. An old subroutine suggests ways to improve efficiency by eliminating some of the many empty steps, but Charles has no instructions that would let him delete any of them, even when House reports errors. The morning routine continues. It’s tempting to recall Ray Bradbury’s short story “There Will Come Soft Rains”.

Until Charles and House jointly discover there are red stains on the car upholstery Charles has just cleaned…and on Charles’s hands, and on the master’s laid-out clothes, and on his bedclothes and on his throat where Charles has recently been shaving him with a straight razor…

The master has been murdered.

So begins Adrian Tchaikovsky’s post-apocalyptic science fiction novel Service Model.

Some time later – after a police investigation – Charles sets out to walk the long miles to report to Diagnostics, and perhaps thereafter to find a new master in need of a gentleman’s gentlebot. Charles would not say he “hoped”; he would say he awaits instructions, and that the resulting uncertainty is inefficiently consuming his resources.

His journey takes him through a landscape filled with other robots that have lost their purpose. Manor after manor along the road is dark or damaged; at one, a servant robot waits pointlessly to welcome guests who never come. The world, it seems, is stuck in recursive loops that cannot be overridden because the human staff required to do so have been…retired. At the Diagnostics center Charles finds more of the same: a queue of hundreds of robots waiting to be seen, stalled by the lack of a Grade Seven human to resolve the blockage.

Enter “the Wonk”, a faulty robot with no electronic link and a need to recharge at night and consume food, who sees Charles – now Uncharles, since he no longer serves the master who named him – as infected with the “protagonist virus” and wants him to join in searching for the mysterious Library, which is preserving human knowledge. Uncharles is more interested in finding humans he can serve.

Their further explorations of a post-apocalyptic world, thinly populated and filled with the rubble of cities, along with Uncharles’s efforts to understand his own nature, form most of the rest of the book. Is the Wonk’s protagonist virus even a real thing? Uncharles doubts that it is. And yet, he finds himself making excuses to avoid taking on yet another pointless job.

The best part of all this is Tchaikovsky’s rendering of Charles/Uncharles’s thoughts about himself and his attempts to make sense of the increasingly absurd world around him. A long, long way into the book it’s still not obvious how it will end.