This perfect day

To anyone remembering the excitement over DNA testing just a few years ago, this week’s news about 23andMe comes as a surprise. At CNN, Allison Morrow reports that all seven board members have resigned to protest CEO Anne Wojcicki’s plan to take the company private by buying up all the shares she doesn’t already own at 40 cents each (yesterday’s closing price was $0.3301). The board wanted her to find a buyer offering a better price.

In January, Rolfe Winkler reported at the Wall Street Journal ($) that 23andMe is likely to run out of cash by next year. Its market cap has dropped from $6 billion to under $200 million. He and Morrow catalogue the company’s problems: it’s never made a profit nor had a sustainable business model.

The reasons are fairly simple: few repeat customers. With DNA testing, as Winkler writes, “Customers only need to take the test once, and few test-takers get life-altering health results.” The revolution in health care 23andMe mooted turned out instead to be a fad. Now, the company is pivoting to sell subscriptions to weight loss drugs.

This strikes me as an extraordinarily dangerous moment: the struggling company’s sole unique asset is a pile of more than 10 million DNA samples whose owners have agreed they can be used for research. Many were alarmed when, in December 2023, hackers broke into 1.7 million accounts and gained access to 6.9 million customer profiles. The company said the hacked data did not include DNA records but did include family trees and other links. We don't think of 23andMe as a social network. But the same affordances that enabled Cambridge Analytica to leverage a relatively small number of user profiles to create a mass of data derived from a much larger number of their Friends worked on 23andMe, too. Given the way genetics works, this risk should have been obvious.

In 2004, the year of Facebook’s birth, the Australian privacy campaigner Roger Clarke warned in Very Black “Little Black Books” that social networks had no business model other than to abuse their users’ data. 23andMe’s terms and conditions promise to protect user privacy. But in a sale what happens to the data?

The same might be asked about the data that would accrue from Oracle CEO Larry Ellison‘s surveillance-embracing proposals this week. Inevitably, commentators invoked George Orwell’s 1984. At Business Insider, Kenneth Niemeyer was first to report: “[Ellison] said AI will usher in a new era of surveillance that he gleefully said will ensure ‘citizens will be on their best behavior.'”

The all-AI-surveillance all-the-time idea could only be embraced “gleefully” by someone who doesn’t believe it will affect him.

Niemeyer:

“Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

“We’re going to have supervision,” Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”

Ellison is twenty-six years behind science fiction author David Brin, who proposed radical transparency in his 1998 non-fiction outing, The Transparent Society. But Brin saw reciprocity as an essential feature, believing it would protect privacy by making surveillance visible. Ellison is claiming that *inscrutable* surveillance will guarantee good behavior.

At 404 Media, Jason Koebler debunks Ellison point by point. Research and other evidence show that securing schools is unlikely to make them safer; body cameras don’t appear to improve police behavior; and all the technologies Ellison talks about have problems with accuracy and false positives. Indeed, the mayor of Chicago wants to end the city’s contract with ShotSpotter (now SoundThinking), saying it’s expensive and doesn’t cut crime; some research says it slows police 911 response. Also worth noting: at Brain Facts, Simon Spichak finds that humans make worse decisions when using AI tools. So…not a good idea for police.

More disturbing is Koebler’s main point: most of the technology Ellison calls “future” is already here and failing to lower crime rates or address crime’s causes – while being very expensive. Ellison is already out of date.

The book Ellison’s fantasy evokes for me is the less-known This Perfect Day, by Ira Levin, written in 1970. The novel’s world is run by a massive computer (“Unicomp”) that decides all aspects of individuals’ lives: their job, spouse, how many children they can have. Enforcing all this are human counselors and permanent electronic bracelets individuals touch to ubiquitous scanners for permission.

Homogeneity rules: everyone is mixed race, there are only four boys’ names and four girls’ names, they eat “totalcakes”, drink cokes, and wear identical clothing. For the rest, regularly administered drugs keep everyone healthy and docile. “Fight” is an abominable curse word. The controlled world over which Unicomp presides is therefore almost entirely benign: there is no war or crime, and little disease. It rains only at night.

Naturally, the novel’s hero rebels, joins a group of outcasts (“the Incurables”), and finds his way to the secret underground luxury bunker where a few “Programmers” help Unicomp’s inventor, Wei Li Chun, run the world to his specification. So to me, Ellison’s plan is all about installing himself as world ruler. Which, I mean, who could object except other billionaires?

Illustrations: The CCTV camera on George Orwell’s Portobello Road house.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The brittle state

We’re now almost a year on from Rishi Sunak’s AI Summit, which failed to accomplish any of its most likely goals: cement his position as the UK’s prime minister; establish the UK as a world leader in AI fearmongering; or get him the new life in Silicon Valley some commentators seemed to think he wanted.

Arguably, however, it has raised the level of belief that computer systems are “intelligent” – that is, that they understand what they’re calculating. The chatbots based on large language models make that worse because, as James Boyle cleverly wrote, for the first time in human history, “sentences do not imply sentience”. Mix in paranoia over the state of the world and you get some truly terrifying systems being put into situations where they can catastrophically damage people’s lives. We should know better by now.

The Open Rights Group (I’m still on its advisory council) is campaigning against the Home Office’s planned eVisa scheme. In the previouslies: between 1948 and 1971, people from Caribbean countries, many of whom had fought for Britain in World War II, were encouraged to come help the UK rebuild its post-war economy. They are known as the “Windrush generation” after the first ship that brought them. As Commonwealth citizens, they didn’t need visas or documentation; they and their children had the automatic right to live and work here.

Until 1973, when the law changed; later arrivals needed visas. The snag was that earlier arrivals had no idea they had any reason to worry…until the day they discovered, when challenged, that they had no way to prove they were living here legally. That day came in 2017, when then-prime minister Theresa May (who this week joined the House of Lords) introduced the hostile environment. Intended to push illegal immigrants to go home, this policy moved the “border” deep into British life by requiring landlords, banks, and others to conduct status checks. The result was that some of the Windrush group – again, legal residents – were refused medical care, denied housing, or deported.

When Brexit became real, millions of Europeans resident in the UK were shoved into the same position: arrived legally, needing no documentation, but in future required to prove their status. This time, the UK issued them documents confirming their status as permanently settled.

Until December 31, 2024, when all those documents with no expiration date will abruptly expire because the Home Office has a new system that is entirely online. As ORG and the3million explain it, come January 1, 2025, about 4 million people will need online accounts to access the new system, which generates a code to give the bank or landlord temporary access to their status. The new system will apparently hit a variety of databases in real time to perform live checks.

Now, I get that the UK government doesn’t want anyone to be in the country for one second longer than they’re entitled to. But we don’t even have to say, “What could possibly go wrong?” because we already *know* what *has* gone wrong for the Windrush generation. Anyone who has to prove their status off the cuff in time-sensitive situations really needs proof they can show when the system fails.

A proposal like this can only come from an irrational belief in the perfection – or at least, perfectibility – of computer systems. It assumes that Internet connections won’t be interrupted, that databases will return accurate information, and that everyone involved will have the necessary devices and digital literacy to operate it. Even without ORG’s and the3million’s analysis, these are bonkers things to believe – and they are made worse by a helpline that is only available during the UK work day.

There is a lot of this kind of credulity about, most of it connected with “AI”. AP News reports that US police departments are beginning to use chatbots to generate crime reports based on the audio from their body cams. And, says Ars Technica, the US state of Nevada will let AI decide unemployment benefit claims, potentially producing denials that can’t be undone by a court. BrainFacts reports that decision makers using “AI” systems are prone to automation bias – that is, they trust the machine to be right. Of course, that’s just good job security: you won’t be fired for following the machine, but you might for overriding it.

The underlying risk with all these systems, as a security expert might say, is complexity: the more complex a system is, the more susceptible it is to inexplicable failures. There is very little to go wrong with a piece of paper that plainly states your status, for values of “paper” including paper, QR codes downloaded to phones, or PDFs saved to a desktop/laptop. Much can go wrong with the system underlying that “paper”, but, crucially, when a static confirmation is saved offline, managing that underlying complexity can take place when the need is not urgent.

It ought to go without saying that computer systems with a profound impact on people’s lives should be backed up by redundant systems that can be used when they fail. Yet the world the powers that be apparently want to build is one that underlines their power to cause enormous stress for everyone else. Systems like eVisas are as brittle as just-in-time supply chains. And we saw what happens to those during the emergency phase of the covid pandemic.

Illustrations: Empty supermarket shelves in March 2020 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Beware the duck

Once upon a time, “convergence” was a buzzword. That was back in the days when audio was on stereo systems, television was on a TV, and “communications” happened on phones that weren’t computers. The word has disappeared back into its former usage pattern, but it could easily be revived to describe what’s happening to content as humans dive into using generative tools.

Said another way: roughly this time last year, the annual technology/law/pop culture conference Gikii was awash in (generative) AI. That bubble is deflating, but in the experiments that nonetheless continue, a new topic more worthy of attention is emerging: artificial content. It’s striking because what happens at this gathering, which mines all types of popular culture for cues for serious ideas, is often a good guide to what’s coming next in futurelaw.

That no one dared guess which of Zachary Cooper’s pair of near-identical audio clips was AI-generated, and which human-performed, was only a starting point. One had more static? Cooper’s main point: “If you can’t tell which clip is real, then you can’t decide which one gets copyright.” Right, because only human creations are eligible (although fake bands can still scam Spotify).

Cooper’s brief, wild tour of the “generative music underground” included using AI tools to create songs whose content is at odds with their genre, whole generated albums built by a human producer making thousands of tiny choices, and the new genre “gencore”, which exploits the qualities of generated sound (Cher and Autotune on steroids). Voice cloning, instrument cloning, audio production plugins, “throw in a bass and some drums”….

Ultimately, Cooper said, “The use of generative AI reveals nothing about the creative relationship to work; it destabilizes the international market by having different authorship thresholds; and there’s no means of auditing any of it.” Instead of uselessly trying to enforce different rights predicated on the use or non-use of a specific set of technologies, he said, we should tackle directly the challenges new modes of production pose to copyright. Precursor: the battles over sampling.

Soon afterwards, Michael Veale was showing us Civitai, an Idaho-based site offering open source generative AI tools, including fine-tuned models. “Civitai exists to democratize AI media creation,” the site explains. “Everything has a valid legal purpose,” Veale said, but the way capabilities can be retrained and chained together to create composites makes it hard to tell which tools, if any, should be taken down, even for creators (see also the puzzlement as Redditors try to work this out). Even environmental regulation can’t help, as one attendee suggested: unlike large language models, these smaller, fine-tuned models (as Jon Crowcroft and I surmised last year would be the future) are efficient; they can run on a phone.

Even without adding artificial content, there is always an inherent conflict when digital meets an analog spectrum. This is why, Andy Phippen said, the threshold of 18 for buying alcohol and cigarettes turns into a real threshold of 25 at retail checkouts. Both software and humans fail at determining over- or under-18, and retailers fear liability. Online age verification as promoted in the Online Safety Act will not work.

If these blurred lines strain the limits of current legal approaches, others expose gaps in the law. Andrea Matwyshyn, for example, has been studying parallels I’ve also noticed between early 20th century company towns and today’s tech behemoths’ anti-union, surveillance-happy working practices. As a result, she believes that regulatory authorities need to start looking closely at the impact of data aggregation when companies merge and watching for company town-like dynamics.

Andelka Phillips parodied the overreach of app contracts by imagining the EULA attached to a “ThoughtReader” app. A sample clause: “ThoughtReader may turn on its service at any time. By accepting this agreement, you are deemed to accept all monitoring of your thoughts.” Well, OK, then. (I also had a go at this here, 19 years ago.)

Emily Roach toured the history of fan fiction and the law to end up at Archive of Our Own, a “fan-created, fan-run, nonprofit, noncommercial archive for transformative fanworks, like fanfiction, fanart, fan videos, and podfic”, the idea being to ensure that the work fans pour their hearts into has a permanent home where it can’t be arbitrarily deleted by corporate owners. The rules are strict: not so much as a “buy me a coffee” tip link that could lead to a court-acceptable claim of commercial use.

History, the science fiction writer Charles Stross has said, is the science fiction writer’s secret weapon. Also at Gikii: Miranda Mowbray unearthed the 18th century “Digesting Duck” automaton built by Jacques de Vaucanson. It was a marvel that appeared to ingest grain and defecate waste, and in its day it inspired much speculation about the boundaries between real and mechanical life. Like the amazing ancient Greek automata before it, it was, of course, a purely mechanical fake – it stored the grain in a small compartment and released pellets from a different compartment – but today’s humans, confused into thinking that sentences mean sentience, could relate.

Illustrations: One onlooker’s rendering of his (incorrect) idea of the interior of Jacques de Vaucanson’s Digesting Duck (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Service Model

Service Model
By Adrian Tchaikovsky
Tor Publishing Group
ISBN: 978-1-250-29028-1

Charles is a highly sophisticated robot having a bad day. As a robot, “he” would not express it that way. Instead, he would say that he progresses through each item on his task list and notes its ongoing pointlessness. He checks his master’s travel schedule and finds no plans. Nonetheless, he completes his next tasks, laying out today’s travel clothes, dusting off yesterday’s unused set, and placing them back in the wardrobe, as he has every day for the 2,230 days since his master last left the house.

He goes on to ask House, the manor’s major-domo system, to check with the lady of the house’s maidservant for travel schedules, planned clothing, and other aspects of life. There has been no lady of the house, and therefore no maidservant, for 17 years and 12 days. An old subroutine suggests ways to improve efficiency by eliminating some of the many empty steps, but Charles has no instructions that would let him delete any of them, even when House reports errors. The morning routine continues. It’s tempting to recall Ray Bradbury’s short story “There Will Come Soft Rains”.

Until Charles and House jointly discover there are red stains on the car upholstery Charles has just cleaned…and on Charles’s hands, and on the master’s laid-out clothes, and on his bedclothes and on his throat where Charles has recently been shaving him with a straight razor…

The master has been murdered.

So begins Adrian Tchaikovsky’s post-apocalyptic science fiction novel Service Model.

Some time later – after a police investigation – Charles sets out to walk the long miles to report to Diagnostics, and perhaps thereafter to find a new master in need of a gentleman’s gentlebot. Charles would not say he “hoped”; he would say he awaits instructions, and that the resulting uncertainty is inefficiently consuming his resources.

His journey takes him through a landscape filled with other robots that have lost their purpose. Manor after manor along the road is dark or damaged; at one, a servant robot waits pointlessly to welcome guests who never come. The world, it seems, is stuck in recursive loops that cannot be overridden because the human staff required to do so have been…retired. At the Diagnostics center Charles finds more of the same: a queue of hundreds of robots waiting to be seen, stalled by the lack of a Grade Seven human to resolve the blockage.

Enter “the Wonk”, a faulty robot with no electronic link and a need to recharge at night and consume food, who sees Charles – now Uncharles, since he no longer serves the master who named him – as infected with the “protagonist virus” and wants him to join in searching for the mysterious Library, which is preserving human knowledge. Uncharles is more interested in finding humans he can serve.

Their further explorations of a post-apocalyptic world, thinly populated and filled with the rubble of cities, along with Uncharles’s efforts to understand his nature, form most of the rest of the book. Is the Wonk’s protagonist virus even a real thing? Uncharles doubts that it is. And yet, he finds himself making excuses to avoid taking on yet another pointless job.

The best part of all this is Tchaikovsky’s rendering of Charles/Uncharles’s thoughts about himself and his attempts to make sense of the increasingly absurd world around him. A long, long way into the book it’s still not obvious how it will end.

Sectioned

Social media seems to be having a late-1990s moment, raising flashbacks to the origins of platform liability and the passage of Section 230 of the Communications Decency Act (1996). It’s worth making clear at the outset: most of the people talking about S230 seem to have little understanding of what it is and does. It allows sites to moderate content without becoming liable for it. It is what enables all those trust and safety teams to implement sites’ restrictions on acceptable use. When someone wants to take an axe to it because there is vile content circulating, they have not understood this.

So, in one case this week a US appeals court is allowing a lawsuit to proceed that seeks to hold TikTok liable for users’ postings of the “blackout challenge”, the idea being to get an adrenaline rush by reviving from near-asphyxiation. Bloomberg reports that at least 20 children have died trying to accomplish this, at least 15 of them age 12 or younger (TikTok, like all social media, is supposed to be off-limits to under-13s). The people suing are the parents of one of those 20, a ten-year-old girl who died attempting the challenge.

The other case is that of Pavel Durov, CEO of the messaging service Telegram, who has been arrested in France as part of a criminal investigation. He has been formally charged with complicity in managing an online platform “in order to enable an illegal transaction in organized group” and with refusing to cooperate with law enforcement authorities; he has been ordered not to leave France, with bail set at €5 million (is that enough to prevent the flight of a billionaire with four passports?).

While there have been many platform liability cases, there are relatively few examples of platform owners and operators being charged. The first was in 1997, back when “online” still had a hyphen; the German general manager of CompuServe, Felix Somm, was arrested in Bavaria on charges of “trafficking in pornography”. That is, German users of Columbus, Ohio-based CompuServe could access pornography and illegal material on the Internet through the service’s gateway. In 1998, Somm was convicted and given a two-year suspended sentence. In 1999 his conviction was overturned on appeal, partly, the judge wrote, because there was no technology at the time that would have enabled CompuServe to block the material.

The only other example I’m aware of came just this week, when an Arizona judge sentenced Michael Lacey, co-founder of the classified ads site Backpage.com, to five years in prison and fined him $3 million for money laundering. He still faces further charges for prostitution facilitation and money laundering; allegedly he profited from a scheme to promote prostitution on his site. Two other previously convicted Backpage executives were also sentenced this week to ten years in prison.

In Durov’s case, the key point appears to be his refusal to follow industry practice with respect to reporting child sexual abuse material or to cooperate with properly executed legal requests for information. You don’t have to be a criminal to want the social medium of your choice to protect your privacy from unwarranted government snooping – but equally, you don’t have to be innocent to be concerned if billionaire CEOs of large technology companies consider themselves above the law. (See also Elon Musk, whose X platform may be tossed out of Brazil right now.)

Some reports on the Durov case have focused on encryption, but the bigger issue appears to be failure to register to use encryption, as Signal has. More important, although Telegram is often talked about as encrypted, it’s really more like other social media, where groups are publicly visible, and only direct one-on-one messages are encrypted. But even then, they’re only encrypted if users opt in. Given that users notoriously tend to stick with default settings, that means that the percentage of users who turn that encryption on is probably tiny. So it’s not clear yet whether France is seeking to hold Durov responsible for the user-generated content on his platform (which S230 would protect in the US), or accusing him of being part of criminal activity relating to his platform (which it wouldn’t).

Returning to the TikTok case: in allowing the lawsuit to go ahead, the appeals court judgment says that S230 has “evolved away from its original intent”, and argues that because TikTok’s algorithm served up the challenge on the child’s “For You” page, the service can be held responsible. At TechDirt, Mike Masnick blasts this reasoning, saying that it overturns numerous other court rulings upholding S230, and uses the same reasoning as the 1995 decision in Stratton Oakmont v. Prodigy. That was the case that led directly to the passage of S230, introduced by then-Congressman Christopher Cox (R-CA) and Senator Ron Wyden (D-OR), who are still alive to answer questions about their intent. Rather than evolving away, we’ve evolved back full circle.

The rise of monopolistic Big Tech has tended to obscure the more important point about S230. As Cory Doctorow writes for EFF, killing S230 would kill the small federated communities (like Mastodon and Discord servers) and web boards that offer alternatives to increasing Big Tech’s power. While S230 doesn’t apply outside the US (some Americans have difficulty understanding that other countries have different laws), its ethos is pervasive and the companies it’s enabled are everywhere. In the end, it’s like democracy: the alternatives are worse.

Illustrations: Drunken parrot in Putney (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

A three-hour tour

It should be easy for the UK’s Competition and Markets Authority to shut down the proposed merger of Vodafone and Three, two of the UK’s four major mobile network providers. The remaining competition post-merger would be EE (owned by BT) and Virgin Media O2 (owned by the Spanish company Telefónica and the US-listed company Liberty Global).

The trade union Unite is correctly calling the likely consequences: higher prices, fewer choices, job losses, and poorer customer service. In response, Vodafone and Three are dangling a shiny object of temptation: investment in building out a 5G network.

Well, hogwash. I would say “Don’t do this” even if I weren’t a Three customer (who left Vodafone years ago). Let them agree to collaborate on building a shared network and compete on quality and services, but not merge. Take heed of the US broadband market, where prices are high, speeds are low, and frustrated consumers rarely have more than one option.

***

It’s a relief to see some sanity arriving around generative AI. As a glance at the archives will show, I’ve never been a fan; last year Jon Crowcroft and I predicted the eventual demise of large language models due to model collapse. Now, David Gray Widder and Mar Hicks warn in a paper that although the generative AI bubble is deflating, its damage will persist: “…carbon can’t be put back in the ground, workers continue to need to fend off AI’s disciplining effects, and the poisonous effect on our information commons will be hard to undo.”

This week offers worked examples. On disinformation, at The Verge Sarah Jeong describes how new smartphones’ ability to fake realistic images is changing our relationship with photographs. At The Register, Dan Robinson reports that data centers and AI are causing a substantial rise in water use in the US state of Virginia.

As evidence of the deflating bubble, Widder and Hicks cite the recent Goldman Sachs report arguing that generative AI is unlikely ever to pay back its investment.

And yet: to exploit generative AI, companies and governments are reversing or delaying programs to lower carbon emissions. Also alarmingly, Widder and Hicks wonder if generative AI was always meant to fail and its promoters’ only real goals were to scoop up profits and use the inevitability narrative to make generative AI a vector for embedding infrastructural dependencies (for example, on cloud computing).

That outcome doesn’t have to have been a plan – nor does positing it require a conspiracy theory, just as robber barons don’t actually need to conspire in order to serve each other’s interests. It could just as well be a circumstances-led pivot. But companies that have put money into generative AI will want to scrounge whatever return they can get. So the idea that we will be left with infrastructure that’s a poor fit for our actual needs is a disturbing – and entirely possible – outcome.

***

It’s fascinating – and an example of how you never know where new technologies will lead – to learn that people are using DNA testing to prove they qualify for citizenship in other countries such as Ireland, where a single grandparent will get you in. In some cases, such as the children of unmarried Irish women who were transported to England, this use of DNA testing rights historic wrongs. For others, it opens new opportunities such as the right to live in the EU. Unfortunately, it’s easy to imagine that in countries where citizenship by birthright is a talking point for the right wing this type of DNA testing could be mooted as a requirement. I’d like to think that rounding up babies for deportation is beyond even the most bigoted authoritarians, but…

***

The controversial British technology entrepreneur Mike Lynch has died a billionaire’s death; his superyacht sank in a tornado off the coast of Sicily. I interviewed him for Salon in 2000, when he was newly Britain’s first software billionaire. It was the first time I heard of the theorem developed by Thomas Bayes, an 18th century minister and mathematician, which is now everywhere, and for a long time afterwards I wasn’t certain I’d correctly understood his comments about perception and philosophy. This was exacerbated by early experience with his software in 1996, when it was still a consumer desktop search product fronted by an annoying cartoon dog – I thought it unusably slow compared to pre-Google search engines. By 2000, Autonomy had pivoted to enterprise software, which seemed a better fit.

In 2011, Sharon Bertsch McGrayne‘s book, The Theory That Would Not Die, explained things more clearly. That year, Lynch hit a business peak by selling Autonomy to Hewlett-Packard for $11 billion. A year later, he left HP, and set up Invoke Capital to invest in companies with fundamental technology ideas that scale.

Soon afterwards, HP wrote down $8.8 billion and accused Lynch of accounting fraud. The last 12 years of his life were spent in courtrooms: first a UK civil case, decided for HP in 2022, which Lynch was appealing; then a fight against extradition; and finally a criminal trial in the US, where former Autonomy CFO Sushovan Hussain had already been sent to jail for five years. Lynch’s fatal yacht trip was to celebrate his acquittal.

Illustrations: A Customs and Border Protection scientist reads a DNA profile to determine the origin of a commodity (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The fear factor

Be careful what you allow the authorities to do to people you despise, because one day those same tools will be turned against you.

In the last few weeks, the shocking stabbing of three young girls at a dance class in Southport became the spark that ignited riots across the UK by people who apparently believed social media theories that the 17-year-old boy responsible was Muslim, a migrant, or a terrorist. With the boy a week from his 18th birthday, the courts ruled police could release his name in order to make clear that he was not Muslim and had been born in Wales. It failed to stop the riots.

Police and the courts have acted quickly; almost 800 people have been arrested, 350 have been charged, and hundreds are in custody. In a moving development, on a night when more than 100 riots were predicted, tens of thousands of ordinary citizens thronged city streets and formed protective human chains around refugee centers in order to block the extremists. The riots have quieted down, but police are still busy arresting newly-identified suspects. And the inevitable question is being asked: what do we do next to keep the streets safe and calm?

London mayor Sadiq Khan quickly called for a review of the Online Safety Act, saying he doesn’t believe it’s fit for purpose. Cabinet minister Nick Thomas-Symonds (Labour-Torfaen) has suggested the month-old government could change the law.

Meanwhile, prime minister Keir Starmer favours a wider rollout of live facial recognition to track thugs and prevent them from traveling to places where they plan to cause social unrest, copying systems the police use to prevent football hooligans from even boarding trains to matches. This proposal is startling because, before standing for Parliament, Starmer was a human rights lawyer. One could reasonably expect him to know that facial recognition systems have a notorious history of inaccuracy due to biases baked into their algorithms via training data, and that in the UK there is no regulatory framework to provide oversight. Silkie Carlo, the director of Big Brother Watch, immediately called the proposal “alarming” and “ineffective”, warning that it turns people into “walking ID cards”.

As the former head of Liberty, Shami Chakrabarti, used to say when ID cards were last proposed, moves like these fundamentally change the relationship between the citizen and the state. Such a profound change deserves more thought than a reflex fear reaction in a crisis. As Ciaran Thapar argues at the Guardian, today’s violence has many causes, beginning with the decay of public services for youth and mental health, and it’s those causes that need to be addressed. Thapar invokes his memories of how his community overcame the “open, violent racism” of the 1980s Thatcher years in making his recommendations.

Much of the discussion of the riots has blamed social media for propagating hate speech and disinformation, along with calls for rethinking the Online Safety Act. This is also frustrating. First of all, the OSA, which was passed in 2023, isn’t even fully implemented yet. When last seen, Ofcom, the regulator designated to enforce it, was in the throes of recruiting people by the dozen, working out what sites will be in scope (about 150,000, they said), and developing guidelines. Until we see the shape of the regulation in practice, it’s too early to say the act needs expansion.

Second, hate speech and incitement to violence are already illegal under other UK laws. Just this week, a woman was jailed for 15 months for a comment to a Facebook group with 5,100 members that advocated violence against mosques and the people inside them. The OSA was not needed to prosecute her.

And third, while Elon Musk and Mark Zuckerberg definitely deserve to have anger thrown their way, focusing solely on the ills of social media makes no sense given the decades that right-wing newspapers have spent sowing division and hatred. Even before Musk, Twitter often acted as a democratization of the kind of angry, hate-filled coverage long seen in the Daily Mail (and others). These are the wedges that created the divisions that malicious actors can now exploit by disseminating disinformation, a process systematically explained by Renee DiResta in her new book, Invisible Rulers.

The FBI’s investigation of the January 6, 2021 insurrection at the US Capitol provides a good exemplar of how modern investigations can exploit new technologies. Law enforcement applied facial recognition to CCTV footage and massive databases, and studied social media feeds, location data and cellphone tracking, and other data. As Charlie Warzel and Stuart A. Thompson wrote at the New York Times in 2021, even though most of us agree with the goal of catching and punishing insurrectionists and rioters, the data “remains vulnerable to use and abuse” against protests of other types – such as this year’s pro-Palestinian encampments.

The same argument applies in the UK. Few want violence in the streets. But the unilateral imposition of live facial recognition, among other tracking technologies, can’t be allowed. There must be limits and safeguards. ID cards issued in wartime could be withdrawn when peace came; surveillance technologies, once put in place, tend to become permanent.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell once lived.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Gather ye lawsuits while ye may

Most of us howled with laughter this week when the news broke that Elon Musk is suing companies for refusing to advertise on his exTwitter platform. To be precise, Musk is suing the World Federation of Advertisers, Unilever, Mars, CVS, and Ørsted in a Texas court.

How could Musk, who styles himself a “free speech absolutist”, possibly force companies to advertise on his site? This is pure First Amendment stuff: both the right to free speech (or to remain silent) and freedom of assembly. It adds to the nuttiness of it all that last November Musk was telling advertisers to “go fuck yourselves” if they threatened him with a boycott. Now he’s mad because they responded in kind.

Does the richest man in the world even need advertisers to finance his toy?

At Techdirt, Mike Masnick catalogues the “so much stupid here”.

The WFA initiative that offends Musk is the Global Alliance for Responsible Media, which develops guidelines for content moderation – things like a standard definition for “hate speech” to help sites operate consistent and transparent policies and reassure advertisers that their logos don’t appear next to horrors like the livestreamed shooting in Christchurch, New Zealand. GARM’s site says: membership is voluntary, following its guidelines is voluntary, it does not provide a rating service, and it is apolitical.

Pre-Musk, Twitter was a member. After Musk took over, he pulled exTwitter out of it – but rejoined a month ago. Now, Musk claims that refusing to advertise on his site might be a criminal matter under RICO. So he’s suing himself? Blink.

Enter US Republicans, who are convinced that content moderation exists only to punish conservative speech. On July 10, the House Judiciary Committee, under the leadership of Jim Jordan (R-OH), released an interim report on its ongoing investigation of GARM.

The report says GARM appears to “have anti-democratic views of fundamental American freedoms” and likens its work to restraint of trade. Among specific examples, it says GARM recommended that its members stop advertising on exTwitter, threatened Spotify when podcaster Joe Rogan told his massive audience that young, healthy people don’t need to be vaccinated against covid, and considered blocking news sites such as Fox News, Breitbart, and The Daily Wire. In addition, the report says, GARM advised its members to use fact-checking services like NewsGuard and the Global Disinformation Index, “which disproportionately label right-of-center news sites as so-called misinformation”. Therefore, the report concludes, GARM’s work is “likely illegal under the antitrust laws”.

I don’t know what a court would have made of that argument – for one thing, GARM can’t force anyone to follow its guidelines. But now we’ll never know. Two days after Musk filed suit, the WFA announced it’s shuttering GARM immediately because it can’t afford to defend the lawsuit and keep operating even though it believes it’s complied with competition rules. Such is the role of bullies in our public life.

I suppose Musk can hope that advertisers decide it’s cheaper to buy space on his site than to fight the lawsuit?

But it’s not really a laughing matter. GARM is just one of a number of initiatives that have come under attack as we head into the final three months of campaigning before the US presidential election. In June, Renee DiResta, author of the new book Invisible Rulers, announced that her contract as the research manager of the Stanford Internet Observatory was not being renewed. Founding director Alex Stamos was already gone. Stanford has said the Observatory will continue under new leadership, but no details have been published. The Washington Post says conspiracy theorists have called DiResta and Stamos part of a government-private censorship consortium.

Meanwhile, one of the Observatory’s projects, a joint effort with the University of Washington called the Election Integrity Partnership, has announced, in response to various lawsuits and attacks, that it will not work on the 2024 or future elections. At the same time, Meta is shutting down CrowdTangle next week, removing a research tool that journalists and academics use to study content on Facebook and Instagram. While CrowdTangle will be replaced with Meta Content Library, access will be limited to academics and non-profits, and those who’ve seen it say it’s missing useful data that was available through CrowdTangle.

The concern isn’t the future of any single initiative; it’s the pattern of these things winking out. As work like DiResta’s has shown, the flow of funds financing online political speech (including advertising) is dangerously opaque. We need access and transparency for those who study it, and in real time, not years after the event.

In this, as in so much else, the US continues to clash with the EU, which in December accused exTwitter of breaching its rules with respect to disinformation, transparency, and extreme content. Last month, it formally charged Musk’s site with violating the Digital Services Act, for which Musk could be liable for a fine of up to 6% of exTwitter’s global revenue. Among the EU’s complaints is the lack of a searchable and reliable advertisement repository – again, an important element of the transparency we need. The site’s handling of disinformation and calls to violence during the current UK riots may be added to the investigation.

Musk will be suing *us*, next.

Illustrations: A cartoon caricature of Christina Rossetti by her brother Dante Gabriel Rossetti, 1862, showing her having a tantrum after reading The Times’ review of her poetry (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Invisible Rulers

Invisible Rulers: The People Who Turn Lies Into Reality
by Renée DiResta
Hachette
ISBN: 978-1-54170337-7

For the last week, while violence has erupted in British cities, commentators have asked, among other things: what has social media contributed to the inflammation? Often, the focus lands on specific famous people such as Elon Musk, who posted on exTwitter that the UK is heading for civil war (which basically shows he knows nothing about the UK).

It’s a particularly apt moment to read Renée DiResta‘s new book, Invisible Rulers: The People Who Turn Lies Into Reality. Until June, DiResta was the technical director of the Stanford Internet Observatory, which studies misinformation and disinformation online.

In her book, DiResta, like James Ball in The Other Pandemic and Naomi Klein in Doppelganger, traces how misinformation and disinformation propagate online. Where Ball examined his subject from the inside out (having spent his teenaged years on 4chan) and Klein approached hers from the outside in, DiResta’s study is structural. How do crowds work? What makes propaganda successful? Who drives engagement? What turns online engagement into real world violence?

One reason these questions are difficult to answer is the lack of transparency regarding the money flowing to influencers, who may have audiences in the millions. The trust they build with their communities on one subject, like gardening or tennis statistics, extends to other topics when they stray. Someone making how-to knitting videos one day expresses concern about their community’s response to a new virus, finds engagement, and, eventually, through algorithmic boosting, greater profit in sticking to that topic instead. The result, she writes, is “bespoke realities” that are shaped by recommendation engines and emerge from competition among state actors, terrorists, ideologues, activists, and ordinary people. Then add generative AI: “We can now mass-produce unreality.”

DiResta’s work on this began in 2014, when she was checking vaccination rates in the preschools she was looking at for her year-old son in the light of rising rates of whooping cough in California. Why, she wondered, were there all these anti-vaccine groups on Facebook, and what went on in them? When she joined to find out, she discovered a nest of evangelists promoting lies to little opposition, a common pattern she calls “asymmetry of passion”. The campaign group she helped found succeeded in getting a change in the law, but she also saw that the future lay in online battlegrounds shaping public opinion. When she presented her discoveries to the Centers for Disease Control, however, they dismissed them as “just some people online”. This insouciance would, as she documents in a later chapter, come back to bite during the covid emergency, when the mechanisms already built whirred into action to discredit science and its institutions.

Asymmetry of passion makes those holding extreme opinions seem more numerous than they are. The addition of boosting algorithms and “charismatic leaders” such as Musk or Robert F. Kennedy, Jr (your mileage may vary) adds to this effect. DiResta does a good job of showing how shifts within groups – anti-vaxx groups that also fear chemtrails and embrace flat earth, flat earth groups that shift to QAnon – lead eventually from “asking questions” to “take action”. See also today’s UK.

Like most of us, DiResta is less clear on potential solutions. She gives some thought to the idea of prebunking, but more to requiring transparency: from platforms around content moderation decisions, from influencers around their payment for commercial and political speech, and from governments around their engagement with social media platforms. She also recommends giving users better tools and introducing some friction to force a little more thought before posting.

The Observatory’s future is unclear, as several other key staff have left; Stanford told The Verge in June that the Observatory would continue under new leadership. It is just one of several election integrity monitors whose future is cloudy; in March Facebook announced it would shut down research tool CrowdTangle on August 14. DiResta’s book is an important part of its legacy.

Crowdstricken

This time two weeks ago the media were filled with images from airports clogged with travelers unable to depart because of…a software failure. Not a cyberattack, and not, as in 2017, limited to a single airline’s IT systems failure.

The outage wasn’t just in airports: NHS hospitals couldn’t book appointments, the London Stock Exchange news service and UK TV channel Sky News stopped functioning, and much more. It was the biggest computer system outage not caused by an attack to date, a watershed moment like 1988’s Internet worm.

Experienced technology observers quickly predicted: “bungled software update”. There are prior examples aplenty. In February, an AT&T outage lasted more than 12 hours, spanned 50 US states, Puerto Rico, and the US Virgin Islands, and blocked an estimated 25,000 attempted calls to the 911 emergency service. Last week, the Federal Communications Commission attributed the cause to an employee’s addition of a “misconfigured network element” to expand capacity without following the established procedure of peer review. The resulting cascade of failures was set off by an automated response designed to prevent the misconfigured device’s effects from propagating. AT&T has put new preventative controls in place, and FCC chair Jessica Rosenworcel said the agency is considering how to increase accountability for failing to follow best practice.

Much of this history is recorded in Peter G. Neumann’s ongoing RISKS Forum mailing list. In 2014, an update Apple issued to fix a flaw in a health app blocked users of its then-new iPhone 6 from connecting. In 2004, a failed modem upgrade knocked Cox Communications subscribers offline. My first direct experience was in the 1990s, when for a day CompuServe UK subscribers had to dial Germany to pick up our email.

In these previous cases, though, the individuals affected had a direct relationship with the screw-up company. What’s exceptional about Crowdstrike is that the directly affected “users” were its 29,000 huge customer businesses. It was those companies’ resulting failures that turned millions of us into hostages to technological misfortune.

What’s more, in those earlier outages only one company and its direct customers were involved, and understanding the problem was relatively simple. In the case of Crowdstrike, it was hard to pinpoint the source of the problem at first because the direct effects were scattered (only Windows PCs that were awake to receive Crowdstrike updates were hit) and the indirect effects were widespread.

The technical explanation of what happened, simplified, goes like this: Crowdstrike issued an update to its Falcon security software to block malware it spotted exploiting a vulnerability in Windows. The updated Falcon software sparked system crashes as PCs reacted to protect themselves against potential low-level damage (like a circuit breaker in your house tripping to protect your wiring from overload). Crowdstrike realized the error and pushed out a corrected update 79 minutes later. That fixed machines that hadn’t yet installed the faulty update. The machines that had updated in those 79 minutes, however, were stuck in a doom loop, crashing every time they restarted. Hence the need for manual intervention to remove the faulty update files so the machines could reboot successfully.
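For anyone who wants the doom loop spelled out, here is a minimal, purely conceptual sketch in Python. Everything in it (the file name, the “all zeros means malformed” test, the three restart attempts) is an assumption invented for illustration; it is not Crowdstrike’s or Microsoft’s actual code, just a toy model of why a machine that parses a bad file before it can reach the network needs hands-on repair.

```python
# Toy model of the doom loop: a machine that parses a security content file
# very early in boot crashes before it can fetch the corrected update, so
# only manual removal of the file breaks the cycle. All names and the
# "all zeros means malformed" rule are assumptions made for this sketch.

FAULTY_FILE = "channel-update.sys"  # hypothetical name for the bad content file


def boot(disk: dict[str, bytes]) -> str:
    """Simulate one boot attempt; content is parsed before networking is up."""
    content = disk.get(FAULTY_FILE)
    if content is not None and set(content) == {0}:  # malformed (all-zero) data
        raise RuntimeError("fault while parsing security content at boot")
    return "booted normally; the corrected update can now be fetched"


disk = {FAULTY_FILE: bytes(16)}  # a machine that installed the faulty update

for attempt in range(1, 4):      # every restart hits the same early fault
    try:
        print(boot(disk))
        break
    except RuntimeError as err:
        print(f"boot attempt {attempt}: crash ({err}); restarting...")

del disk[FAULTY_FILE]            # the manual intervention step
print(boot(disk))                # now the machine comes back up
```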

Microsoft initially estimated that 8.5 million PCs were affected – but that’s probably a wild underestimate as the only machines it could count were those that had crash reporting turned on.

The root cause is still unclear. Crowdstrike has said it found a hole in its Content Validator Tool, which should have caught the flaw. Microsoft is complaining that a 2009 interoperability agreement forced on it by the EU required it to allow Crowdstrike’s software to operate at the very low level of Windows where the faulty update could push the systems to crash. It’s wrong, however, to blame companies for enabling automated updates; security protection has to respond to new threats in real time.

The first financial estimates are emerging. Delta Airlines estimates the outage, which borked its crew tracking system for a week, cost it $500 million. CEO Ed Bastian told CNN, “They haven’t offered us anything.” Delta has hired lawyer David Boies, whose high-profile history began with leading the successful 1990s US government prosecution of Microsoft, to file its lawsuit.

Delta will need to take a number. Massachusetts-based Plymouth County Retirement Association has already filed a class action suit on behalf of Crowdstrike shareholders in Texas federal court, where Crowdstrike is headquartered, for misrepresenting its software and its capabilities. Crowdstrike says the case lacks merit.

Lawsuits are likely the only way companies will get recompense unless they have insurance to cover supplier-caused system failures. Like all software manufacturers, Crowdstrike has disclaimed all liability in its terms of use.

In a social media post, Federal Trade Commission chair Lina Khan said, “These incidents reveal how concentration can create fragile systems.”

Well, yes. Technology experts have long warned of the dangers of monocultures that make our world more brittle. The thing is, we’re stuck with them because of scale. There were good reasons why the dozens of early network and operating systems consolidated: it’s simpler and cheaper for hiring, maintenance, and even security. Making our world less brittle will require holding companies – especially those that become significant points of failure – to meet higher standards of professionalism, including product liability for software, and requiring their customers to boost their resilience.

As for Crowdstrike, it is doomed to become that worst of all things for a company: a case study at business schools everywhere.

Illustrations: XKCD’s Dependency comic, altered by Mary Branscombe to reflect Crowdstrike’s reality.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.