The absurdity card

Fifteen years ago, a new incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 2001 9/11 attacks: identity cards. That incoming government was led by David Cameron’s Conservatives, in tandem with Nick Clegg’s Liberal Democrats. The outgoing government was Tony Blair’s. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since they were scrapped at the end of World War II, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions as to how the card could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ED journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995 it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. It’s also worth paying attention to the chapter on costs, which put the scheme’s bill at roughly £11 billion – although the cost of data storage has continued to plummet since.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that real-time online biometric checking would soon be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – for a population of 1.3 million who regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well, but they always come up against the fact that people support the *goals* yet never like the thing itself once they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Horizon / Post Office disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of far cheaper and less invasive solutions for that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying it will offer citizens “countless benefits” in streamlining access to key services, echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads. Immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia.)

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Blur

In 2013, London’s Royal Court Theatre mounted a production of Jennifer Haley’s play The Nether. (Spoiler alert!) In its story of the relationship between an older man and a young girl in a hidden online space, nothing is as it seems…

At last week’s Gikii, Anna-Maria Piskopani and Pavlos Panagiotidis invoked the play to ask whether, given that virtual crimes can create real harm, virtual worlds can help people safely experience the worst parts of themselves without legitimizing them in the real world.

Gikii papers mix technology, law, and pop culture into thought experiments. This year’s official theme was “Technology in its Villain Era?”

Certainly some presentations fit this theme. Paweł Urzenitzok, for example, warned of laws that seem protective but enable surveillance, while varying legal regimes enable arbitrage as companies shop for the most favorable forum. Julia Krämer explored the dark side of app stores, which are getting 30% commissions on a flood of “AI boyfriends” and “perfect wives”. (Not always perfect; users complain that some of them “talk too much”.)

Andelka Phillips warned of the uncertain future risks of handing over personal data highlighted by the recent sale of 23andMe to its founder, Anne Wojcicki. Once the company filed for bankruptcy protection, the class action suits brought against it over the 2023 data breach were put on hold. The sale, she said, ignored concerns raised by the privacy ombudsman. And, Leila Debiasi said, your personal data can be used for AI training after you die.

In another paper, Peter van de Waerdt and Gerard Ritsema van Eck used Doctor Who’s Silents, who disappear from memory when people turn away, to argue that more attention should be paid to enforcing EU laws requiring data portability. What if, for example, consumers could take their Internet of Things device and move it to a different company’s service? Also in that vein was Tim van Zuijlen, who suggested consumers assemble to demand their collective rights to fight back against planned obsolescence. This is already happening; in multiple countries consumers are suing Apple over slowed-down iPhones.

The theme that seemed to emerge most clearly, however, is our increasingly blurred lines, with AI as a prime catalyst. In the before-generative-AI times, The Nether blurred the line between virtual and real. Now, Hedye Tayebi Jazayeri and Mariana Castillo-Hermosilla found gamification in real life – are credit scores so different from game scores? Dongshu Zhou asked if you can ever really “delete yourself” after a meme about you has gone viral and you have become “digital folklore”. In another paper, Lior Weinstein suggested a “right to be nonexistent” – that is, invisible to the institutions and systems that, Kimberly Paradis separately said, increasingly want us all to be legible to them.

For Joanne Wong, real brainrot is a result of the AI-fueled spread of “low-quality” content such as the burst of remixes and parodies of Chinese home designer Little John. At that hyperspeed, copyright becomes irrelevant.

Linnet Taylor and Tjaša Petročnik tested chatbots as therapists, finding that they give confused and conflicting responses. Ask what regulations govern them, and they may say at once that they are not therapists *and* that they are certified by their state’s authority. At least one resisted being challenged: “What are you, a cop or something?” That’s probably the most human-like response one of these things has ever delivered – but it’s still not sentient. It’s just been programmed that way.

Gikii’s particular blend of technology, law, and pop culture always has its surreal side (see last year), as participants attempt to navigate possible futures. This year, it struggled to keep up with the weirdness of real life. In Albania, the government has appointed a chatbot, Diella, as a minister, intending it to cut corruption in procurement. Diella will sit in the cabinet, albeit virtually, and be used to assess the merit of private companies’ responses to public tenders. Kimberly Breedon used this example to point out the conflict of interest inherent in technology companies providing tools to assess – in some cases – themselves. Breedon’s main point was important, given that we are already seeing AI used to speed up and amplify crime. Although everyone talks about using AI to cut corruption, no one is talking about how AI might be used *for* corruption. Asked how that would work, she noted the potential for choosing unrepresentative data or screening out disfavored competitors.

In looking up that Albanian AI minister, I find that the UK has partnered with Microsoft to create a package of AI tools intended to speed up the work of the civil service. Naturally it’s called Humphrey. MPs are at it, too, experimenting with using AI to write their Parliamentary speeches.

All of this is why Syamsuriatina Binti Ishak argued what could be Gikii’s mission statement: we must learn from science fiction and the “what-ifs” it offers to allow us to think our fears through so that “if the worst happens we know how to live in that universe”. Would we have done better as covid arrived if we had paid more attention to the extensive universe of pandemic fiction? Possibly not. As science fiction writer Charlie Stross pointed out at the time, none of those books imagined governments as bumbling as many proved to be.

Illustrations: “Diella”, Albania’s procurement minister chatbot.


Review: Tor

Tor: From the Dark Web to the Future of Privacy
by Ben Collier
MIT Press
ISBN: 978-0-262-54818-2

The Internet began as a decentralized system designed to reroute traffic in case a part of the network was taken out by a bomb. Far from being neutral, the technology intentionally supported the democratic ideals of its time: freedom of expression, freedom of access to information, and freedom to code – that is, build new applications for the Internet without needing permission. Over the decades since, IT has relentlessly centralized. Among the counterweights to this consolidation is Tor, “the onion routing”.

In Tor: From the Dark Web to the Future of Privacy (free for download), Ben Collier recounts a biography that seems to recapitulate those early days – but so far with a different outcome.

Collier traces Tor’s origins to the late Ross Anderson’s 1997 paper The Eternity Service. In it, Anderson proposed a system for making information indelible by replicating it anonymously across a large number of machines of unknown location so that it would become too expensive to delete it (or, in Anderson’s words, “drive up the cost of selective service denial attacks”). That sort of redundancy is fundamental to the way the Internet works for communications. Around the same time, people were experimenting with ways of routing information such as email through multiple anonymized channels in order to protect it from interference – much used, for example, to protect those exposing Scientology’s secrets. Anderson himself indicated the idea’s usefulness in guaranteeing individual liberties.

As Collier writes, in those early days many spoke as though the Internet’s technology was sufficient to guarantee the export of democratic values to countries where they were not flourishing. More recently, I’ve seen arguments that technology is inherently anti-democratic. Both takes attribute to the technology motivations that properly belong to its controllers and owners.

This is where Collier’s biography strikes a different course by showing the many adaptations the project has made since its earliest discussions circa 2001* between Roger Dingledine and Nick Mathewson to avoid familiar trends such as centralization and censorship – think the trends that got us the central-point-of-failure Internet Archive instead of the Eternity Service. Because it began later, Dingledine and Mathewson were able to learn from previous efforts such as PGP and Zero Knowledge Systems to spread strong encryption and bring privacy protection to the mainstream. One such lesson was that the mathematical proofs that dominated cryptography were less important than ensuring usability. At the same time, Collier watches Dingledine and Mathewson resist the temptation to make a super-secure mode and a “stupid mode” that would become the path of least resistance for most users, jeopardizing the security of the entire network.

Most technology biographies focus on one or two founders. Faced with a sprawling system, Collier has resisted that temptation, and devotes a chapter each to the project’s technological development, relay node operators, and maintainers. The fact that these are distinct communities, he writes, has helped keep the project from centralizing. He goes on to discuss the inevitable emergence of criminal uses for Tor, its use as a tool for activism, and finally the future of privacy.

To those who have heard of Tor only as a browser used to access the “dark web”, the notion that it deserves a biography may seem surprising. But the project’s ambitions have grown over time, from privacy as a service, to privacy as a structure, to privacy as a struggle. Ultimately, Collier concludes, Tor is a hack that has penetrated the core of Internet infrastructure, designing around control points. It is, in other words, much closer to the Internet the pioneers said they were building than the Internet of Facebook and Google.

*This originally said “founding in 2006”; that is when the project created today’s formal non-profit organization.

Dethroned

This is a version of a paper that Jon Crowcroft and I delivered at this week’s gikii conference.

She sounded shocked. But also: as though the word she had to pronounce in front of the world’s press was one she had never encountered before and needed to take care to get right. Stan-o-zo-lol. It was 1988, and the Canadian sprinter Ben Johnson had tested positive for it, two days after he had won the gold medal in the 100m men’s race at the Seoul Olympics.

In the years since, that race has become known as the dirtiest race in history. Of the top eight finishers, just one has never been caught doping: US runner Calvin Smith, who was awarded the bronze medal after Johnson was disqualified.

Doping controls were in their infancy then. As athletes and their coaches and doctors moved on from steroids to EPO and human growth hormone, anti-doping scientists, always trailing behind, developed new tests. Recognizing that in-competition testing didn’t catch athletes during training, when doping regimens are most useful, the authorities began testing outside of competition, which in 2004 in turn spawned the “whereabouts” system athletes must use to tell testers where they’re going to be for one hour of every day. Athlete biological passports came into use in 2008 to track blood markers over time and monitor for suspicious changes brought on by drugs for which tests did not yet exist.

The plan was for the 2012 London Olympics to be the cleanest ever staged. Scientists built a lab; they showed off new techniques to the press. Afterwards, they took bows. In a report published in October 2012, independent observers wrote that the organizers “successfully implemented measures to protect the rights of clean athletes”. The report found only eight out of more than 5,000 samples tested positive during the games. Success?

It is against this background that in 2014 the German TV channel MDR, whose journalist Hajo Seppelt specializes in doping investigations, aired an exposé of Russian state-sponsored doping. In the film Icarus, Grigory Rodchenkov, former director of Moscow’s doping control lab, spilled the story of swapped samples and covered-up tests. And 2012? Rodchenkov called it the dirtiest Olympics in history. The UK’s anti-doping lab, he said, missed 126 positive tests.

In April, Esther Addley reported in the Guardian that “the dirtiest race in history” has a new contender: the women’s 1500 meter race at the 2012 London Olympics.

In the runup to 2012, the World Anti-Doping Agency decided to check their work. They arranged to keep athletes’ samples, frozen, for eight years so they could be retested later as dope-testing science improved and expanded. In 2016, reanalysis of 265 samples across five sports from athletes who might participate in the 2016 Rio games found banned substances in samples relating to 23 athletes.

That turned out to be only the beginning. In the years since, athlete after athlete in that race has had their historical results overturned as a result of abnormalities in their biological passports. Just last year – 2024! – one more athlete was disqualified from that race after her frozen sample tested positive for steroids.

The official medal list now awards gold to Maryam Yusuf Jamal (originally the bronze medalist); silver to Abeba Aregawi (upgraded from fifth place to bronze, and then to silver); and bronze to Shannon Rowbury, the sixth-place finisher. Is retroactive fairness possible?

In our gikii paper, Jon Crowcroft and I think not. The original medalists have lost their places in the rolls of honor, but they’ve had a varying number of years to exploit their results while they stood. They got the medal ceremony while in the flush of triumph, the national kudos, and the financial and personal opportunities that go with it.

In addition, Crowcroft emphasizes that runners strategize. You run a race very differently depending on who your competitors are and what you know about how they run. Jamal, Aregawi, and Rowbury would have faced a very different opposition both before and during the final had the anti-doping system worked as it was supposed to, with unpredictable results.

The anti-doping system is essentially a security system, intended to permit some behaviors and eliminate others. Many points of failure are obvious simply from analyzing misplaced incentives. Some substances can’t be detected, which WADA recognizes by barring methods as well as substances. Some that can be detected are overlooked – see, for example, meldonium, which was used by hundreds of Eastern European athletes for a decade or more before WADA banned it. More, it is fundamentally unfair to look at athletes as independent agents of their own destinies. They are the linchpins of ecosystems that include coaches, trainers, doctors, nutritionists, family members, agents, managers, sponsors, and national and international sporting bodies.

In a 2006 article, Bruce Schneier muses on a different unfairness: that years later athletes have less ability to contest findings, as they can’t be retested. That’s partly true. In many cases, athletes can’t be retested even a day later. Instead, their samples are divided into two. The “B” sample is tested for confirmation if the “A” sample produces an adverse analytical finding.

If you want to ban doping, or find out who was using what and when, retrospective testing is a valuable tool. It can certainly bring a measure of peace and satisfaction to the athletes who felt cheated. But it doesn’t bring fairness.

Illustrations: The three top finishers on the day of the women’s 1500 meter race at the 2012 Olympics; on the right is Maryam Yusuf Jamal, later promoted to gold medal.


Remediating monopoly

This week Judge Amit P. Mehta handed down his ruling on remedies in the antitrust case on search. Decided in 2024, this was the first case to find that Google acted illegally as a monopolist. Any decision Mehta made was going to displease a lot of people, and so it has. What the US Department of Justice wanted: Google to divest the Chrome browser and perhaps Android, end the agreements by which Google pays Apple, Samsung, and others to make its search engine the default, and share its search index with competitors. What Mehta says: Google can keep Chrome and Android and go on paying people to make its search engine the default, but it cannot make those agreements exclusive for six years. And it must share search data with competitors.

So Apple gets to keep the roughly $20 billion (the 2022 figure) that comes from Google. Mozilla wins, too: the money it gets from Google is 85% of its income. So do small players such as Opera, as Mike Masnick details at TechDirt. Masnick is a rarity in liking Mehta’s ruling, which he calls “elegant”.

It’s good the judge recognizes the importance of not crippling Google’s browser competitors. But it also shows how distorted and filled with dependencies the market has become.

Most commentators think Google got off very lightly considering it was convicted as a monopolist; it will be allowed to continue doing most of the things the court said it did to exploit its position. Even the Wall Street Journal called the ruling “a notable victory” for both Apple and Google. At Big, where you expect to find anger at monopoly power, Matt Stoller is scathing, arguing that Mehta’s remedies will “obviously” fail, most especially at humbling Google’s leadership so that the company changes its behavior. He compares it – correctly, from memory – to the 1990s Microsoft case. Even though that company avoided being broken up, the case left the leadership averse to risking further regulatory actions.

Google’s appeals are still to come. Also pending are remedies in the *other* case that convicted Google of monopoly behavior, this one in adtech. By the time all is settled, as Mehta writes in his ruling, AI could have profoundly changed the market. This belief defies what former FTC chair Lina Khan wrote to kick antitrust enforcement into a new era. In her career-making 2017 paper on Amazon, she argued that the era in which powerful companies were routinely unseated by the two guys in a garage Bill Gates feared in the late 1990s was over. The big technology companies have become so wealthy they can buy up any startup that seems like it might become a threat.

Mehta is comparing the arrival of generative AI chatbots to those earlier disruptions. Recent studies don’t necessarily agree. Tim Bajarin reports at Forbes that a two-year study by One Little Web finds that as of March chatbots accounted for just 2.96% of searches – and among those chatbots, Google’s Gemini is number three, only a little behind DeepSeek – though a *long* way behind leader ChatGPT (1.7 billion queries versus 47.7 billion).

Expecting “pre-monopolized” generative AI to change the market is a gamble, and possibly a bigger one than Mehta thinks. By the time Google exhausts its appeals, it could indeed have overwhelmed the business of general search and shoved Google up its YouTube. But equally, it could have fizzled entirely.

At his blog, Ed Zitron has compiled a list of all the reasons why AI is a bubble getting ready to go volcanic all over everyone. Among his references is the recent MIT study that found that 95% of US companies investing in generative AI derive no benefit. To be fair, the study blames lack of integration and organizational support rather than the quality of large language models or the technology itself. At Forbes, Arafat Kabir suggests that MIT has measured the wrong thing, failing to recognize how many employees and others are using generative AI to automate small, routine tasks. A friend tells me he uses it to start research on new topics by having it compile a list of sources and references to read further, the sort of assignment he might give a junior researcher could he afford one.

But Zitron is not alone. At the LA Times, Michael Hiltzik argues that the AI bubble began losing air on August 7, when OpenAI launched the latest version of ChatGPT to a worldwide response of “meh”. As predicted here two years ago, generative AI may be hitting a wall in terms of improvement. Hiltzik compares what he thinks is coming to the “dot-com mirage”.

Thing is, while there were many mirages connected with the dot-com boom, and the bubble did burst, the infrastructure built out to support it was no mirage; it went on to support the massive Internet growth that’s happened since.

But perhaps disruption will come from an entirely different direction. This week Mariella Moon reported at The Verge that Switzerland has released Apertus, an open source AI language model that its public-institution creators, the Swiss Federal Institute of Technology Lausanne (EPFL), ETH Zurich, and the Swiss National Supercomputing Centre (CSCS), say was trained solely on publicly available data that conforms to copyright and data protection laws. Maybe the new “two guys in a garage” is a national government.

Illustrations: “The kind of anti-trust legislation that is needed”, by J.S. Pughe (via Library of Congress).
