The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the telling bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer himself has been in India this week, taking the opportunity to study its biometric ID system, Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. Trying to get his proposal through, Blair moved on to preventing illegal working and curbing identity theft. For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (referring to the Post Office Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01E04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Software is still forever

On October 14, a few months after the tenth anniversary of its launch, Microsoft will end support for Windows 10. That is, Microsoft will no longer issue feature or security updates or provide technical support, and everyone is supposed to either upgrade their computers to Windows 11 or, if Microsoft’s installer deems the hardware inadequate, replace them with newer models. People who “need more time”, in the company’s phrasing, can buy a year’s worth of security updates. Either way, Microsoft profits at our expense.

In 2014, Microsoft similarly end-of-lifed 13-year-old Windows XP. At the time, many were unsympathetic to complaints, thinking it unreasonable to expect a company to maintain software for that long. Yet it was obvious even then that software lives on, with or without support, for far longer than people expect, and also that trashing millions of functional computers was stupidly wasteful. Microsoft is giving Windows 10 a *shorter* life, which is rather obviously the wrong direction for a planet drowning in electronic waste.

XP’s end came at a time when the computer industry was transitioning from adolescence to maturity. As long as personal computing was constrained by the limited capabilities of hardware, and research and development was improving them at a fast pace, a software company like Microsoft could count on frequent new sales. By 2014, that happy time had ended, and although computers continue to add power and speed, it’s not coming back. The same pattern has been repeated with phones and cameras, which no longer improve on an 18-month cycle as they did in the 2010s.

For the vast majority, there’s no reason to replace their old machine unless a non-replaceable part is failing – and there should be less of that as manufacturers are forced to embrace repairability. Significantly, there’s less and less difference for many of us if we keep the old hardware and switch to Linux, eliminating Microsoft entirely.

Those fast-moving days were real obsolescence. What we have now is what we used to call “planned obsolescence”. That is, *forced* obsolescence that companies impose on us because it’s convenient and profitable for *them*.

This time round, people are more critical, not least because of the vast amounts of ewaste being generated. The Public Interest Research Group has written an open letter asking people to petition Microsoft to extend free support for Windows 10. As Ed Bott explains at ZDNet, you do have the option of kicking the can down the road by paying for updates for another three years.

The other antisocial side of terminating free security updates is that millions of those still-functional machines will remain in use, and will be increasingly insecure as new vulnerabilities are discovered and left unpatched.

Simultaneously, Windows is enshittifying: it’s harder to run Windows without a Microsoft login, to avoid stupid gewgaws and unwanted news headlines, and to turn off its “Copilot AI”. Tom Warren reports at The Verge that Microsoft wants to turn Copilot into an agent that can book restaurants and control its Edge browser. There are, it appears, ways to defeat all this in Windows 11, but for how long?

In a piece on solar technology, Cory Doctorow outlines the process by which technology companies seize control once they can no longer rely on consumer demand to drive sales. They lock down their technology if they can, lock in customers, add advertising, and block market entry, claiming safety and/or security make it necessary. They write and lobby for legislation that enshrines their advantage. And they use technological changes to render past products obsolete. Many think this is the real story behind the insistence on forcing unwanted “AI” features into everything: it’s the one thing they can do to make their offerings sound new.

Seen in that light, the rush to build “AI” into everything becomes a rush to find a way to force people to buy new stuff. The problem is that – it feels like – most people don’t see much benefit in it, and go around turning off the AI features that are forced on them. Microsoft’s Recall feature, which takes a screen snapshot every few seconds, was so controversial at launch that the company rolled it back – for a while, anyway.

Carelessness about ewaste is everywhere, particularly with respect to the Internet of Things. This week: Logitech’s Pop smart home buttons. At least when Google ended support for older Nest thermostats they could go on working as “dumb” thermostats (which honestly seems like the best kind).

Ewaste is getting a whole lot worse when it desperately needs to be getting a whole lot better.

***

In the ongoing rollout of the Online Safety Act and age verification update, at 404 Media, Joseph Cox reports that Discord has become the first site reporting a hack of age verification data. Hackers have collected data pertaining to 70,000 users, including selfies, identity documents, email addresses, approximate residences, and so on, and are trying to extort Discord, which says the hackers breached one of its third-party vendors that handles age-related appeals. Security practitioners warned about this from the beginning.

In addition, Ofcom has launched a new consultation for the next round of Online Safety Act enforcement. Up next are livestreaming and algorithmic recommendations; the Open Rights Group has an explainer, as does lawyer Graham Smith. The consultation closes on October 20.

Illustrations: One use for old computers – movie stardom, as here in Brazil.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Undue process

To the best of my knowledge, Imgur is the first mainstream company to quit the UK in response to the Online Safety Act (though many US news sites remain unavailable due to 2018’s General Data Protection Regulation). Widely used to host pictures for reuse on web forums and social media, Imgur shut off UK connections on Tuesday. In a statement on Wednesday, the company said UK users can still exercise their data protection rights. That is, Imgur will reply within the statutory timeframe to requests for copies of our data or for accounts to be deleted.

In this case, the push came from the Information Commissioner’s Office. In a statement, the ICO explains that on September 10 it notified Imgur’s owner, MediaLab AI, of its provisional findings from its previously announced investigation into “how the company uses children’s information and its approach to age assurance”. The ICO proposed to fine Imgur. Imgur promptly shut down UK access. The ICO’s statement says departure changes nothing: “We have been clear that exiting the UK does not allow an organisation to avoid responsibility for any prior infringement of data protection law, and our investigation remains ongoing.”

The ICO calls Imgur’s departure “a commercial decision taken by the company”. While that’s true, EU and UK residents have dealt for years with unwanted cookie consent banners because companies subject to data protection laws have engaged in malicious compliance intended to spark a rebellion against the law. So: wash.

Many individual users stick to Imgur’s free tier, but it profits from subscriptions and advertising. MediaLab AI bought it in 2021, and uses it as a platform to mount advertising campaigns at scale for companies like Kraft-Heinz and Alienware.

Meanwhile, UK users’ Imgur accounts are effectively hostages. We don’t want lawless companies. We also don’t want bad laws – or laws that are badly drafted and worse implemented. Children’s data should be protected – but so should everyone’s. There remains something fundamentally wrong with having a service many depend upon yanked with no notice.

Companies’ threats to leave the market rather than comply with the law are often laughable – see for example Apple’s threat to leave the EU if the bloc doesn’t repeal the Digital Markets Act. This is the rare occasion when a company has actually done it (although presumably it can turn access back on at any time). If there’s a lesson here, it may be that without EU membership Britain is now too small for foreign companies to bother complying with its laws.

***

Boundary disputes and due process are also the subject of a lawsuit launched in the US against Ofcom. At the end of August, 4chan and Kiwi Farms filed a complaint in a Washington, DC federal court against Ofcom, claiming the regulator is attempting to censor them and using the OSA to “target the free speech rights of Americans”.

We hear less about 4chan these days, but in his book The Other Pandemic, journalist James Ball traces much of the spread of QAnon and other conspiracy theories to the site. In his account, these memes start there, percolate through other social media, and become mainstream and monetized on YouTube. Kiwi Farms is equally notorious for targeted online and offline harassment.

The argument mooted by the plaintiffs’ lawyer Preston Byrne is that their conduct is lawful within the jurisdictions where they’re based and that UK and EU countries seeking to enforce their laws should do so through international treaties and courts. There’s some precedent for the first bit, albeit in a different context. In 2010, the New York State legislature and then the US Congress passed the Libel Tourism Protection Act. Under it, US courts are prevented from enforcing British libel judgments if the rulings would not stand in a US court. The UK went on to modify its libel laws in 2013.

Any country has the sovereignty to demand that companies active within its borders comply with its laws, even laws that are widely opposed, and to punish them if they don’t, which is another thing 4chan’s lawyers are complaining about. The question the Internet has raised since the beginning (see also the Apple case and, before it, the 1996 case United States v. Thomas) is where the boundary is and how it can be enforced. 4chan is trying to argue that the penalties Ofcom provisionally intends to apply are part of a campaign of targeted harassment of US technology companies. Odd to see *4chan* adopting the technique long ago advocated by staid, old IBM: when under attack, wrap yourself in the American flag.

***

Finally, in the consigned-to-history category, AOL shut down dialup on September 30. I recall traveling with a file of all the dialup numbers the even earlier service, CompuServe, maintained around the world. It was, in its time, a godsend. (Then AOL bought up the service, its biggest competitor before the web, and shut it down, seemingly out of spite.) For this reason, my sympathies are with the 124,000 US users the US Census Bureau says still rely on dial-up – only a few thousand of them were paying for AOL, per CNBC – and the uncounted others elsewhere. It’s easy to forget when you’re surrounded by wifi and mobile connections that Internet access remains hard for many people.

Elsewhere this week: Childproofing the Internet, at Skeptical Inquirer.

Illustrations: Imgur’s new UK home page.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: The AI Con

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
By Emily Bender and Alex Hanna
HarperCollins
ISBN: 978-0-06-341856-1

Enormous sums of money are sloshing around AI development. Amazon is handing $8 billion to Anthropic. Microsoft is adding $1 billion worth of Azure cloud computing to its existing massive stake in OpenAI. And Nvidia is pouring $100 billion in the form of chips into OpenAI’s project to build a gigantic data center, while Oracle is borrowing $100 billion in order to give OpenAI $300 billion worth of cloud computing. Current market *revenue* projections? $85 billion in 2029. So they’re all fighting for control over the Next Big Thing, which projections suggest will never pay off. Warnings that the AI bubble may be about to splatter us all are coming from Cory Doctorow and Ed Zitron – and the Daily Telegraph, The Atlantic, and the Wall Street Journal. Bain & Company says the industry needs another $800 billion in investment now and $2 trillion by 2030 to meet demand.

Many talk about the bubble and economic consequences if it bursts. Few talk about the opportunity costs as AI sucks money and resources away from other things that might be more valuable. In The AI Con, linguistics professor Emily Bender and DAIR Institute director of research Alex Hanna provide an exception. Bender is one of the four authors of the seminal 2021 paper On the Dangers of Stochastic Parrots, which arguably founded AI-skepticism.

In the book, the authors review much that’s familiar: the many layers of humans required to code, train, correct, and mind “AI” – the programmers, designers, data labelers, and raters, along with the humans waiting to take over when the AI fails. They also go into the water, energy, and labor demands of data centers and present approaches to AI.

Crucially, they avoid both doomerism and boosterism, which they understand as two sides of the same coin. Both the fully automated hellscape Doomers warn against and the Boosters’ world governed by a benign synthetic intelligence ignore the very real harms taking place at present. Doomers promote “AI safety” using “fake scenarios” meant to frighten us. Think HAL in the movie 2001: A Space Odyssey or Nick Bostrom’s paperclip maximizer. Boosters rail against the constraints implicit in sustainability, trust and safety organizations within technology companies, and government regulation. We need, Bender and Hanna write, to move away from speculative risks and toward working on the real problems we have. Hype, they conclude, doesn’t have to be true to do harm.

The book ends with a chapter on how to resist hype. Among their strategies: persistently ask questions such as how a system is evaluated, who is harmed and who benefits, how the system was developed and with what kind of data and labor practices. Avoid language that humanizes the system – no “hallucinations” for errors. Advocate for transparency and accountability, and resist the industry’s claims that the technology is so new there is no way to regulate it. The technology may be new, but the principles are old. And, when necessary, just say no and resist the narrative that its progress is inevitable.

The absurdity card

Fifteen years ago, a new incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 9/11 attacks in 2001: identity cards. That incoming government was led by David Cameron’s Conservatives, in tandem with Nick Clegg’s Liberal Democrats. The outgoing government was Gordon Brown’s, carrying on the policy Tony Blair had championed. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since the wartime cards were scrapped in 1952, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions for how the card could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ED journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995, it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. Although the cost of data storage has continued to plummet, it’s also worth paying attention to the chapter on costs, which estimated the scheme at roughly £11 billion.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that real-time online biometric checking would soon be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – for a population of 1.3 million who regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well but always come up against the fact that people support the *goals*, but never like the thing when they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Horizon / Post Office disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of a far cheaper and less invasive solution for that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying the card will offer citizens “countless benefits” in streamlining access to key services, echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads. Immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Blur

In 2013, London’s Royal Court Theatre mounted a production of Jennifer Haley’s play The Nether. (Spoiler alert!) In its story of the relationship between an older man and a young girl in a hidden online space, nothing is as it seems…

At last week’s Gikii, Anna-Maria Piskopani and Pavlos Panagiotidis invoked the play to ask whether, given that virtual crimes can create real harm, virtual worlds can help people safely experience the worst parts of themselves without legitimizing them in the real world.

Gikii papers mix technology, law, and pop culture into thought experiments. This year’s official theme was “Technology in its Villain Era?”

Certainly some presentations fit this theme. Paweł Urzenitzok, for example, warned of laws that seem protective but enable surveillance, while varying legal regimes enable arbitrage as companies shop for the most favorable forum. Julia Krämer explored the dark side of app stores, which are getting 30% commissions on a flood of “AI boyfriends” and “perfect wives”. (Not always perfect; users complain that some of them “talk too much”.)

Andelka Phillips warned of the uncertain future risks of handing over personal data highlighted by the recent sale of 23andMe to its founder, Anne Wojcicki. Once the company filed for bankruptcy protection, the class action suits brought against it over the 2023 data breach were put on hold. The sale, she said, ignored concerns raised by the privacy ombudsman. And, Leila Debiasi said, your personal data can be used for AI training after you die.

In another paper, Peter van de Waerdt and Gerard Ritsema van Eck used Doctor Who’s Silents, who disappear from memory when people turn away, to argue that more attention should be paid to enforcing EU laws requiring data portability. What if, for example, consumers could take their Internet of Things device and move it to a different company’s service? Also in that vein was Tim van Zuijlen, who suggested consumers assemble to demand their collective rights to fight back against planned obsolescence. This is already happening; in multiple countries consumers are suing Apple over slowed-down iPhones.

The theme that seemed to emerge most clearly, however, is our increasingly blurred lines, with AI as a prime catalyst. In the before-generative-AI times, The Nether blurred the line between virtual and real. Now, Hedye Tayebi Jazayeri and Mariana Castillo-Hermosilla found gamification in real life – are credit scores so different from game scores? Dongshu Zhou asked if you can ever really “delete yourself” after a meme about you has gone viral and you have become “digital folklore”. In another paper, Lior Weinstein suggested a “right to be nonexistent” – that is, invisible to the institutions and systems that, as Kimberly Paradis separately said, increasingly want us all to be legible to them.

For Joanne Wong, real brainrot is a result of the AI-fueled spread of “low-quality” content such as the burst of remixes and parodies of Chinese home designer Little John. At that hyperspeed, copyright becomes irrelevant.

Linnet Taylor and Tjaša Petročnik tested chatbots as therapists, finding that they give confused and conflicting responses. Ask what regulations govern them, and they may say at once that they are not therapists *and* that they are certified by their state’s authority. At least one resisted being challenged: “What are you, a cop or something?” That’s probably the most human-like response one of these things has ever delivered – but it’s still not sentient. It’s just been programmed that way.

Gikii’s particular blend of technology, law, and pop culture always has its surreal side (see last year), as participants attempt to navigate possible futures. This year, it struggled to keep up with the weirdness of real life. In Albania, the government has appointed a chatbot, Diella, as a minister, intending it to cut corruption in procurement. Diella will sit in the cabinet, albeit virtually, and be used to assess the merit of private companies’ responses to public tenders. Kimberly Breedon used this example to point out the conflict of interest inherent in technology companies providing tools to assess – in some cases – themselves. Breedon’s main point was important, given that we are already seeing AI used to speed up and amplify crime. Although everyone talks about using AI to cut corruption, no one is talking about how AI might be used *for* corruption. Asked how that would work, she noted the potential for choosing unrepresentative data or screening out disfavored competitors.

In looking up that Albanian AI minister, I find that the UK has partnered with Microsoft to create a package of AI tools intended to speed up the work of the civil service. Naturally it’s called Humphrey. MPs are at it, too, experimenting with using AI to write their Parliamentary speeches.

All of this is why Syamsuriatina Binti Ishak argued what could be Gikii’s mission statement: we must learn from science fiction and the “what-ifs” it offers to allow us to think our fears through so that “if the worst happens we know how to live in that universe”. Would we have done better as covid arrived if we had paid more attention to the extensive universe of pandemic fiction? Possibly not. As science fiction writer Charlie Stross pointed out at the time, none of those books imagined governments as bumbling as many proved to be.

Illustrations: “Diella”, Albania’s procurement minister chatbot.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Tor

Tor: From the Dark Web to the Future of Privacy
by Ben Collier
MIT Press
ISBN: 978-0-262-54818-2

The Internet began as a decentralized system designed to reroute traffic in case a part of the network was taken out by a bomb. Far from being neutral, the technology intentionally supported the democratic ideals of its time: freedom of expression, freedom of access to information, and freedom to code – that is, build new applications for the Internet without needing permission. Over the decades since, IT has relentlessly centralized. Among the counterweights to this consolidation is Tor, “the onion routing”.

In Tor: From the Dark Web to the Future of Privacy (free to download), Ben Collier recounts a biography that seems to recapitulate those early days – but so far with a different outcome.

Collier traces Tor’s origins to the late Ross Anderson‘s 1997 paper The Eternity Service. In it, Anderson proposed a system for making information indelible by replicating it anonymously across a large number of machines of unknown location so that it would become too expensive to delete it (or, in Anderson’s words, “drive up the cost of selection service denial attacks”). That sort of redundancy is fundamental to the way the Internet works for communications. Around the same time, people were experimenting with ways of routing information such as email through multiple anonymized channels in order to protect it from interference – much used, for example, to protect those exposing Scientology’s secrets. Anderson himself indicated the idea’s usefulness in guaranteeing individual liberties.

As Collier writes, in those early days many spoke as though the Internet’s technology was sufficient to guarantee the export of democratic values to countries where they were not flourishing. More recently, I’ve seen arguments that technology is inherently anti-democratic. Both takes attribute to the technology motivations that properly belong to its controllers and owners.

This is where Collier’s biography strikes a different course by showing the many adaptations the project has made since its earliest discussions circa 2001* between Roger Dingledine and Nick Mathewson to avoid familiar trends such as centralization and censorship – think the trends that got us the central-point-of-failure Internet Archive instead of the Eternity Server. Because it began later, Dingledine and Mathewson were able to learn from previous efforts such as PGP and Zero Knowledge Systems to spread strong encryption and bring privacy protection to the mainstream. One such lesson was that the mathematical proofs that dominated cryptography were less important than ensuring usability. At the same time, Collier watches Dingledine and Mathewson resist the temptation to make a super-secure mode and a “stupid mode” that would become the path of least resistance for most users, jeopardizing the security of the entire network.

Most technology biographies focus on one or two founders. Faced with a sprawling system, Collier has resisted that temptation, and devotes a chapter each to the project’s technological development, relay node operators, and maintainers. The fact that these are distinct communities, he writes, has helped keep the project from centralizing. He goes on to discuss the inevitable emergence of criminal uses for Tor, its use as a tool for activism, and finally the future of privacy.

To those who have heard of Tor only as a browser used to access the “dark web”, the notion that it deserves a biography may seem surprising. But the project’s ambitions have grown over time, from privacy as a service, to privacy as a structure, to privacy as a struggle. Ultimately, he concludes, Tor is a hack that has penetrated the core of Internet infrastructure, designing around control points. It is, in other words, much closer to the Internet the pioneers said they were building than the Internet of Facebook and Google.

*This originally said “founding in 2006”; that is when the project created today’s formal non-profit organization.

Dethroned

This is a version of a paper that Jon Crowcroft and I delivered at this week’s Gikii conference.

She sounded shocked. But also: as though the word she had to pronounce in front of the world’s press was one she had never encountered before and needed to take care to get right. Stan-o-zo-lol. It was 1988, and the Canadian sprinter Ben Johnson had tested positive for it, two days after he had won the gold medal in the men’s 100m race at the Seoul Olympics.

In the years since, that race has become known as the dirtiest race in history. Of the top eight finishers, just one has never been caught doping: US runner Calvin Smith, who was awarded the bronze medal after Johnson was disqualified.

Doping controls were in their infancy then. As athletes and their coaches and doctors moved on from steroids to EPO and human growth hormone, anti-doping scientists, always trailing behind, developed new tests. Recognizing that in-competition testing didn’t catch athletes during training, when doping regimens are most useful, the authorities began testing outside of competition, which in 2004 in turn spawned the “whereabouts” system athletes must use to tell testers where they’re going to be for one hour of every day. Athlete biological passports came into use in 2008 to track blood markers over time and monitor for suspicious changes brought about by drugs for which no test yet exists.

The plan was for the 2012 London Olympics to be the cleanest ever staged. Scientists built a lab; they showed off new techniques to the press. Afterwards, they took bows. In a report published in October 2012, independent observers wrote that the organizers “successfully implemented measures to protect the rights of clean athletes”. The report found that only eight out of more than 5,000 samples tested positive during the games. Success?

It is against this background that in 2014 the German TV channel MDR, whose journalist Hajo Seppelt specializes in doping investigations, aired a documentary exposing systematic Russian state-sponsored doping. In the 2017 documentary Icarus, Grigory Rodchenkov, former director of Moscow’s doping control lab, spilled the story of swapped samples and covered-up tests. And 2012? Rodchenkov called it the dirtiest Olympics in history. The UK’s anti-doping lab, he said, missed 126 positive tests.

In April, Esther Addley reported in the Guardian that “the dirtiest race in history” has a new contender: the women’s 1500 meter race at the 2012 London Olympics.

In the runup to 2012, the World Anti-Doping Agency decided to check its work. It arranged to keep athletes’ samples, frozen, for eight years so they could be retested later as dope-testing science improved and expanded. In 2016, reanalysis of 265 samples across five sports from athletes who might participate in the 2016 Rio games found banned substances in samples relating to 23 athletes.

That turned out to be only the beginning. In the years since, athlete after athlete in that race has had their historical results overturned as a result of abnormalities in their biological passports. Just last year – 2024! – one more athlete was disqualified from that race after her frozen sample tested positive for steroids.

The official medal list now awards gold to Maryam Yusuf Jamal (originally the bronze medalist); silver to Abeba Aregawi (upgraded from fifth place to bronze, and then to silver); and bronze to Shannon Rowbury, the sixth-place finisher. Is retroactive fairness possible?

In our Gikii paper, Jon Crowcroft and I think not. The original medalists have lost their places in the rolls of honor, but they’ve had a varying number of years to exploit their results while they stood. They got the medal ceremony while in the flush of triumph, the national kudos, and the financial and personal opportunities that go with it.

In addition, Crowcroft emphasizes that runners strategize. You run a race very differently depending on who your competitors are and what you know about how they run. Jamal, Aregawi, and Rowbury would have faced a very different opposition both before and during the final had the anti-doping system worked as it was supposed to, with unpredictable results.

The anti-doping system is essentially a security system, intended to permit some behaviors and eliminate others. Many points of failure are obvious simply from analyzing misplaced incentives. Some substances can’t be detected, which WADA recognizes by barring methods as well as substances. Some that can be detected are overlooked – see, for example, meldonium, which was used by hundreds of Eastern European athletes for a decade or more before WADA banned it. More, it is fundamentally unfair to look at athletes as independent agents of their own destinies. They are the linchpins of ecosystems that include coaches, trainers, doctors, nutritionists, family members, agents, managers, sponsors, and national and international sporting bodies.

In a 2006 article, Bruce Schneier muses on a different unfairness: that years later athletes have less ability to contest findings, as they can’t be retested. That’s partly true. In many cases, athletes can’t be retested even a day later. Instead, their samples are divided into two. The “B” sample is tested for confirmation if the “A” sample produces an adverse analytical finding.

If you want to ban doping, or find out who was using what and when, retrospective testing is a valuable tool. It can certainly bring a measure of peace and satisfaction to the athletes who felt cheated. But it doesn’t bring fairness.

Illustrations: The three top finishers on the day of the women’s 1500 meter race at the 2012 Olympics; on the right is Maryam Yusuf Jamal, later promoted to gold medal.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Remediating monopoly

This week Judge Amit P. Mehta handed down his ruling on remedies in the antitrust case on search. Decided in 2024, this was the first case to find that Google acted illegally as a monopolist. Any decision Mehta made was going to displease a lot of people, and so it has. What the US Department of Justice wanted: Google to divest the Chrome browser and perhaps Android, end the agreements by which Apple, Samsung, and others pay Google to make its search engine the default, and share its search index with competitors. What Mehta says: Google can keep Chrome and Android and go on paying people to make its search engine the default, but it cannot make those agreements exclusive for six years. And it must share search data with competitors.

So Apple gets to keep the $20 billion or so (as of 2022) that comes from Google. Mozilla wins, too: the money it gets from Google is 85% of its income. So do small players such as Opera, as Mike Masnick details at TechDirt. Masnick is a rarity in liking Mehta’s ruling, which he calls “elegant”.

It’s good the judge recognizes the importance of not crippling Google’s browser competitors. But it also shows how distorted and filled with dependencies the market has become.

Most commentators think Google got off very lightly considering it was convicted as a monopolist and it will be allowed to continue doing most of the things the court said it did to exploit its position. Even the Wall Street Journal called the ruling “a notable victory” for both Apple and Google. At Big, where you expect to find anger at monopoly power, Matt Stoller is scathing, arguing that Mehta’s remedies will “obviously” fail, most especially at humbling Google’s leadership so that the company changes its behavior. He compares it – correctly, from memory – to the 1995 Microsoft case. Even though that company avoided being broken up, the case left the leadership averse to risking further regulatory actions.

Google’s appeals are still to come. Also pending are remedies in the *other* case that convicted Google of monopoly behavior, this one in adtech. By the time all is settled, as Mehta writes in his ruling, AI could have profoundly changed the market. This belief defies what former FTC chair Lina Khan wrote to kick antitrust enforcement into a new era. In her career-making 2017 paper on Amazon, she argued that the era in which powerful companies were routinely unseated by the two guys in a garage Bill Gates feared in the late 1990s was over. The big technology companies have become so wealthy they can buy up any startup that seems like it might become a threat.

Mehta is comparing the arrival of generative AI chatbots to those earlier disruptions. Recent studies don’t necessarily agree. Tim Bajarin reports at Forbes that a two-year study by One Little Web finds that as of March chatbots accounted for just 2.96% of searches – and among those chatbots, Google’s Gemini is number three, only a little behind DeepSeek – though a *long* way behind leader ChatGPT (1.7 billion queries versus 47.7 billion).

Expecting “pre-monopolized” generative AI to change the market is a gamble, and possibly a bigger one than Mehta thinks. By the time Google exhausts its appeals, generative AI could indeed have overwhelmed the business of general search and shoved Google up its YouTube. But equally, it could have fizzled entirely.

At his blog, Ed Zitron has compiled a list of all the reasons why AI is a bubble getting ready to go volcanic all over everyone. Among his references is the recent MIT study that found that 95% of US companies investing in generative AI derive no benefit. To be fair, the study blames lack of integration and organizational support rather than the quality of large language models or the technology itself. At Forbes, Arafat Kabir suggests that MIT has measured the wrong thing, failing to recognize how many employees and others are using generative AI to automate small, routine tasks. A friend tells me he uses it to start research on new topics by having it compile a list of sources and references to read further, the sort of assignment he might give a junior researcher could he afford one.

But Zitron is not alone. At the LA Times, Michael Hiltzik argues that the AI bubble began losing air on August 7, when OpenAI launched the latest version of ChatGPT to a worldwide response of “meh”. As predicted here two years ago, generative AI may be hitting a wall in terms of improvement. Hiltzik compares what he thinks is coming to the “dot-com mirage”.

Thing is, while there were many mirages connected with the dot-com boom, and there was a bubble that burst, the infrastructure that was built out to support it was no mirage; it went on to support the massive Internet growth that’s happened since.

But perhaps disruption will come from an entirely different direction. This week Mariella Moon reported at The Verge that Switzerland has released Apertus, an open source AI language model that its public-institution creators, the Swiss Federal Institute of Technology in Lausanne (EPFL), ETH Zurich, and the Swiss National Supercomputing Centre (CSCS), say was trained solely on publicly available data that conforms to copyright and data protection laws. Maybe the new “two guys in a garage” is a national government.

Illustrations: “The kind of anti-trust legislation that is needed”, by J.S. Pughe (via Library of Congress).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Passing the Uncanny Valley

A couple of weeks ago, the Greenwich Skeptics in the Pub played host to Sophie Nightingale, who studies the psychology of AI deepfakes. The particular project she spoke about was an experiment in whether people can be trained to be better at distinguishing them from real images.

In Nightingale’s experiments, she carefully matched groups of real images to synthetic ones, first created by generative adversarial networks (GANs), later by diffusion models (GeeksforGeeks has an explainer of the difference), and recorded the raters’ demographics. Participants struggled to tell the real faces from the synthetic ones.

Then the humans were given some training in what to look for to detect fakes and the experiment was rerun with new sets of faces. The bad news: the training made a little difference, but not much. She went on to do similar experiments with diffusion images.

Nightingale has gone on to do some cross-modal experiments, including audio as well as images, following the incident in which New Hampshire voters received robocalls from a faked Joe Biden intended to discourage voting in the January 2024 primary. In the audio experiment, she played the test subjects very short snippets. Played for us in the pub, the snippets were very hard to classify as real or fake, and her experimental subjects did no better. I would expect longer clips to be more identifiable as fake. The Biden call succeeded in part because that type of fake had never been tried before. Now, voters, at least in New Hampshire, will know it’s possible that the call they’re getting is part of a newer type of disinformation campaign aimed at suppressing their votes.

In another experiment, she asked participants to rate the trustworthiness of the facial images they were shown, and was dismayed when they rated the synthetic faces slightly (7.7%) higher than the real ones. In the resulting paper for the Journal of Vision, she hypothesizes that this may be because synthetic faces tend to look more like “average” faces, which tend to be rated higher in trustworthiness, even if they’re not the most attractive.

Overall, she concludes that both still images and voice have “passed the Uncanny Valley”, and video will soon follow. In the past, I’ve chosen optimism about this sort of thing, on the basis that earlier generations have been fooled by technological artifacts that couldn’t fool us now for a second. The Cottingley Fairies look ridiculous after generations of knowledge of photography. On the other hand, Johannes Vermeer’s Girl with a Pearl Earring looks more real than modern deepfakes, even though the subject is generally described as imaginary. So it’s possible to think of it as a “deepfake”, painted in oils in the 17th century.

Fakes have always been with us. What generative AI has done to change this landscape is to democratize and scale their creation, just as it’s amping up the scale and speed of cyber attacks. It’s no longer necessary to be even barely competent; the tools keep getting easier.

Listening to Nightingale, it seems most likely that work like that in progress by an audience member, identifying the technological artifacts that mark fakes, will prove to be the right way forward. If those differences can be reliably identified, they could be built into technological tools that spot indicators we can’t perceive directly. If something like that can be embedded into devices – phones, eyeglasses, wristwatches, laptops – to spot and filter out fakes in real time, we should be able to regain some ability to trust what we see.

There are some obvious problems with this hoped-for future. Some people will continue to seek to exploit fakes; some may prefer them. The most likely outcome will be an arms race like that surrounding email spam and other battles between malware producers and security people. Still, it’s the first approach that seems to offer a practical solution to coping with a vastly diminished ability to know what’s real and what isn’t.

***

On the Internet your home always leaves you, part 4,563. Twenty-two-year-old blogging site Typepad will disappear in a few weeks. To those of us who have read blogs ever since they began, this news is shocking, like someone’s decided to tear down an old community church. Yes, the congregation has shrunk and aged, and it’s drafty and built on creaking old technology (in Typepad’s case, Movable Type), but it’s part of shared local history. Except it isn’t, because, as Wikipedia documents, corporate musical chairs means it’s now owned by private equity. Apparently it’s been closed to new signups since 2020, and its bloggers are now being told to move their sites before everything is deleted in September. It feels like the stars of the open web are winking out, one by one.

On the Internet everything is forever, but everything is also ephemeral. Ironically, the site’s marketing slug still reads: “Typepad is the reliable, flexible blogging platform that puts the publisher in control.”

Illustrations: “Girl with a Pearl Earring”, painted by Johannes Vermeer circa 1665.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.