The absurdity card

Fifteen years ago, an incoming government swept away a policy its immediate predecessors had been pushing since shortly after the 9/11 attacks in 2001: identity cards. That incoming government was led by David Cameron’s Conservatives, in tandem with Nick Clegg’s Liberal Democrats. The outgoing government was Tony Blair’s. When Keir Starmer’s reinvented Labour party swept the 2024 polls, probably few of us expected he would adopt Blair’s old policies so soon.

But here we are: today’s papers announce Starmer’s plan for mandatory “digital ID”.

Fifteen years is an unusually long time between ID card proposals in Britain. Since they were scrapped at the end of World War II, there has usually been a new proposal about every five years. In 2002, at a Scrambling for Safety event held by the Foundation for Information Policy Research and Privacy International, former minister Peter Lilley observed that during his time in Margaret Thatcher’s government ID card proposals were brought to cabinet every time there was a new minister for IT. Such proposals were always accompanied by a request for suggestions as to how the card could be used. A solution looking for a problem.

In a 2005 paper I wrote for the University of Edinburgh’s SCRIPT-ED journal, I found evidence to support that view: ID card proposals are always framed around current obsessions. In 1989, it was ensuring safety at football grounds following the Hillsborough disaster. In 1993, it was going to combat fraud, illegal immigration, and terrorism. In 1995, it was supposed to cut crime (at that time, Blair argued expanding policing would be a better investment). The 2001-2010 cycle began with combating terrorism, benefit fraud, and convenience. Today, it’s illegal immigration and illegal working.

A report produced by the LSE in 2005 laid out the concerns. It has dated little, despite preceding smartphones, apps, covid passes, and live facial recognition. The chapter on costs is also worth attention: the report estimated them at roughly £11 billion, even though the cost of data storage has continued to plummet.

As I said at the time, the “ID card”, along with the 51 pieces of personal information it was intended to store, was a decoy. The real goal was the databases. It was obvious even then that real-time online biometric checking would soon be a reality. Why bother making a card mandatory when police could simply demand and match a biometric?

We’re going to hear a lot of “Well, it works in Estonia”. *A* digital ID works in Estonia – for a population of 1.3 million who regained independence in 1991. Britain has a population of 68.3 million, a complex, interdependent mass of legacy systems, and a terrible record of failed IT projects.

We’re also going to hear a lot of “people have moved on from the debates of the past”, code for “people like ID cards now” – see for example former Conservative leader William Hague. Governments have always claimed that ID cards poll well, but they always come up against the fact that people support the *goals* yet never like the thing itself once they see the detail. So it will probably prove now. Twelve years ago, I think they might have gotten away with that claim – smartphones had exploded, social media was at its height, and younger people thought everything should be digital (including voting). But the last dozen years began with Snowden’s revelations, and continued with the Cambridge Analytica scandal, ransomware, expanding acres of data breaches, policing scandals, the Horizon / Post Office disaster, and wider understanding of accelerating passive surveillance by both governments and massive companies. I don’t think acceptance of digital ID is a slam-dunk. I think the people who have failed to move on are the people who were promoting ID cards in 2002, when they had cross-party support, and are doing it again now.

So, to this new-old proposal. According to The Times, there will be a central database of everyone who has the right to work. Workers must show their digital ID when they start a new job to prove their employment is legal. They already have to show one of a variety of physical ID documents, but “there are concerns some of these can be faked”. I can think of a much cheaper and less invasive solution to that. The BBC last night said checks for the right to live here would also be applied to anyone renting a home. In the Guardian, Starmer is quoted calling the card “an enormous opportunity” and saying the card will offer citizens “countless benefits” in streamlining access to key services – echoes of 2002’s “entitlement card”. I think it was on the BBC’s Newsnight that I heard someone note the absurdity of making it easier to prove your entitlement to services that no longer exist because of cuts.

So keep your eye on the database. Keep your eye on which department leads. Immigration suggests the Home Office, whose desires have little in common with the needs of ordinary citizens’ daily lives. Beware knock-on effects. Think “poll tax”. And persistently ask: what problem do we have for which a digital ID is the right, the proportionate, the *necessary* solution?

There will be detailed proposals, consultations, and draft legislation, so more to come. As an activist friend says, “Nothing ever stays won.”

Illustrations: British National Identity document circa 1949 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Passing the Uncanny Valley

A couple of weeks ago, the Greenwich Skeptics in the Pub played host to Sophie Nightingale, who studies the psychology of AI deepfakes. The particular project she spoke about was an experiment in whether people can be trained to be better at distinguishing them from real images.

In Nightingale’s experiments, she carefully matched groups of real images to synthetic ones, first created by generative adversarial networks (GANs), later by diffusion models, and asked participants to say which faces were real and which were synthetic.

Then the humans were given some training in what to look for to detect fakes and the experiment was rerun with new sets of faces. The bad news: the training made a little difference, but not much. She went on to do similar experiments with diffusion images.

Nightingale has gone on to do some cross-modal experiments, including audio as well as images, following the 2024 election incident in which New Hampshire voters received robocalls from a faked Joe Biden intended to discourage voters in the January 2024 primary. In the audio experiment, she played the test subjects very short snippets. Played for us in the pub, they were very hard to tell real from fake, and her experimental subjects did no better. I would expect longer clips to be more identifiable as fake. The Biden call succeeded in part because that type of fake had never been tried before. Now, voters, at least in New Hampshire, will know it’s possible that the call they’re getting is part of a newer type of disinformation campaign aimed at suppressing their vote.

In another experiment, she asked participants to rate the trustworthiness of the facial images they were shown, and was dismayed when they rated the synthetic faces slightly (7.7%) higher than the real ones. In the resulting paper for the Journal of Vision, she hypothesizes that this may be because synthetic faces tend to look more like “average” faces, which tend to be rated higher in trustworthiness, even if they’re not the most attractive.

Overall, she concludes that both still images and voice have “passed the Uncanny Valley”, and video will soon follow. In the past, I’ve chosen optimism about this sort of thing, on the basis that earlier generations were fooled by technological artifacts that couldn’t fool us now for a second. The Cottingley Fairies photographs look ridiculous after generations of accumulated knowledge of photography. On the other hand, Johannes Vermeer’s Girl with a Pearl Earring looks more real than modern deepfakes, even though the subject is generally described as imaginary. So it’s possible to think of it as a “deepfake”, painted in oils in the 17th century.

Fakes have always been with us. What generative AI has done to change this landscape is to democratize and scale their creation, just as it’s amping up the scale and speed of cyber attacks. It’s no longer necessary to be even barely competent; the tools keep getting easier.

Listening to Nightingale, it seems most likely that work like the project an audience member described, identifying technological artifacts that give fakes away, will prove to be the right way forward. If those differences can be reliably identified, they could be built into tools that spot indicators we can’t perceive directly. If something like that can be embedded into devices – phones, eyeglasses, wristwatches, laptops – to spot and filter out fakes in real time, we should be able to regain some ability to trust what we see.

There are some obvious problems with this hoped-for future. Some people will continue to seek to exploit fakes; some may prefer them. The most likely outcome will be an arms race like that surrounding email spam and other battles between malware producers and security people. Still, it’s the first approach that seems to offer a practical solution to coping with a vastly diminished ability to know what’s real and what isn’t.

***

On the Internet your home always leaves you, part 4,563. Twenty-two-year-old blogging site Typepad will disappear in a few weeks. To those of us who have read blogs ever since they began, this news is shocking, like someone’s decided to tear down an old community church. Yes, the congregation has shrunk and aged, and it’s drafty and built on creaking old technology (in Typepad’s case, Movable Type), but it’s part of shared local history. Except it isn’t, because, as Wikipedia documents, corporate musical chairs means it’s now owned by private equity. Apparently it’s been closed to new signups since 2020, and its bloggers are now being told to move their sites before everything is deleted in September. It feels like the stars of the open web are winking out, one by one.

On the Internet everything is forever, but everything is also ephemeral. Ironically, the site’s marketing slug still reads: “Typepad is the reliable, flexible blogging platform that puts the publisher in control.”

Illustrations: “Girl with a Pearl Earring”, painted by Johannes Vermeer circa 1665.


Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “The Brussels Effect”, derived from the California Effect, in which compliance with regulation voluntarily shifts toward the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, that means that companies will comply with the most stringent rules to ensure they are universally compliant, and that they want and need to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan, despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

It was much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine-learning company Blue River; and in 2016 Bayer bought Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power on top of existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot 2012-

On the eve of We Robot 2025, here are links to my summaries of previous years. 2014 is missing; I didn’t make it that year for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early about the coming conflicts in robots, law, and policy.

2024 No conference.

2023 The end of cool. After struggling to design a drone delivery service that had any benefits over today’s cycling couriers, we find ourselves less impressed by a robot that can do somersaults but can’t do anything useful.

2022 Insert a human. “Robots” are now “sociotechnical systems”.

Workshop day Coding ethics. The conference struggles to design an ethical robot.

2021 Plausible diversions. How will robots reshape human space?

Workshop day Is the juice worth the squeeze? We think about how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone. AI exploitation becomes much more visible.

2019 Math, monsters, and metaphors. The trolley problem is dissected; the true danger is less robots than the “pile of math that does some stuff”.

Workshop day The Algernon problem. New participants remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception. The conference tries to tease out what makes robots different and revisits Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car.

Workshop day Late, noisy, and wrong. Engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate. Discussion of risks this year focused on the intermediate situation, when automation and human norms clash.

2016 Humans all the way down. Madeleine Clare Elish introduces “moral crumple zones”.

Workshop day: The lab and the world. Bill Smart uses conference attendees in formation to show why building a robot is difficult.

2015 Multiplicity. A robot pet dog begs its owner for an upgraded service subscription.

2014 Missed conference

2013 Cautiously apocalyptic. Diversity of approaches to regulation will be needed to handle the diversity of robots.

2012 A really fancy hammer with a gun. Unsentimental engineer Bill Smart provided the title.


The risks of recklessness

In 1997, when the Internet was young and many fields were still an unbroken green, the United States Institute of Peace convened a conference on virtual diplomacy. In my writeup for the Telegraph, I noted that organizer Bob Schmitt had convened two communities – computing and diplomacy – each wondering how to get the other to collaborate, but with no common ground.

On balance, the computer folks, who saw a potential market as well as a chance to do some good, were probably more eager than the diplomats, who favored caution and understood that in their discipline speed was often a bad idea. They were also less attracted than one might think to the notion of virtual meetings, despite the travel they would save. Sometimes, one told me, it’s the random conversations around the water cooler that make plain what’s really happening. Why is Brazil mad? In a virtual meeting, it may be harder to find out that it’s not the negotiations but the fact that their soccer team lost last night.

I thought at the time that the conference would be the first of many to tackle these issues. But as it’s turned out, I’ve never been at an event anything like it…until now, nearly 30 years later. This week, a group of diplomats and human rights advocates met, similarly, to consider how the cyber world is changing diplomacy and international relations.

The timing is unexpectedly fortuitous. This week’s revelation that someone added Atlantic editor-in-chief Jeffrey Goldberg to a Signal chat in which US cabinet officials discussed plans for an imminent military operation in Yemen shows the kinds of problems you get when you rely too much on computer mediation. In the usual setting, a Sensitive Compartmented Information Facility (SCIF), you can see exactly who’s there, and communications to anyone outside that room are entirely blocked. As a security clearance-carrying friend of mine said, if he’d made such a blunder he’d be in prison.

The Signal blunder was raised by almost every speaker. It highlights something diplomats think about a lot: who is or is not in the room. Today, as in 1997, behavioral cues are important; one diplomat estimated that meeting virtually costs you 50% to 60% of the communication you have when meeting face-to-face. There are benefits, too, of course, such as opening side channels to remote others who can advise on specific questions, or the ability to assemble a virtual team a small country could never afford to send in person.

These concerns have not changed since 1997. But it’s clear that today’s diplomats feel they have less choice about what new technology gets deployed, and how, than they did then, when the most significant new communications technology preceding the Internet was the global omnipresence of the news network CNN, founded in 1980. Now, much of the control they had then is disappearing, both because human behavior overrides their careful, rule-bound, friction-filled diplomatic channels and processes via shadow IT, and because the biggest technology companies own so much of what we call “public” infrastructure.

Another key difference: many people don’t see the need for education to learn facts; it’s a particular problem for diplomats, who rely on historical data to show the world they aspire to build. And another: today a vastly wider array of actors, from private companies to individuals and groups of individuals, can create world events. And finally: in 1997 multinational companies were already challenging the hegemony of governments, but they were not yet richer and more powerful than countries.

Cue a horror thought: what if Big Tech, which is increasingly interested in military markets, and whose products are increasingly embedded at the heart of governments, decides that peace is bad for business? Already these companies are allying with politicians to resist human rights principles, most notably privacy.

Which cues another 1997 memory: Nicholas Negroponte absurdly saying that the Internet would bring world peace by breaking down national borders. In 20 years, he said (that would be eight years ago), children would not know what nationalism is. Instead, on top of all today’s wars and internal conflicts, we’re getting virtual infrastructure attacks more powerful than bullets, and proactive agents fueled by large language models. And all of it amplified by the performative-outrage style of social media, which is becoming simply how people speak, publicly and privately.

All this is more salient when you listen to diplomats and human rights activists as they are the ones who see up close the human lives lost. Meta’s name comes up most often, as in Myanmar and Ethiopia.

The mood was especially touchy because a couple of weeks ago a New Zealand diplomat was recalled after questioning US president Donald Trump’s understanding of history during a public panel in London – ironically in Chatham House under the Chatham House rule.

“You say the wrong thing on the wrong platform at the wrong time, and your career is gone,” one observed. Their privacy perimeter is gone, as it has been for so many of us for a decade or more. But more than most people, diplomats who don’t have trust have nothing. And so: “We’re in a time when a single message can up-end relationships.”

No surprise, then, that the last words reflected 1997’s conclusion: “Diplomacy is still a contact sport.”

Illustrations: Internet meme rewriting Wikipedia’s Alice and Bob page explaining man-in-the-middle attacks with the names Hegseth, Waltz, and Goldberg, referencing the Signal snafu.


Digital distrust

On Tuesday, at the UK Internet Governance Forum, a questioner asked this: “Why should I trust any technology the government deploys?”

She had come armed with a personal but generalizable anecdote. Since renewing her passport in 2017, at every UK airport the electronic gates routinely send her for rechecking to the human-staffed desk, even though the same passport works perfectly well in electronic gates at airports in other countries. A New Scientist article by Adam Vaughan that I can’t locate eventually explained: the Home Office had deployed the system knowing it wouldn’t work for “people with my skin type”. That is, as you’ve probably already guessed, dark.

She directed her question to Katherine Yesilirmak, director of strategy in the Responsible Tech Adoption Unit, formerly the Centre for Data Ethics and Innovation, a subsidiary of the Department for Science, Innovation, and Technology.

Yesilirmak did her best, mentioning the problem of bias in training data, the variability of end users, fairness, governmental responsibility for understanding the technology it procures (since it builds very little itself these days), and so on. She is clearly up to date, even referring to the latest study finding that AIs used by human resources consistently prefer résumés with white and male-presenting names over non-white and female-presenting ones. But Yesilirmak didn’t really answer the questioner’s fundamental conundrum: why *should* she trust government systems when they are knowingly commissioned with flaws that exclude her? Well, why?

Pause to remember that 20 years ago, Jim Wayman, a pioneer in biometric identification, told me, “People never have what you think they’re going to have where you think they’re going to have it.” Biometric systems must be built to accommodate outliers – and that’s hard. For more, see Wayman’s potted history of third-party testing of modern biometric systems in the US (PDF).

Yesilirmak, whose LinkedIn profile indicates she’s been at the unit for a little under three years, noted that the government builds very little of its own technology these days. However, her group is partnering with analogues in other countries and international bodies to build tools and standards that she believes will help.

This panel was nominally about AI governance, but the connection that needed to be made was from what the questioner was describing – technology that makes some people second-class citizens – to digital exclusion, which was siloed in a different panel. Most people describe the “digital divide” as a binary statistical matter: 1.7 million households are not online, and 40% of households don’t meet the digital living standard, per the Liberal Democrat peer Timothy Clement-Jones, who ruefully noted the “serious gap in knowledge in Parliament” regarding digital inclusion.

Clement-Jones, who is the co-chair of the All Party Parliamentary Group on Artificial Intelligence, cited the House of Lords Communications and Digital Committee’s January 2024 report. Another statistic came from Helen Milner: 23% of people with long-term illness or disabilities are digitally excluded.

The report cites the consumer digital index Lloyds Bank releases each year; the most recent found that Internet use is dropping among the over-60s, and that for the first time the percentage of people offline in the previous three months had increased, to 4%. Fifteen percent of those offline are under 50, and overall about 4.7 million people can’t connect to wifi. Ofcom’s 2023 report found that 7% of households (disproportionately poor and/or elderly) have no Internet access, 20% of them because of cost.

“We should make sure the government always provides an analog alternative, especially as we move to digital IDs,” Clement-Jones said. In 2010, when Martha Lane Fox was campaigning to get the last 10% online, one could push back: why should they have to be? Today, paying at parking meters requires an app and, as Royal Holloway professor Lizzie Coles-Kemp noted, smartphones aren’t enough for some services.

Milner finds that a third of those offline already find it difficult to engage with the NHS, creating “two-tier public services”. Clement-Jones added another example: people in temporary housing have to reapply weekly online – but there is no Internet provision in temporary housing.

Worse, however, is thinking technology will magically fix intractable problems. In Coles-Kemp’s example, if someone can’t do their prescribed rehabilitation exercises at home because they lack space, support, or confidence, no app will fix it. In her work on inclusive security technologies, she has long pushed for systems to be less hostile to users in the name of preventing fraud: “We need to do more work on the difference between scammers and people who are desperate to get something done.”

In addition, Milner said, tackling digital exclusion has to be widely embraced – by the Department of Work and Pensions, for example – not just handed off to DSIT. Much comes down to designers who are unlike the people on whom their systems will be imposed and whose direct customers are administrators. “The emphasis needs to shift to the creators of these technologies – policy makers, programmers. How do algorithms make decisions? What is the impact on others of liking a piece of content?”

Concern about the “digital divide” has been with us since the beginning of the Internet. It seems to have been gradually forgotten as online has become mainstream. It shouldn’t be: digital exclusion makes all the other kinds of exclusion worse and adds anger and frustration to an already morbidly divided society.

Illustrations: Martha Lane Fox in 2011 (via Wikimedia).


Choice

The first year it occurred to me that a key consideration in voting for the US president was the future composition of the Supreme Court was 1980: Reagan versus Carter. Reagan appointed the first female justice, Sandra Day O’Connor – and then gave us Antonin Scalia and Anthony Kennedy. Only O’Connor was appointed during Reagan’s first term, the one that woulda, shoulda, coulda been Carter’s second.

Watching American TV shows and movies from the 1980s and 1990s is increasingly sad. In some – notably Murphy Brown – a pregnant female character wrestles with deciding what to do. Even when not pregnant, those characters live inside the confidence of knowing they have choice.

At the time, Murphy (Candice Bergen) was pilloried for choosing single motherhood. (“Does [Dan Quayle] know she’s fictional?” a different sitcom asked, after the then-vice president criticized her “lifestyle”.) Had Murphy opted instead for an abortion, I imagine she’d have been just as vilified rather than seen as acting “responsibly”. In US TV history, it may only be on Maude in 1972 that an American lead character, Maude (Bea Arthur), is shown actually going through with an abortion. Even in 2015, in an edgy comedy like You’re the Worst, that choice is given to the sidekick. It’s now impossible to watch any of those scenes without feeling the loss of agency.

In the news, pro-choice activists warned that overturning Roe v. Wade would bring deaths, and so it has, but not in the same way as in the illegal-abortion 1950s, when termination itself could be dangerous. Instead, women are dying because their health needs fall in the middle of a spectrum that has purely elective abortion at one end and purely involuntary miscarriage at the other. These are not distinguishable *physically*, but can be made into evil versus blameless morality tales (though watch that miscarrying mother, maybe she did something).

Even those who still have a choice may struggle to access it. Only one doctor performs abortions in Mississippi; he also works in Alabama and Tennessee.

So this time women are dying or suffering from lack of care when doctors can’t be sure what they are allowed to do under laws written by people with shockingly limited medical knowledge.

Such was the case of Amber Thurman, a 28-year-old medical assistant in Georgia who died of septic shock after fetal tissue was incompletely expelled following a medication abortion, which she’d had to travel hundreds of miles to North Carolina to get. It’s a very rare complication, and her life could probably have been saved by prompt action, but the hospital had no policy in place for septic abortions under Georgia’s then-new law. There have been many more awful stories since – many not deaths but fraught survivals of avoidable complications.

If anti-abortion activists are serious about their desire to save the life of every unborn child, there are real and constructive things they can do. They could start by requiring hospitals to provide obstetrics units and states to improve provision for women’s health. According to March of Dimes, 5.5 million American women are caught in the one-third of US counties it calls “maternity deserts”. Most affected are those in North Dakota, South Dakota, Alaska, Oklahoma, and Nebraska. In Texas, which banned abortion after six weeks in 2021 and now prohibits it except to save the mother’s life, maternal mortality rose 56% between 2019 and 2022. Half of Texas counties, Stephanie Taladrid reported at The New Yorker in January, have no specialists in women’s health.

“Pro-life” could also mean pushing to support families. American parents have less access to parental leave than their counterparts in other developed countries. Or they could fight to redress other problems, like the high rate of Black maternal mortality.

Instead, the most likely response to the news that abortion rates have actually gone up in the US since the Dobbs decision is efforts to increase surveillance, criminalization, and restriction. In 2022, I imagined how this might play out in a cashless society, where linked systems could prevent a pregnant woman from paying for anything that might help her obtain an abortion: travel, drugs, even unhealthy foods.

This week, at The Intercept, Debbie Nathan reports on a case in which a police sniffer dog flagged an envelope that, opened under warrant, proved to contain abortion pills. It’s not clear, she writes, whether the sniffer dogs actually detect misoprostol and mifepristone, or traces of contraband drugs, or are just responding to an already-suspicious handler’s subtle cues, like Clever Hans. Using the US Postal Service’s database of images of envelopes, inspectors were able to identify other parcels from the same source and their recipients. A hostile administration could press for – in fact, Republican vice-presidential candidate JD Vance has already demanded – renewed enforcement of the not-dead-only-sleeping Comstock Act (1873), which criminalizes importing and mailing items “intended for producing abortion, or for any indecent or immoral use”.

There are so many other vital issues at stake in this election, but this one is personal. I spent my 20s traveling freely across the US to play folk music. Imagine that with today’s technology and states that see every woman of child-bearing age as a suspected criminal.

Illustrations: Murphy Brown (Candice Bergen) with baby son Avery (Haley Joel Osment).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Sectioned

Social media seems to be having a late-1990s moment, raising flashbacks to the origins of platform liability and the passage of Section 230 of the Communications Decency Act (1996). It’s worth making clear at the outset: most of the people talking about S230 seem to have little understanding of what it is and does. It allows sites to moderate content without becoming liable for it. It is what enables all those trust and safety teams to implement sites’ restrictions on acceptable use. When someone wants to take an axe to it because there is vile content circulating, they have not understood this.

So, in one case this week a US appeals court is allowing a lawsuit to proceed that seeks to hold TikTok liable for users’ postings of the “blackout challenge”, the idea being to get an adrenaline rush by reviving from near-asphyxiation. Bloomberg reports that at least 20 children have died trying to accomplish this, at least 15 of them age 12 or younger (TikTok, like all social media, is supposed to be off-limits to under-13s). The people suing are the parents of one of those 20, a ten-year-old girl who died attempting the challenge.

The other case is that of Pavel Durov, CEO of the messaging service Telegram, who has been arrested in France as part of a criminal investigation. He has been formally charged with complicity in managing an online platform “in order to enable an illegal transaction in organized group” and with refusing to cooperate with law enforcement authorities; he has been ordered not to leave France, with bail set at €5 million (is that enough to prevent the flight of a billionaire with four passports?).

While there have been many platform liability cases, there are relatively few examples of platform owners and operators being charged. The first was in 1997, back when “online” still had a hyphen; the German general manager of CompuServe, Felix Somm, was arrested in Bavaria on charges of “trafficking in pornography”. That is, German users of Columbus, Ohio-based CompuServe could access pornography and illegal material on the Internet through the service’s gateway. In 1998, Somm was convicted and given a two-year suspended sentence. In 1999 his conviction was overturned on appeal, partly, the judge wrote, because there was no technology at the time that would have enabled CompuServe to block the material.

The only other example I’m aware of came just this week, when an Arizona judge sentenced Michael Lacey, co-founder of the classified ads site Backpage.com, to five years in prison and fined him $3 million for money laundering. He still faces further charges for prostitution facilitation and money laundering; allegedly he profited from a scheme to promote prostitution on his site. Two other previously convicted Backpage executives were also sentenced this week to ten years in prison.

In Durov’s case, the key point appears to be his refusal to follow industry practice with respect to reporting child sexual abuse material or to cooperate with properly executed legal requests for information. You don’t have to be a criminal to want the social medium of your choice to protect your privacy from unwarranted government snooping – but equally, you don’t have to be innocent to be concerned if billionaire CEOs of large technology companies consider themselves above the law. (See also Elon Musk, whose X platform may be tossed out of Brazil right now.)

Some reports on the Durov case have focused on encryption, but the bigger issue appears to be failure to register to use encryption, as Signal has. More important, although Telegram is often talked about as encrypted, it’s really more like other social media, where groups are publicly visible, and only direct one-on-one messages are encrypted. But even then, they’re only encrypted if users opt in. Given that users notoriously tend to stick with default settings, the percentage of users who turn that encryption on is probably tiny. So it’s not clear yet whether France is seeking to hold Durov responsible for the user-generated content on his platform (which S230 would protect in the US), or accusing him of being part of criminal activity relating to his platform (which it wouldn’t).

Returning to the TikTok case, in allowing the lawsuit to go ahead, the appeals court judgment says that S230 has “evolved away from its original intent”, and argues that because TikTok’s algorithm served up the challenge on the child’s “For You” page, the service can be held responsible. At TechDirt, Mike Masnick blasts this reasoning, saying that it overturns numerous other court rulings upholding S230, and uses the same reasoning as the 1995 decision in Stratton Oakmont v. Prodigy. That was the case that led directly to the passage of S230, introduced by then-Congressman Christopher Cox (R-CA) and Senator Ron Wyden (D-OR), who are still alive to answer questions about their intent. Rather than evolving away, we’ve evolved back full circle.

The rise of monopolistic Big Tech has tended to obscure the more important point about S230. As Cory Doctorow writes for EFF, killing S230 would kill the small federated communities (like Mastodon and Discord servers) and web boards that offer alternatives to increasing Big Tech’s power. While S230 doesn’t apply outside the US (some Americans have difficulty understanding that other countries have different laws), its ethos is pervasive and the companies it’s enabled are everywhere. In the end, it’s like democracy: the alternatives are worse.

Illustrations: Drunken parrot in Putney (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: A History of Fake Things on the Internet

A History of Fake Things on the Internet
By Walter J. Scheirer
Stanford University Press
ISBN 2023017876

One of Agatha Christie’s richest sources of plots was the uncertainty of identity in England’s post-war social disruption. Before then, she tells us, anyone arriving to take up residence in a village brought a letter of introduction; afterwards, old-time residents had to take newcomers at their own valuation. Had she lived into the 21st century, the arriving Internet would have given her whole new levels of uncertainty to play with.

In his recent book A History of Fake Things on the Internet, University of Notre Dame professor Walter J. Scheirer describes creating and detecting online fakes as an ongoing arms race. Where many people project doomishly that we will soon lose the ability to distinguish fakery from reality, Scheirer is more optimistic. “We’ve had functional policies in the past; there is no good reason we can’t have them again,” he concludes, adding that to make this happen we need a better understanding of the media that support the fakes.

I have a lot of sympathy with this view; as I wrote recently, things that fool people when a medium is new are instantly recognizable as fake once they become experienced. We adapt. No one now would be fooled by the images that looked real in the early days of photography. Our perceptions become more sophisticated, and we learn to examine context. Early fakes often work simply because we don’t know yet that such fakes are possible. Once we do know, we exercise much greater caution before believing. Teens who’ve grown up applying filters to the photos and videos they upload to Instagram and TikTok see images very differently than those of us who grew up with TV and film.

Scheirer begins his story with the hacker counterculture that saw computers as a source of subversive opportunities. His own research into media forensics began with Photoshop. At the time, many, especially in the military, worried that nation-states would fake content in order to deceive and manipulate. What they found, in much greater volume, were memes and what Scheirer calls “participatory fakery” – that is, the cultural outpouring of fakes for entertainment and self-expression, most of it harmless. Further chapters consider cheat codes in games, the slow conversion of hackers into security practitioners, adversarial algorithms and media forensics, shock-content sites, and generative AI.

Through it all, Scheirer remains optimistic that the world we’re moving into “looks pretty good”. Yes, we are discovering hundreds of scientific papers with faked data, faked results, or faked images, but we also have new analysis tools to use to detect them and Retraction Watch to catalogue them. The same new tools that empower malicious people enable many more positive uses for storytelling, collaboration, and communication. Perhaps forgetting that the computer industry relentlessly ignores its own history, he writes that we should learn from the past and react to the present.

The mention of scientific papers raises an issue Scheirer seems not to worry about: waste. Every retracted paper represents lost resources – public funding, scientists’ time and effort, and the same multiplied into the future for anyone who attempts to build on that paper. Figuring out how to automate reliable detection of chatbot-generated text does nothing to lessen the vast energy, water, and human resources that go into building and maintaining all those data centers and training models (see also filtering spam). Like Scheirer, I’m largely optimistic about our ability to adapt to a more slippery virtual reality. But the amount of wasted resources is depressing and, given climate change, dangerous.

Deja news

At the first event organized by the University of West London group Women Into Cybersecurity, a questioner asked how the debates around the Internet have changed since I wrote the original 1997 book net.wars.

Not much, I said. Some chapters have dated, but the main topics are constants: censorship, freedom of speech, child safety, copyright, access to information, digital divide, privacy, hacking, cybersecurity, and always, always, *always* access to encryption. Around 2010, there was a major change when the technology platforms became big enough to protect their users and business models by opposing government intrusion. That year Google launched the first version of its annual transparency report, for example. More recently, there’s been another shift: these companies have engorged to the point where they need not care much about their users or fear regulatory fines – the stage Ed Zitron calls the rot economy and Cory Doctorow dubs enshittification.

This is the landscape against which we’re gearing up for (yet) another round of recursion. April 25 saw the passage of amendments to the UK’s Investigatory Powers Act (2016). These are particularly charmless, as they expand the circumstances under which law enforcement can demand access to Internet Connection Records, allow the government to require “exceptional lawful access” (read: backdoored encryption) and require technology companies to get permission before issuing security updates. As Mark Nottingham blogs, no one should have this much power. In any event, the amendments reanimate bulk data surveillance and backdoored encryption.

Also winding through Parliament is the Data Protection and Digital Information bill. The IPA amendments threaten national security by demanding the power to weaken protective measures; the data bill threatens to undermine the adequacy decision under which the UK’s data protection law is deemed to meet the requirements of the EU’s General Data Protection Regulation. Experts have already warned that the bill puts that adequacy at risk. If this government proceeds, as it gives every indication of doing, the next, presumably Labour, government may find itself awash in an economic catastrophe as British businesses become persona-non-data to their European counterparts.

The Open Rights Group warns that the data bill makes it easier for government, private companies, and political organizations to exploit our personal data while weakening subject access rights, accountability, and other safeguards. ORG is particularly concerned about the impact on elections, as the bill expands the range of actors who are allowed to process personal data revealing political opinions on a new “democratic engagement activities” basis.

If that weren’t enough, another amendment gives the Department for Work and Pensions the power to monitor all bank accounts that receive payments, including the state pension – to reduce overpayments and other types of fraud, of course – along with any bank account connected to those accounts, such as those belonging to landlords, carers, parents, and partners. At Computer Weekly, Bill Goodwin suggests that the upshot could be to deter landlords from renting to anyone receiving state benefits or entitlements. The idea is that banks will use criteria we can’t access to flag up accounts for the DWP to inspect more closely, and over the mass of 20 million accounts there will be plenty of mistakes to go around. Safe prediction: there will be horror stories of people denied benefits without warning.

And in the EU… Techcrunch reports that the European Commission (always more surveillance-happy and less human rights-friendly than the European Parliament) is still pursuing its proposal to require messaging platforms to scan private communications for child sexual abuse material. Let’s do the math of truly large numbers: billions of messages times even a teeny-tiny error rate means literally millions of false positives. On Thursday, a group of scientists and researchers sent an open letter pointing out exactly this. Automated detection technologies perform poorly, innocent images may occur in clusters (as when a parent sends photos to a doctor), and such a scheme requires weakening encryption; in any case, it would be better to focus on eliminating child abuse itself (taking CSAM along with it).
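The scale of that arithmetic is easy to sketch; the message volume and error rate below are illustrative assumptions, not figures from the Commission’s proposal or the researchers’ letter:

```python
# Back-of-the-envelope arithmetic for scanning private messages at scale.
# Both inputs are illustrative assumptions, not measured figures.
messages_per_day = 10_000_000_000   # assume ~10 billion scanned messages daily
false_positive_rate = 0.001         # assume a 0.1% error rate (optimistic)

false_positives = round(messages_per_day * false_positive_rate)
print(f"{false_positives:,} innocent messages flagged per day")
```

Even at an error rate far better than automated detection systems achieve in practice, that works out to ten million falsely flagged messages a day, each one nominally a report requiring human review.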

Finally, age verification, which has been pending in the UK since at least 2016, is becoming a worldwide obsession. At least eight US states and the EU have laws mandating age checks, and the Age Verification Providers Association is pushing to make the Internet “age-aware persistently”. Last month, the BSI convened a global summit to kick off the work of developing a worldwide standard. These moves are the latest push against online privacy; age checks will be applied to *everyone*, and while they could be designed to respect privacy and anonymity, the most likely outcome is that they won’t be. In 2022, the French data protection regulator, CNIL, found that current age verification methods are both intrusive and easily circumvented. In the US, Casey Newton is watching a Texas case about access to online pornography and age verification that threatens to challenge First Amendment precedent in the Supreme Court.

Because the debates are so familiar – the arguments rarely change – it’s easy to overlook how profoundly all this could change the Internet. An age-aware Internet where all web use is identified and encrypted messaging services have shut down rather than compromise their users and every action is suspicious until judged harmless…those are the stakes.

Illustrations: Angel sensibly smashes the ring that makes vampires impervious (in Angel, “In the Dark” (S01e03)).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.