Digital distrust

On Tuesday, at the UK Internet Governance Forum, a questioner asked this: “Why should I trust any technology the government deploys?”

She had come armed with a personal but generalizable anecdote. Since she renewed her passport in 2017, the electronic gates at every UK airport have routinely sent her to the human-staffed desk for rechecking, even though the same passport works perfectly well in the electronic gates at airports in other countries. A New Scientist article by Adam Vaughan, which I can't now locate, eventually explained why: the Home Office had deployed the system knowing it wouldn't work for “people with my skin type”. That is, as you've probably already guessed, dark.

She directed her question to Katherine Yesilirmak, director of strategy in the Responsible Technology Adoption Unit, formerly the Centre for Data Ethics and Innovation, a subsidiary of the Department for Science, Innovation and Technology.

Yesilirmak did her best, mentioning the problem of bias in training data, the variability of end users, fairness, governmental responsibility for understanding the technology it procures (since it builds very little itself these days), and so on. She is clearly up to date, even referring to the latest study finding that AIs used in human resources consistently prefer résumés with white- and male-presenting names over non-white and female-presenting ones. But Yesilirmak didn't really answer the questioner's fundamental conundrum. Why *should* she trust government systems when they are knowingly commissioned with flaws that exclude her? Well, why?

Pause to remember that 20 years ago, Jim Wayman, a pioneer in biometric identification, told me, “People never have what you think they’re going to have where you think they’re going to have it.” Biometric systems must be built to accommodate outliers – and it’s hard. For more, see Wayman’s potted history of third-party testing of modern biometric systems in the US (PDF).

Yesilirmak, whose LinkedIn profile indicates she’s been at the unit for a little under three years, reiterated that the government builds very little of its own technology these days. However, her group is partnering with analogues in other countries and with international bodies to build tools and standards that she believes will help.

This panel was nominally about AI governance, but the connection that needed to be made was from what the questioner was describing – technology that makes some people second-class citizens – to digital exclusion, siloed in a different panel. Most people describe the “digital divide” as a binary statistical matter: 1.7 million households are not online, and 40% of households don’t meet the digital living standard, per the Liberal Democrat peer Timothy Clement-Jones, who ruefully noted the “serious gap in knowledge in Parliament” regarding digital inclusion.

Clement-Jones, who is the co-chair of the All Party Parliamentary Group on Artificial Intelligence, cited the House of Lords Communications and Digital Committee’s January 2024 report. Another statistic came from Helen Milner: 23% of people with long-term illness or disabilities are digitally excluded.

The report cites the consumer digital index Lloyds Bank releases each year; the latest found that Internet use is dropping among the over-60s and that, for the first time, the percentage of people who had been offline in the previous three months had increased, to 4%. Fifteen percent of those offline are under 50, and overall about 4.7 million people can’t connect to wifi. Ofcom’s 2023 report found that 7% of households (disproportionately poor and/or elderly) have no Internet access, 20% of them because of cost.

“We should make sure the government always provides an analog alternative, especially as we move to digital IDs,” Clement-Jones said. In 2010, when Martha Lane Fox was campaigning to get the last 10% online, one could push back: why should they have to be? Today, paying for parking requires an app and, as Royal Holloway professor Lizzie Coles-Kemp noted, smartphones aren’t enough for some services.

Milner said that a third of those offline already find it difficult to engage with the NHS, creating “two-tier public services”. Clement-Jones added another example: people in temporary housing have to reapply weekly online – but there is no Internet provision in temporary housing.

Worse, however, is thinking technology will magically fix intractable problems. In Coles-Kemp’s example, if someone can’t do their prescribed rehabilitation exercises at home because they lack space, support, or confidence, no app will fix it. In her work on inclusive security technologies, she has long pushed for systems to be less hostile to users in the name of preventing fraud: “We need to do more work on the difference between scammers and people who are desperate to get something done.”

In addition, Milner said, tackling digital exclusion has to be widely embraced – by the Department for Work and Pensions, for example – not just handed off to DSIT. Much comes down to designers who are unlike the people on whom their systems will be imposed and whose direct customers are administrators. “The emphasis needs to shift to the creators of these technologies – policy makers, programmers. How do algorithms make decisions? What is the impact on others of liking a piece of content?”

Concern about the “digital divide” has been with us since the beginning of the Internet. It seems to have been gradually forgotten as online has become mainstream. It shouldn’t be: digital exclusion makes all the other kinds of exclusion worse and adds anger and frustration to an already morbidly divided society.

Illustrations: Martha Lane Fox in 2011 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Choice

The first year it occurred to me that a key consideration in voting for the US president was the future composition of the Supreme Court was 1980: Reagan versus Carter. Reagan appointed the first female justice, Sandra Day O’Connor – and then gave us Antonin Scalia and Anthony Kennedy. Only O’Connor was appointed during Reagan’s first term, the one that woulda, shoulda, coulda been Carter’s second.

Watching American TV shows and movies from the 1980s and 1990s is increasingly sad. In some – notably Murphy Brown – a pregnant female character wrestles with deciding what to do. Even when not pregnant, those characters live inside the confidence of knowing they have choice.

At the time, Murphy (Candice Bergen) was pilloried for choosing single motherhood. (“Does [Dan Quayle] know she’s fictional?” a different sitcom asked, after the then-vice president criticized her “lifestyle”.) Had Murphy opted instead for an abortion, I imagine she’d have been just as vilified rather than seen as acting “responsibly”. In US TV history, it may only be on Maude, in 1972, that an American lead character, Maude (Bea Arthur), is shown actually going through with an abortion. Even in 2015, in an edgy comedy like You’re the Worst, that choice is given to the sidekick. It’s now impossible to watch any of those scenes without feeling the loss of agency.

In the news, pro-choice activists warned that overturning Roe v. Wade would bring deaths, and so it has – though not in the same way as in the illegal-abortion 1950s, when the termination itself could be dangerous. Instead, women are dying because their health needs fall in the middle of a spectrum that has purely elective abortion at one end and purely involuntary miscarriage at the other. The two are not distinguishable *physically*, but they can be made into evil-versus-blameless morality tales (though watch that miscarrying mother; maybe she did something).

Even those who still have a choice may struggle to access it. Only one doctor performs abortions in Mississippi; he also works in Alabama and Tennessee.

So this time women are dying or suffering from lack of care because doctors can’t be sure what they are allowed to do under laws written by people with shockingly limited medical knowledge.

Such was the case of Amber Thurman, a 28-year-old medical assistant from Georgia who died of septic shock after fetal tissue was incompletely expelled following a medication abortion – one she’d had to travel hundreds of miles to North Carolina to get. It’s a very rare complication, and prompt action could probably have saved her life – but the hospital had no policy in place for septic abortions under Georgia’s then-new law. There have been many more awful stories since – many not deaths but fraught survivals of avoidable complications.

If anti-abortion activists are serious about their desire to save the life of every unborn child, there are real and constructive things they can do. They could start by requiring hospitals to provide obstetrics units and states to improve provision for women’s health. According to March of Dimes, 5.5 million American women are caught in the one-third of US counties it calls “maternity care deserts”. Most affected are those in North Dakota, South Dakota, Alaska, Oklahoma, and Nebraska. In Texas, which banned abortion after six weeks in 2021 and now prohibits it except to save the mother’s life, maternal mortality rose 56% between 2019 and 2022. Half of Texas counties, Stephania Taladrid reported in The New Yorker in January, have no specialists in women’s health.

“Pro-life” could also mean pushing to support families. American parents have less access to parental leave than their counterparts in other developed countries. Or they could fight to redress other problems, like the high rate of Black maternal mortality.

Instead, the most likely response to the news that abortion rates have actually gone up in the US since the Dobbs decision is to increase surveillance, criminalization, and restriction. In 2022, I imagined how this might play out in a cashless society, where linked systems could prevent a pregnant woman from paying for anything that might help her obtain an abortion: travel, drugs, even unhealthy foods.

This week, at The Intercept, Debbie Nathan reports on a case in which a police sniffer dog flagged an envelope that, opened under warrant, proved to contain abortion pills. It’s not clear, she writes, whether the sniffer dogs actually detect misoprostol and mifepristone, or traces of contraband drugs, or are just responding to an already-suspicious handler’s subtle cues, like Clever Hans. Using the US Postal Service’s database of images of envelopes, inspectors were able to identify other parcels from the same source and their recipients. A hostile administration could press for – in fact, Republican vice-presidential candidate JD Vance has already demanded – renewed enforcement of the not-dead-only-sleeping Comstock Act (1873), which criminalizes importing and mailing items “intended for producing abortion, or for any indecent or immoral use”.

There are so many other vital issues at stake in this election, but this one is personal. I spent my 20s traveling freely across the US to play folk music. Imagine that with today’s technology and states that see every woman of child-bearing age as a suspected criminal.

Illustrations: Murphy Brown (Candice Bergen) with baby son Avery (Haley Joel Osment).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The brittle state

We’re now almost a year on from Rishi Sunak’s AI Summit, which failed to accomplish any of its most likely goals: cement his position as the UK’s prime minister; establish the UK as a world leader in AI fearmongering; or get him the new life in Silicon Valley some commentators seemed to think he wanted.

Arguably, however, it has raised the level of belief that computer systems are “intelligent” – that is, that they understand what they’re calculating. The chatbots based on large language models make that worse because, as James Boyle cleverly wrote, for the first time in human history “sentences do not imply sentience”. Mix in paranoia over the state of the world and you get some truly terrifying systems being put into situations where they can catastrophically damage people’s lives. We should know better by now.

The Open Rights Group (I’m still on its advisory council) is campaigning against the Home Office’s planned eVisa scheme. In the previouslies: between 1948 and 1971, people from Caribbean countries, many of whom had fought for Britain in World War II, were encouraged to move to the UK to help rebuild the economy post-war. They are known as the “Windrush generation”, after the first ship that brought them. As Commonwealth citizens, they didn’t need visas or documentation; they and their children had the automatic right to live and work here.

Until 1973, when the law changed; later arrivals needed visas. The snag was that the earlier arrivals had no idea they had any reason to worry… until the day they discovered, when challenged, that they had no way to prove they were living here legally. That day came in 2017, under then-prime minister Theresa May (who this week joined the House of Lords), when the hostile environment policy she had introduced as home secretary began to bite. Intended to push illegal immigrants to go home, it moves the “border” deep into British life by requiring landlords, banks, and others to conduct status checks. The result was that some of the Windrush group – again, legal residents – were refused medical care, denied housing, or deported.

When Brexit became real, millions of Europeans resident in the UK were shoved into the same position: arrived legally, needing no documentation, but in future required to prove their status. This time, the UK issued them documents confirming their status as permanently settled.

Until December 31, 2024, when all those documents with no expiration date will abruptly expire because the Home Office has a new system that is entirely online. As ORG and the3million explain it, come January 1, 2025, about 4 million people will need online accounts to access the new system, which generates a code to give the bank or landlord temporary access to their status. The new system will apparently hit a variety of databases in real time to perform live checks.
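To make the brittleness concrete, here is a toy sketch – purely hypothetical, and emphatically not the Home Office’s actual design or API – of the share-code flow as ORG and the3million describe it. Every name and detail below is invented for illustration; the point is that each successful check requires the holder’s account, the checker’s connection, and the back-end databases all to be working at the same moment, with no offline artifact to fall back on:

```python
# Hypothetical sketch of a share-code status check; every identifier here
# is invented for illustration, not taken from the real eVisa system.
import secrets
from datetime import datetime, timedelta

STATUS_DB = {"resident-123": "settled"}   # stand-in for Home Office records
ACTIVE_CODES: dict[str, tuple[str, datetime]] = {}

def generate_share_code(person_id: str) -> str:
    """Holder logs into their online account and requests a temporary code."""
    code = secrets.token_hex(4).upper()
    ACTIVE_CODES[code] = (person_id, datetime.now() + timedelta(days=30))
    return code

def check_status(code: str, services_up: bool = True) -> str:
    """Landlord or bank redeems the code via a live, real-time lookup."""
    if not services_up:
        # The failure mode the column warns about: a legal resident with
        # no offline proof cannot demonstrate their status at all.
        raise ConnectionError("status service unreachable - no fallback document")
    person_id, expires = ACTIVE_CODES[code]
    if datetime.now() > expires:
        raise ValueError("share code expired; holder must log in again")
    return STATUS_DB[person_id]

code = generate_share_code("resident-123")
print(check_status(code))  # "settled" - but only while everything is online
```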

Now, I get that the UK government doesn’t want anyone to be in the country for one second longer than they’re entitled to. But we don’t even have to say, “What could possibly go wrong?” because we already *know* what *has* gone wrong for the Windrush generation. Anyone who has to prove their status off the cuff in time-sensitive situations really needs proof they can show when the system fails.

A proposal like this can only come from an irrational belief in the perfection – or at least, perfectibility – of computer systems. It assumes that Internet connections won’t be interrupted, that databases will return accurate information, and that everyone involved will have the necessary devices and digital literacy to operate it. Even without ORG’s and the3million’s analysis, these are bonkers things to believe – and they are made worse by a helpline that is only available during the UK work day.

There is a lot of this kind of credulity about, most of it connected with “AI”. AP News reports that US police departments are beginning to use chatbots to generate crime reports based on the audio from their body cams. And, says Ars Technica, the US state of Nevada will let AI decide unemployment benefit claims, potentially producing denials that can’t be undone by a court. BrainFacts reports that decision makers using “AI” systems are prone to automation bias – that is, they trust the machine to be right. Of course, that’s just good job security: you won’t be fired for following the machine, but you might for overriding it.

The underlying risk with all these systems, as a security expert might say, is complexity: the more complex a system, the more susceptible it is to inexplicable failures. There is very little to go wrong with a piece of paper that plainly states your status, for values of “paper” including paper, QR codes downloaded to phones, and PDFs saved to a desktop/laptop. Much can go wrong with the system underlying that “paper”, but, crucially, when a static confirmation is saved offline, managing that underlying complexity can take place when the need is not urgent.

It ought to go without saying that computer systems with a profound impact on people’s lives should be backed up by redundant systems that can be used when they fail. Yet the world the powers that be apparently want to build is one that underlines their power to cause enormous stress for everyone else. Systems like eVisas are as brittle as just-in-time supply chains. And we saw what happened to those during the emergency phase of the covid pandemic.

Illustrations: Empty supermarket shelves in March 2020 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: More Than a Glitch

More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
By Meredith Broussard
MIT Press
ISBN: 978-0-262-04765-4

At the beginning of the 1985 movie Brazil, a family’s life is ruined when a fly gets stuck in a typewriter key so that the wrong man is carted away to prison. It’s a visual play on “computer bug”, so named after a moth got trapped in a computer at Harvard.

Judging by her recent book More Than a Glitch, NYU associate professor Meredith Broussard would call both the fly and the moth a “glitch”. In the movie, the error is catastrophic for Buttle-not-Tuttle and his family, but it’s a single, ephemeral mistake that can be prevented with insecticide and cross-checking. A “bug” is more complex and more significant: it’s “substantial”, “a more serious matter that makes software fail”. It “deserves attention”. It’s the difference between the lone rotten apple in a bushel full of good ones and a barrel that causes all the apples put in it to rot.

This distinction is Broussard’s prelude to her fundamental argument that the lack of fairness in computer systems is persistent, endemic, and structural. In the book, she examines numerous computer systems that are already out in the world causing trouble. After explaining the fundamentals of machine bias, she goes through a variety of sectors and applications to examine failures of fairness in each one. In education, proctoring software penalizes darker-skinned students by failing to identify them accurately, and algorithms used to estimate scores on tests canceled during the pandemic penalized exceptional students from unexpected backgrounds. In health, the long-practiced “race correction” that derives from slavery privileges white patients for everything from painkillers to kidney transplants – and gets embedded into new computer systems built to replicate existing practice. If computer developers don’t understand the ways in which the world is prejudiced – and they don’t – how can the systems they create be more neutral than the precursors they replace? Broussard delves inside each system to show why, not just how, it doesn’t work as intended.

In other cases Broussard highlights, part of the problem is rigid inflexibility in back-end systems that need to exchange data. There’s little benefit in having 58 gender options if the underlying database only supports two choices. At a doctor’s office, Broussard is told she can only check one box for race; she prefers to check both “black” and “white” because in medical settings it may affect her treatment. The digital world remains only partially accessible. And, as Broussard discovered when she was diagnosed with breast cancer, even supposed AI successes like reading radiology films are overhyped. This section calls back to her 2018 book, Artificial Unintelligence, which did a good job of both explaining how machine learning and “AI” computer systems work and why a lot of the things the industry says work… really don’t (see also self-driving cars).

Broussard concludes by advocating for public interest technology and a rethink. New technology imitates the world it comes from; computers “predict the status quo”. Making change requires engineering technology so that it performs differently. It’s a tall order, and Broussard knows it. But wasn’t that the whole promise the technology founders made? That they could change the world to empower the rest of us?