Digital distrust

On Tuesday, at the UK Internet Governance Forum, a questioner asked this: “Why should I trust any technology the government deploys?”

She had come armed with a personal but generalizable anecdote. Since she renewed her passport in 2017, the electronic gates at every UK airport have routinely sent her to the human-staffed desk for rechecking, even though the same passport works perfectly well in e-gates at airports in other countries. A New Scientist article by Adam Vaughan, which I can’t now locate, eventually explained why: the Home Office had deployed the system knowing it wouldn’t work for “people with my skin type”. That is, as you’ve probably already guessed, dark.

She directed her question to Katherine Yesilirmak, director of strategy in the Responsible Technology Adoption Unit, formerly the Centre for Data Ethics and Innovation, part of the Department for Science, Innovation, and Technology (DSIT).

Yesilirmak did her best, mentioning the problem of bias in training data, the variability of end users, fairness, governmental responsibility for understanding the technology it procures, and so on. She is clearly up to date, even referring to the latest study finding that AIs used in human resources consistently prefer résumés with white- and male-presenting names over those with non-white and female-presenting names. But Yesilirmak didn’t really answer the questioner’s fundamental conundrum. Why *should* she trust government systems when they are knowingly commissioned with flaws that exclude her? Well, why?

Pause to remember that 20 years ago, Jim Wayman, a pioneer in biometric identification, told me: “People never have what you think they’re going to have where you think they’re going to have it.” Biometric systems must be built to accommodate outliers – and it’s hard. For more, see Wayman’s potted history of third-party testing of modern biometric systems in the US (PDF).

Yesilirmak, whose LinkedIn profile indicates she’s been at the unit for a little under three years, noted that the government builds very little of its own technology these days. However, her group is partnering with analogues in other countries and international bodies to build tools and standards that she believes will help.

This panel was nominally about AI governance, but the connection that needed making was between what the questioner was describing – technology that makes some people second-class citizens – and digital exclusion, which was siloed in a different panel. Most people describe the “digital divide” as a binary statistical matter: 1.7 million households are not online, and 40% of households don’t meet the digital living standard, per the Liberal Democrat peer Timothy Clement-Jones, who ruefully noted the “serious gap in knowledge in Parliament” regarding digital inclusion.

Clement-Jones, who is the co-chair of the All Party Parliamentary Group on Artificial Intelligence, cited the House of Lords Communications and Digital Committee’s January 2024 report. Another statistic came from Helen Milner: 23% of people with long-term illness or disabilities are digitally excluded.

The report cites Lloyds Bank’s annual consumer digital index; the latest edition found that Internet use is dropping among the over-60s and that, for the first time, the percentage of people offline in the previous three months had increased, to 4%. Fifteen percent of those offline are under 50, and overall about 4.7 million people can’t connect to wifi. Ofcom’s 2023 report found that 7% of households (disproportionately poor and/or elderly) have no Internet access, 20% of them because of cost.

“We should make sure the government always provides an analogue alternative, especially as we move to digital IDs,” Clement-Jones said. In 2010, when Martha Lane Fox was campaigning to get the last 10% online, one could push back: why should they have to be? Today, paying at a parking meter requires an app and, as Royal Holloway professor Lizzie Coles-Kemp noted, smartphones aren’t enough for some services.

Milner finds that a third of those offline already struggle to engage with the NHS, creating “two-tier public services”. Clement-Jones added another example: people in temporary housing have to reapply weekly online – but there is no Internet provision in temporary housing.

Worse, however, is thinking technology will magically fix intractable problems. In Coles-Kemp’s example, if someone can’t do their prescribed rehabilitation exercises at home because they lack space, support, or confidence, no app will fix it. In her work on inclusive security technologies, she has long pushed for systems to be less hostile to users in the name of preventing fraud: “We need to do more work on the difference between scammers and people who are desperate to get something done.”

In addition, Milner said, tackling digital exclusion has to be widely embraced – by the Department for Work and Pensions, for example – not just handed off to DSIT. Much comes down to designers who are unlike the people on whom their systems will be imposed and whose direct customers are administrators. “The emphasis needs to shift to the creators of these technologies – policy makers, programmers. How do algorithms make decisions? What is the impact on others of liking a piece of content?”

Concern about the “digital divide” has been with us since the beginning of the Internet. It seems to have been gradually forgotten as online has become mainstream. It shouldn’t be: digital exclusion makes all the other kinds of exclusion worse and adds anger and frustration to an already morbidly divided society.

Illustrations: Martha Lane Fox in 2011 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The safe place

For a long time, there have been fears that technical decisions – new domain names ($), co-option of open standards or software, laws mandating data localization – would splinter the Internet. “Balkanize” was heard a lot.

A panel at the UK Internet Governance Forum a couple of weeks ago focused on exactly this topic, and was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter something I can’t reach: email goes into someone’s spam folder; a site or its content is off-limits because it’s been geofenced to conform with copyright or data protection laws; or a site mysteriously doesn’t load, with no explanation. The most likely explanation for that last is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using – but they don’t actually *say* that.

The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.

Here in the UK, this sense of fragmentation may be about to get a lot worse, as the long-heralded Online Safety Bill – written and expanded until it’s become a “Frankenstein bill”, as Mark Scott and Annabelle Dickson report at Politico – hurtles toward passage. This week saw fruitless debates on amendments in the House of Lords, and the bill will presumably be back in the Commons shortly, where it could be passed into law by this fall.

A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; such threats are so often idle. But in this case, I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that the bill’s requirement to age-verify users would force it to withdraw rather than violate its principle of collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.

In a presentation at UKIGF, Ian McRae, director of market intelligence for the prospective online safety regulator Ofcom, showed that Wikipedia would be just one of an estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.

In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In other words: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.

Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.

And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.

The UK is not alone in pursuing this path. Age verification in particular is catching on: the US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it, and Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of the bill’s provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.

This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.

This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.