Digital distrust

On Tuesday, at the UK Internet Governance Forum, a questioner asked this: “Why should I trust any technology the government deploys?”

She had come armed with a personal but generalizable anecdote. Since she renewed her passport in 2017, the electronic gates at every UK airport have routinely sent her to the human-staffed desk for rechecking, even though the same passport works perfectly well in electronic gates at airports in other countries. A New Scientist article by Adam Vaughan, which I can’t now locate, eventually explained why: the Home Office had deployed the system knowing it wouldn’t work for “people with my skin type”. That is, as you’ve probably already guessed, dark.

She directed her question to Katherine Yesilirmak, director of strategy in the Responsible Tech Adoption Unit, formerly the Centre for Data Ethics and Innovation, a subsidiary of the Department for Science, Innovation and Technology.

Yesilirmak did her best, mentioning the problem of bias in training data, the variability of end users, fairness, governmental responsibility for understanding the technology it procures (since it builds very little itself these days), and so on. She is clearly up to date, even referring to the latest study finding that AIs used by human resources consistently prefer résumés with white and male-presenting names over those with non-white and female-presenting names. But Yesilirmak didn’t really answer the questioner’s fundamental conundrum. Why *should* she trust government systems when they are knowingly commissioned with flaws that exclude her? Well, why?

Pause to remember that 20 years ago, Jim Wayman, a pioneer in biometric identification, told me, “People never have what you think they’re going to have where you think they’re going to have it.” Biometric systems must be built to accommodate outliers – and that is hard. For more, see Wayman’s potted history of third-party testing of modern biometric systems in the US (PDF).

Yesilirmak, whose LinkedIn profile indicates she’s been at the unit for a little under three years, reiterated that the government builds very little of its own technology these days. However, her group is partnering with analogues in other countries and international bodies to build tools and standards that she believes will help.

This panel was nominally about AI governance, but the connection that needed to be made was between what the questioner was describing – technology that makes some people second-class citizens – and digital exclusion, which was siloed off in a different panel. Most people describe the “digital divide” as a binary statistical matter: 1.7 million households are not online, and 40% of households don’t meet the digital living standard, per the Liberal Democrat peer Timothy Clement-Jones, who ruefully noted the “serious gap in knowledge in Parliament” regarding digital inclusion.

Clement-Jones, who is the co-chair of the All Party Parliamentary Group on Artificial Intelligence, cited the House of Lords Communications and Digital Committee’s January 2024 report. Another statistic came from Helen Milner: 23% of people with long-term illness or disabilities are digitally excluded.

The report cites Lloyds Bank’s annual Consumer Digital Index; the most recent edition found that Internet use is dropping among the over-60s, and that for the first time the percentage of people who had been offline in the previous three months had increased, to 4%. Fifteen percent of those offline are under 50, and overall about 4.7 million people can’t connect to wifi. Ofcom’s 2023 report found that 7% of households (disproportionately poor and/or elderly) have no Internet access, 20% of them because of cost.

“We should make sure the government always provides an analog alternative, especially as we move to digital IDs,” Clement-Jones said. In 2010, when Martha Lane Fox was campaigning to get the last 10% online, one could push back: why should they have to be? Today, paying a parking meter requires an app and, as Royal Holloway professor Lizzie Coles-Kemp noted, smartphones aren’t enough for some services.

Milner finds that a third of those offline already struggle to engage with the NHS, creating “two-tier public services”. Clement-Jones added another example: people in temporary housing have to reapply weekly online – but there is no Internet provision in temporary housing.

Worse, however, is thinking technology will magically fix intractable problems. In Coles-Kemp’s example, if someone can’t do their prescribed rehabilitation exercises at home because they lack space, support, or confidence, no app will fix it. In her work on inclusive security technologies, she has long pushed for systems to be less hostile to users in the name of preventing fraud: “We need to do more work on the difference between scammers and people who are desperate to get something done.”

In addition, Milner said, tackling digital exclusion has to be widely embraced – by the Department for Work and Pensions, for example – not just handed off to DSIT. Much comes down to designers who are unlike the people on whom their systems will be imposed and whose direct customers are administrators. “The emphasis needs to shift to the creators of these technologies – policy makers, programmers. How do algorithms make decisions? What is the impact on others of liking a piece of content?”

Concern about the “digital divide” has been with us since the beginning of the Internet. It seems to have been gradually forgotten as online has become mainstream. It shouldn’t be: digital exclusion makes all the other kinds of exclusion worse and adds anger and frustration to an already morbidly divided society.

Illustrations: Martha Lane Fox in 2011 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: A History of Fake Things on the Internet

A History of Fake Things on the Internet
By Walter J. Scheirer
Stanford University Press
ISBN 2023017876

One of Agatha Christie’s richest sources of plots was the uncertainty of identity in England’s post-war social disruption. Before then, she tells us, anyone arriving to take up residence in a village brought a letter of introduction; afterwards, old-time residents had to take newcomers at their own valuation. Had she lived into the 21st century, the arriving Internet would have given her whole new levels of uncertainty to play with.

In his recent book A History of Fake Things on the Internet, University of Notre Dame professor Walter J. Scheirer describes creating and detecting online fakes as an ongoing arms race. Where many people project doomishly that we will soon lose the ability to distinguish fakery from reality, Scheirer is more optimistic. “We’ve had functional policies in the past; there is no good reason we can’t have them again,” he concludes, adding that to make this happen we need a better understanding of the media that support the fakes.

I have a lot of sympathy with this view; as I wrote recently, things that fool people when a medium is new are instantly recognizable as fake once people become experienced with it. We adapt. No one now would be fooled by the images that looked real in the early days of photography. Our perceptions become more sophisticated, and we learn to examine context. Early fakes often work simply because we don’t know yet that such fakes are possible. Once we do know, we exercise much greater caution before believing. Teens who’ve grown up applying filters to the photos and videos they upload to Instagram and TikTok see images very differently than those of us who grew up with TV and film.

Scheirer begins his story with the hacker counterculture that saw computers as a source of subversive opportunities. His own research into media forensics began with Photoshop. At the time, many, especially in the military, worried that nation-states would fake content in order to deceive and manipulate. What they found, in much greater volume, were memes and what Scheirer calls “participatory fakery” – that is, the cultural outpouring of fakes for entertainment and self-expression, most of it harmless. Further chapters consider cheat codes in games, the slow conversion of hackers into security practitioners, adversarial algorithms and media forensics, shock-content sites, and generative AI.

Through it all, Scheirer remains optimistic that the world we’re moving into “looks pretty good”. Yes, we are discovering hundreds of scientific papers with faked data, faked results, or faked images, but we also have new analysis tools to detect them and Retraction Watch to catalogue them. The same new tools that empower malicious people enable many more positive uses for storytelling, collaboration, and communication. Perhaps forgetting that the computer industry relentlessly ignores its own history, he writes that we should learn from the past and react to the present.

The mention of scientific papers raises an issue Scheirer seems not to worry about: waste. Every retracted paper represents lost resources – public funding, scientists’ time and effort, and the same multiplied into the future for anyone who attempts to build on it. Figuring out how to automate reliable detection of chatbot-generated text does nothing to lessen the vast energy, water, and human resources that go into building and maintaining all those data centers and training models (see also filtering spam). Like Scheirer, I’m largely optimistic about our ability to adapt to a more slippery virtual reality. But the amount of wasted resources is depressing and, given climate change, dangerous.

Facts are scarified

The recent doctored Palace photo has done almost as much as the arrival of generative AI to raise fears that in future we will completely lose the ability to identify fakes. The royal photo was sloppily composited – no AI needed – for reasons unknown (though Private Eye has a suggestion). A lot of conspiracy theorizing could be avoided if the palace would release the untouched original(s), but as things are, the photograph is a perfect example of how to provide the fuel for spreading nonsense to 400 million people.

The most interesting thing about the incident was discovering the rules media apply to retouching photos. AP specified, for example, that it does not use altered or digitally manipulated images. It allows cropping and minor adjustments to color and tone where necessary, but bans more substantial changes, even retouching to remove red eye. As Holly Hunter’s character says, trying to uphold standards in the 1987 movie Broadcast News (written by James Brooks), “We are not here to stage the news.”

The desire to make a family photo as appealing as possible is understandable; the motives behind spraying the world with misinformation are less clear and more varied. I’ve long argued here that combating misinformation and disinformation is similar to cybersecurity in this respect: the complexity of the problem and the diversity of actors and agendas. At last year’s Disinformation Summit in Cambridge, cybersecurity was, sadly, one of the missing communities.

Just a couple of weeks ago the BBC announced its adoption of C2PA, a standard for authenticating images developed by a group of technology and media companies including the BBC, the New York Times, Microsoft, and Adobe. The BBC says that many media organizations are beginning to adopt C2PA, and even Meta is considering it. Edits must be signed, creating a chain of provenance all the way back to the original photo. In 2022, the BBC and the Royal Society co-hosted a workshop on digital provenance, following a Royal Society report, at which C2PA featured prominently.
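For the curious, here is a rough Python sketch of the general idea behind a provenance chain – emphatically not C2PA’s actual manifest format or API, which embeds cryptographically signed metadata in the file itself; the function and field names below are mine, invented purely for illustration. Each edit record commits to a hash of the previous record and of the image as edited, so any undisclosed change breaks verification.

```python
# Toy illustration of a provenance chain (NOT C2PA's real format).
# Each edit record commits to the previous record's hash and the edited
# image's hash; a real system would also sign each record with the
# editor's private key rather than relying on hashing alone.
import hashlib
import json

def record_edit(prev_record_hash: str, action: str, image_bytes: bytes) -> dict:
    """Create an edit record that commits to the prior record and the new image state."""
    record = {
        "prev": prev_record_hash,
        "action": action,
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps({k: record[k] for k in ("prev", "action", "image_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_chain(records: list[dict]) -> bool:
    """Walk the chain from the original capture; any altered link fails."""
    prev = "genesis"
    for r in records:
        expected = hashlib.sha256(
            json.dumps({"prev": prev, "action": r["action"],
                        "image_hash": r["image_hash"]}, sort_keys=True).encode()
        ).hexdigest()
        if r["prev"] != prev or r["record_hash"] != expected:
            return False
        prev = r["record_hash"]
    return True

# Example: a capture followed by a declared crop.
original = b"raw sensor data"
cropped = b"cropped pixels"
chain = [record_edit("genesis", "capture", original)]
chain.append(record_edit(chain[-1]["record_hash"], "crop", cropped))
print(verify_chain(chain))  # True; alter any field and it prints False
```

The hard part, of course, isn’t the hashing; it’s deciding who gets to sign the root record and whom viewers must trust, which is where the criticisms below come in.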

That’s potentially a valuable approach for publishing and broadcast, where the conduit to the public is controlled by one of a relatively small number of organizations. And you can see why those organizations would want it: they need, and in many cases are struggling to retain, public trust. It is, however, too complex a process for the hundreds of millions of people with smartphone cameras posting images to social media, and unworkable for citizen journalists capturing newsworthy events in real time. Ancillary issue: sophisticated phone cameras try so hard to normalize the shots we take that they falsify the image at source. In 2020, Californians attempting to capture the orange color of their smoke-filled sky were defeated by autocorrection that turned it grey. So, many images are *originally* false.

In a lengthy blog post, Neal Krawetz analyzes the difficulties with C2PA. He lists security flaws, but he also opposes the “appeal to authority” approach, which he dubs a “logical fallacy”. In the context of the Internet, it’s worse than that; we already know what happens when a tiny handful of commercial companies (in this case, chiefly Adobe) become the gatekeepers for billions of people.

All of this was why I was glad to hear about work in progress at a workshop last week, led by Mansoor Ahmed-Rengers, a PhD candidate studying system security: Human-Oriented Proof Standard (HOPrS). The basic idea is to build an “Internet-wide, decentralised, creator-centric and scalable standard that allows creators to prove the veracity of their content and allows viewers to verify this with a simple ‘tick’.” Co-sponsoring the workshop was Open Origins, a project to distinguish between synthetic and human-created content.

It’s no accident that HOPrS’ mission statement echoes the ethos of the original Internet; as security researcher Jon Crowcroft explains, it’s part of long-running work on redecentralization. Among HOPrS’ goals, Ahmed-Rengers listed: minimal centralization; the ability for anyone to prove their content; Internet-wide scalability; open decision making; minimal disruption to workflow; and easy interpretability of proof/provenance. The project isn’t trying to cover all bases – that’s impossible. Given the variety of motivations for fakery, there will have to be a large ecosystem of approaches. Rather, HOPrS is focusing specifically on the threat model of an adversary determined to sow disinformation, giving journalists and citizens the tools they need to understand what they’re seeing.

Fakes are as old as humanity. In a brief digression, we were reminded that the early days of photography were full of fakery: the Cottingley Fairies, the Loch Ness monster, many dozens of spirit photographs. The Cottingley Fairies, cardboard cutouts photographed by Elsie Wright, 16, and Frances Griffiths, 9, were accepted as genuine by Sherlock Holmes creator Sir Arthur Conan Doyle, famously a believer in spiritualism. To today’s eyes, trained on millions of photographs, they instantly read as fake. Or take Ireland’s Knock apparitions: flat, unmoving, and, as philosophy professor David Berman explained in 1979, magic lantern projections. Our generation, who’ve grown up with movies and TV, would, I think, have instantly recognized those as fake, too. Which I believe tells us something: yes, we need tools, but we ourselves will get better at detecting fakery, as unlikely as it seems right now. The speed with which the royal photo was dissected showed how much we’ve learned just since generative AI became available.

Illustrations: The first of the Cottingley Fairies photographs (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Irreparable harm, part II

Time to revisit the doping case of Kamila Valieva, the 15-year-old Russian figure skater who was allowed to compete in the February 2022 Winter Olympics despite testing positive for the banned substance trimetazidine, on the grounds that barring her from competition would cause her “irreparable harm”. The harm was never defined, but the reasoning presumably went something like this: careers are short, Valieva was a multiple champion and the top prospect for gold, and she could be disqualified later but couldn’t retroactively compete. An adult would have been disqualified there and then, but the World Anti-Doping Agency’s 2021 code made 15-year-olds “protected persons”.

Two years on, the Court of Arbitration for Sport has confirmed she is banned for four years, backdated to December 2021, and will be stripped of the results, prizes, medals, and awards she has won in the interim. CAS, the ultimate authority in such cases, did not buy her defense that her grandfather, who is prescribed trimetazidine, inadvertently contaminated her food. She will be eligible to compete again in December 2026.

In a paper, Marcus Campos, Jim Parry, and Irena Martínková conclude that WADA’s “protected person” concept “transforms…potential victims into suspects”. As they write, the “protection” is more imagined than real, since minors are subjected to the same tests and the same sanctions as adults. While the code talks of punishing those involved in doping minors, to date the only person suffering consequences for Valieva’s positive test is Valieva, still a minor but no longer a “protected person”.

A reminder: besides her positive test for trimetazidine, widely used in Russia to prevent angina attacks, Valieva had therapeutic use exemptions for two other heart medications. How “protected” is that? Shouldn’t the people authorizing TUEs raise the alarm when a minor is being prescribed multiple drugs for a condition vastly more typical of middle-aged men?

According to the Anti-doping Database, doping is not particularly common in figure skating – but Russia has the most cases. WADA added trimetazidine to the banned list in 2014 as a metabolic modulator; if it helps athletes it’s by improving cardiovascular efficiency and therefore endurance. CNN compares it to meldonium, the drug that got tennis player Maria Sharapova banned in 2016.

In a statement, the World Anti-Doping Agency said it welcomed the ruling, adding: “The doping of children is unforgivable. Doctors, coaches or other support personnel who are found to have provided performance-enhancing substances to minors should face the full force of the World Anti-Doping Code. Indeed, WADA encourages governments to consider passing legislation – as some have done already – making the doping of minors a criminal offence.” That seems essential for real protection; otherwise the lowered sanctions imposed on minors could become an incentive to take greater risks in doping them.

The difficulty is that underage athletes are simultaneously children and professional athletes competing as peers with adults. Within the sport itself, of course, the rules must be the same; 16-year-old Mirra Andreeva doesn’t get an extra serve or a larger tennis court to hit into. Hence 2014 bronze medalist Ashley Wagner’s response to an exTwitter poster calling the ruling irrational and cruel: “every athlete plays by the same rules”. But anti-doping protocols are different, involving issues of consent, medical privacy, and public shaming. For the rest of the field, it’s not fair to exempt minors from the doping rules that apply to everyone else; for the minor, who lacks agency and autonomy, it’s not fair if you don’t. This is only part of the complexity of designing an anti-doping system and applying it equally to minors, 40-something hundred-millionaire tennis players, and minimally funded athletes in minority sports who go back to their day jobs when the competition ends.

Along with its statement, WADA launched the Operation Refuge report (PDF) on doping and minors. The most commonly identified doping substance for both girls and boys is the diuretic furosemide, followed by methylphenidate (better known as the ADHD medication Ritalin). The largest numbers of positive tests come from Russia, India, and China. The youngest child sanctioned for a doping violation was 12. The report goes on to highlight the trauma and isolation experienced by child athletes who test positive – one day a sporting hero, the next a criminal.

The alphabet soup of organizations in charge of Valieva’s case – the Russian Anti-Doping Agency, the International Skating Union, WADA, CAS – could hardly have made a bigger mess. The delays alone: it took six weeks to notify Valieva of her positive test, and two years to decide her case. Then, despite the expectation that disqualifying Valieva would disqualify her entire team, the ISU recalculated the standings, giving Russia the bronze medal, the US the gold, and Japan the silver. The Canadian team, which placed fourth, is considering an appeal; Russia is preparing one. Ironically, according to this analysis by Martina Frammartino, the Russian bench is so strong that it could easily have won gold if Valieva’s positive test had come through in time to replace her.

I’ve never believed that the anti-doping system was fit for purpose; considered as a security system, it has too many misaligned incentives, as became clear in 2016, when the extent of Russian state-sponsored doping became public. This case shows the system at its worst.

Illustrations: Kamila Valieva in 2018 (via Luu at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.