The brittle state

We’re now almost a year on from Rishi Sunak’s AI Summit, which failed to accomplish any of its most likely goals: cement his position as the UK’s prime minister; establish the UK as a world leader in AI fearmongering; or get him the new life in Silicon Valley some commentators seemed to think he wanted.

Arguably, however, it has heightened the belief that computer systems are “intelligent” – that is, that they understand what they’re calculating. The chatbots based on large language models make that worse, because, as James Boyle cleverly wrote, for the first time in human history, “sentences do not imply sentience”. Mix in paranoia over the state of the world and you get some truly terrifying systems being put into situations where they can catastrophically damage people’s lives. We should know better by now.

The Open Rights Group (I’m still on its advisory council) is campaigning against the Home Office’s planned eVisa scheme. In the previouslies: between 1948 and 1971, people from Caribbean countries, many of whom had fought for Britain in World War II, were encouraged to come to the UK to help rebuild the post-war economy. They are known as the “Windrush generation” after the first ship that brought them. As Commonwealth citizens, they didn’t need visas or documentation; they and their children had the automatic right to live and work here.

Until 1973, when the law changed; later arrivals needed visas. The snag was that earlier arrivals had no idea they had any reason to worry…until the day they discovered, when challenged, that they had no way to prove they were living here legally. That day came in 2017, when the “hostile environment” policy that Theresa May (who this week joined the House of Lords) had introduced as home secretary began to bite. Intended to push illegal immigrants to go home, it moved the “border” deep into British life by requiring landlords, banks, and others to conduct immigration status checks. The result was that some of the Windrush group – again, legal residents – were refused medical care, denied housing, or deported.

When Brexit became real, millions of Europeans resident in the UK were shoved into the same position: arrived legally, needing no documentation, but in future required to prove their status. This time, the UK issued them documents confirming their status as permanently settled.

Until December 31, 2024, when all those documents with no expiration date will abruptly expire because the Home Office has a new system that is entirely online. As ORG and the3million explain it, come January 1, 2025, about 4 million people will need online accounts to access the new system, which generates a code to give the bank or landlord temporary access to their status. The new system will apparently hit a variety of databases in real time to perform live checks.

Now, I get that the UK government doesn’t want anyone to be in the country for one second longer than they’re entitled to. But we don’t even have to say, “What could possibly go wrong?” because we already *know* what *has* gone wrong for the Windrush generation. Anyone who has to prove their status off the cuff in time-sensitive situations really needs proof they can show when the system fails.

A proposal like this can only come from an irrational belief in the perfection – or at least, perfectibility – of computer systems. It assumes that Internet connections won’t be interrupted, that databases will return accurate information, and that everyone involved will have the necessary devices and digital literacy to operate it. Even without ORG’s and the3million’s analysis, these are bonkers things to believe – and they are made worse by a helpline that is only available during the UK work day.
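To make those assumptions concrete, here is a minimal sketch of what an online-only, real-time check involves. The service names and failure rates are invented for illustration (the Home Office has not published its architecture), but the chain of live dependencies is the point: each link is a fresh way for a legitimate resident to fail a check at the moment it matters.

```python
# Hypothetical sketch of an online-only "share code" status check.
# Every service name and failure rate here is invented; the point is the
# number of live dependencies that must all succeed at the same moment.
import random

def live_status_check(share_code: str) -> str:
    """Simulate the chain of real-time lookups an online-only check needs."""
    dependencies = [
        "holder's internet access",     # holder must be online to generate the code
        "checker's internet access",    # landlord or bank must be online too
        "account/login service",        # holder must be able to sign in
        "share-code service",           # code must be generated and still valid
        "immigration status database",  # live record must be present and correct
        "identity-matching service",    # record must match the person standing there
    ]
    for dep in dependencies:
        if random.random() < 0.01:  # toy 1% failure chance per dependency
            return f"CHECK FAILED: {dep} unavailable or returned an error"
    return f"status confirmed for code {share_code}"

# A static, offline proof (paper, a saved PDF, a QR code) has none of these
# live dependencies at the moment the proof is needed.
print(live_status_check("ABC-123-XYZ"))
```

Even with each link 99% reliable, six links compound to roughly a 6% chance that any given check fails through no fault of the person being checked.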

There is a lot of this kind of credulity about, most of it connected with “AI”. AP News reports that US police departments are beginning to use chatbots to generate crime reports based on the audio from their body cams. And, says Ars Technica, the US state of Nevada will let AI decide unemployment benefit claims, potentially producing denials that can’t be undone by a court. BrainFacts reports that decision makers using “AI” systems are prone to automation bias – that is, they trust the machine to be right. Of course, that’s just good job security: you won’t be fired for following the machine, but you might for overriding it.

The underlying risk with all these systems, as a security expert might say, is complexity: the more complex a system is, the more susceptible it is to inexplicable failures. There is very little to go wrong with a piece of paper that plainly states your status, for values of “paper” including paper, QR codes downloaded to phones, or PDFs saved to a desktop/laptop. Much can go wrong with the system underlying that “paper”, but, crucially, when a static confirmation is saved offline, managing that underlying complexity can take place when the need is not urgent.

It ought to go without saying that computer systems with a profound impact on people’s lives should be backed up by redundant systems that can be used when they fail. Yet the world the powers that be apparently want to build is one that underlines their power to cause enormous stress for everyone else. Systems like eVisas are as brittle as just-in-time supply chains. And we saw what happened to those during the emergency phase of the covid pandemic.

Illustrations: Empty supermarket shelves in March 2020 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The fear factor

Be careful what you allow the authorities to do to people you despise, because one day those same tools will be turned against you.

In the last few weeks, the shocking stabbing of three young girls at a dance class in Southport became the spark that ignited riots across the UK by people who apparently believed social media theories that the 17-year-old boy responsible was Muslim, a migrant, or a terrorist. With the boy a week from his 18th birthday, the courts ruled police could release his name in order to make clear that he was not Muslim and had been born in Wales. It failed to stop the riots.

Police and the courts have acted quickly; almost 800 people have been arrested, 350 have been charged, and hundreds are in custody. In a moving development, on a night when more than 100 riots were predicted, tens of thousands of ordinary citizens thronged city streets and formed protective human chains around refugee centers in order to block the extremists. The riots have quieted down, but police are still busy arresting newly-identified suspects. And the inevitable question is being asked: what do we do next to keep the streets safe and calm?

London mayor Sadiq Khan quickly called for a review of the Online Safety Act, saying he doesn’t believe it’s fit for purpose. Cabinet minister Nick Thomas-Symonds (Labour-Torfaen) has suggested the month-old government could change the law.

Meanwhile, prime minister Keir Starmer favours a wider rollout of live facial recognition to track thugs and prevent them from traveling to places where they plan to cause social unrest, copying systems the police use to prevent football hooligans from even boarding trains to matches. This proposal is startling because, before standing for Parliament, Starmer was a human rights lawyer. One could reasonably expect him to know that facial recognition systems have a notorious history of inaccuracy due to biases baked into their algorithms via training data, and that in the UK there is no regulatory framework to provide oversight. Silkie Carlo, the director of Big Brother Watch, immediately called the proposal “alarming” and “ineffective”, warning that it turns people into “walking ID cards”.

As the former head of Liberty, Shami Chakrabarti used to say when ID cards were last proposed, moves like these fundamentally change the relationship between the citizen and the state. Such a profound change deserves more thought than a reflex fear reaction in a crisis. As Ciaran Thapar argues at the Guardian, today’s violence has many causes, beginning with the decay of public services for youth and mental health, and it’s those causes that need to be addressed. Thapar invokes his memories of how his community overcame the “open, violent racism” of the 1980s Thatcher years in making his recommendations.

Much of the discussion of the riots has blamed social media for propagating hate speech and disinformation, along with calls for rethinking the Online Safety Act. This is also frustrating. First of all, the OSA, which was passed in 2023, isn’t even fully implemented yet. When last seen, Ofcom, the regulator designated to enforce it, was in the throes of recruiting people by the dozen, working out what sites will be in scope (about 150,000, they said), and developing guidelines. Until we see the shape of the regulation in practice, it’s too early to say the act needs expansion.

Second, hate speech and incitement to violence are already illegal under other UK laws. Just this week, a woman was jailed for 15 months for a comment to a Facebook group with 5,100 members that advocated violence against mosques and the people inside them. The OSA was not needed to prosecute her.

And third, while Elon Musk and Mark Zuckerberg definitely deserve to have anger thrown their way, focusing solely on the ills of social media makes no sense given the decades that right-wing newspapers have spent sowing division and hatred. Even before Musk, Twitter often acted as a democratization of the kind of angry, hate-filled coverage long seen in the Daily Mail (and others). These are the wedges that created the divisions that malicious actors can now exploit by disseminating disinformation, a process systematically explained by Renee DiResta in her new book, Invisible Rulers.

The FBI’s investigation of the January 6, 2021 insurrection at the US Capitol provides a good exemplar of how modern investigations can exploit new technologies. Law enforcement applied facial recognition to CCTV footage and massive databases, and studied social media feeds, location data and cellphone tracking, and other data. As Charlie Warzel and Stuart A. Thompson wrote at the New York Times in 2021, even though most of us agree with the goal of catching and punishing insurrectionists and rioters, the data “remains vulnerable to use and abuse” against protests of other types – such as this year’s pro-Palestinian encampments.

The same argument applies in the UK. Few want violence in the streets. But the unilateral imposition of live facial recognition, among other tracking technologies, can’t be allowed. There must be limits and safeguards. ID cards issued in wartime could be withdrawn when peace came; surveillance technologies, once put in place, tend to become permanent.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell once lived.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Deja news

At the first event organized by the University of West London group Women Into Cybersecurity, a questioner asked how the debates around the Internet have changed since I wrote the original 1997 book net.wars.

Not much, I said. Some chapters have dated, but the main topics are constants: censorship, freedom of speech, child safety, copyright, access to information, digital divide, privacy, hacking, cybersecurity, and always, always, *always* access to encryption. Around 2010, there was a major change when the technology platforms became big enough to protect their users and business models by opposing government intrusion. That year Google launched the first version of its annual transparency report, for example. More recently, there’s been another shift: these companies have engorged to the point where they need not care much about their users or fear regulatory fines – the stage Ed Zitron calls the rot economy and Cory Doctorow dubs enshittification.

This is the landscape against which we’re gearing up for (yet) another round of recursion. April 25 saw the passage of amendments to the UK’s Investigatory Powers Act (2016). These are particularly charmless, as they expand the circumstances under which law enforcement can demand access to Internet Connection Records, allow the government to require “exceptional lawful access” (read: backdoored encryption) and require technology companies to get permission before issuing security updates. As Mark Nottingham blogs, no one should have this much power. In any event, the amendments reanimate bulk data surveillance and backdoored encryption.

Also winding through Parliament is the Data Protection and Digital Information bill. The IPA amendments threaten national security by demanding the power to weaken protective measures; the data bill threatens to undermine the adequacy decision under which the UK’s data protection law is deemed to meet the requirements of the EU’s General Data Protection Regulation. Experts have already warned that the bill puts that adequacy at risk. If this government proceeds, as it gives every indication of doing, the next, presumably Labour, government may find itself awash in an economic catastrophe as British businesses become persona-non-data to their European counterparts.

The Open Rights Group warns that the data bill makes it easier for government, private companies, and political organizations to exploit our personal data while weakening subject access rights, accountability, and other safeguards. ORG is particularly concerned about the impact on elections, as the bill expands the range of actors who are allowed to process personal data revealing political opinions on a new “democratic engagement activities” basis.

If that weren’t enough, another amendment gives the Department for Work and Pensions the power to monitor all bank accounts that receive benefit payments, including the state pension – to reduce overpayments and other types of fraud, of course. It also covers any bank account connected to those accounts, such as those of landlords, carers, parents, and partners. At Computer Weekly, Bill Goodwin suggests that the upshot could be to deter landlords from renting to anyone receiving state benefits or entitlements. The idea is that banks will use criteria we can’t access to flag up accounts for the DWP to inspect more closely, and over the mass of 20 million accounts there will be plenty of mistakes to go around. Safe prediction: there will be horror stories of people denied benefits without warning.

And in the EU… Techcrunch reports that the European Commission (always more surveillance-happy and less human rights-friendly than the European Parliament) is still pursuing its proposal to require messaging platforms to scan private communications for child sexual abuse material. Let’s do the math of truly large numbers: billions of messages, even a teeny-tiny percentage of inaccuracy, literally millions of false positives! On Thursday, a group of scientists and researchers sent an open letter pointing out exactly this. Automated detection technologies perform poorly, innocent images may occur in clusters (as when a parent sends photos to a doctor), and such a scheme requires weakening encryption; in any case, it would be better to focus on eliminating child abuse (taking CSAM along with it).
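The base-rate arithmetic is worth writing out. With purely illustrative numbers (not figures from the letter), even a classifier that wrongly flags only 0.1% of innocent messages generates millions of false alarms a day at messaging-platform scale:

```python
# Back-of-the-envelope false-positive arithmetic with illustrative numbers.
messages_per_day = 10_000_000_000  # order of magnitude for a large messaging platform
false_positive_rate = 0.001        # a generously low 0.1% error rate on innocent content

false_alarms_per_day = messages_per_day * false_positive_rate
print(f"{false_alarms_per_day:,.0f} innocent messages flagged per day")
# -> 10,000,000 innocent messages flagged per day, each one a private photo or
#    conversation pulled out for review, before counting any true positives.
```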

Finally, age verification, which has been pending in the UK since at least 2016, is becoming a worldwide obsession. At least eight US states and the EU have laws mandating age checks, and the Age Verification Providers Association is pushing to make the Internet “age-aware persistently”. Last month, the BSI convened a global summit to kick off the work of developing a worldwide standard. These moves are the latest push against online privacy; age checks will be applied to *everyone*, and while they could be designed to respect privacy and anonymity, the most likely outcome is that they won’t be. In 2022, the French data protection regulator, CNIL, found that current age verification methods are both intrusive and easily circumvented. In the US, Casey Newton is watching a Texas case about access to online pornography and age verification that threatens to challenge First Amendment precedent in the Supreme Court.

Because the debates are so familiar – the arguments rarely change – it’s easy to overlook how profoundly all this could change the Internet. An age-aware Internet where all web use is identified, encrypted messaging services have shut down rather than compromise their users, and every action is suspicious until judged harmless…those are the stakes.

Illustrations: Angel sensibly smashes the ring that makes vampires impervious (in Angel, “In the Dark” (S01e03)).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Borderlines

Think back to the year 2000. New York’s World Trade Center still stood. Personal digital assistants were a niche market. There were no smartphones (the iPhone arrived in 2007) or tablets (the iPad took until 2010). Social media was nascent; Facebook first opened in 2004. The Good Friday agreement was just two years old, and for many in Britain “terrorists” were still “Irish”. *That* was when the UK passed the Terrorism Act (2000).

Usually when someone says the law can’t keep up with technological change they mean that technology moves faster than regulation can respond. What the documentary Phantom Parrot shows, however, is that technological change can profoundly alter the consequences of laws already on the books. The film’s worked example is Schedule 7 of the 2000 Terrorism Act, which empowers police to stop, question, search, and detain people passing through the UK’s borders. They do not need prior authority or suspicion, but may only stop and question people for the purpose of determining whether the individual may be or have been concerned in the commission, preparation, or instigation of acts of terrorism.

Today this law means that anyone arriving at the UK border may be compelled to unlock access to data charting their entire lives. The Hansard record of the debate on the bill shows clearly that lawmakers foresaw problems: the classification of protesters as terrorists, the uselessness of fighting terrorism by imprisoning the innocent (Jeremy Corbyn), the reversal of the presumption of innocence. But they could not foresee how far-reaching the powers the bill granted would become.

The film’s framing story begins in November 2016, when Muhammed Rabbani arrived at London’s Heathrow Airport from Doha and was stopped and questioned by police under Schedule 7. They took his phone and laptop and asked for his passwords. He refused to supply them. On previous occasions, when he had similarly refused, they’d let him go. This time, he was arrested. Under Schedule 7, the penalty for such a refusal can be up to three months in jail.

Rabbani is managing director of CAGE International, a human rights organization that began by focusing on prisoners seized under the war on terror and expanded its mission to cover “confronting other rule of law abuses taking place under UK counter-terrorism strategy”. Rabbani’s refusal to disclose his passwords was, he said later, because he was carrying 30,000 confidential documents relating to a client’s case. A lawyer can claim client confidentiality; an NGO cannot. In 2018, the appeals court ruled the password demands were lawful.

In September 2017, Rabbani was convicted. He was given a 12-month conditional discharge and ordered to pay £620 in costs. As Rabbani says in the film, “The law made me a terrorist.” No one suspected him of being a terrorist or placing anyone in danger; but the judge made clear she had no choice under the law, and so he was nonetheless convicted of a terrorism offense. On appeal in 2018, his conviction was upheld. We see him collect his returned devices – five years on from his original detention.

Britain is not the only country that regards him with suspicion. Citing his conviction, in 2023 France banned him, and, he claims, Poland deported him.

Unsurprisingly, CAGE is on the first list of groups that may be dubbed “extremist” under the new definition of extremism released last week by communities secretary Michael Gove. The direct consequence of this designation is a ban on participation in public life – chiefly, meetings with central and local government. The expansion of the meaning of “extremist”, however, is alarming activists on all sides.

Director Kate Stonehill tells the story of Rabbani’s detention partly through interviews and partly through a reenactment using wireframe-style graphics and a synthesized voice that reads out questions and answers from the interview transcripts. A cello of doom provides background ominance. Laced through this narrative are others. A retired law enforcement officer teaches a class to use extraction and analysis tools, in which we see how extensive the information available to them really is. Ali Al-Marri and his lawyer review his six years of solitary detention as an enemy combatant in Charleston, South Carolina. Lastly, Stonehill calls on Ryan Gallagher’s reporting, which exposed the titular Phantom Parrot, the program to exploit the data retained under Schedule 7. There are no records of how many downloads have been taken.

The retired law enforcement officer’s class is practically satire. While saying that he himself doesn’t want to be tracked for safety reasons, he tells students to grab all the data they can when they have the opportunity. They are in Texas: “Consent’s not even a problem.” Start thinking outside of the box, he tells them.

What the film does not stress is this: rights are largely suspended at all borders. In 2022, the UK extended Schedule 7 powers to include migrants and refugees arriving in boats.

The movie’s future is bleak. At the Chaos Computer Congress, a speaker warns that gait recognition, eye movement detection, speech analysis (accents, emotion), and other types of analysis will be much harder to escape and will enable watchers to do far more with the ever-vaster stores of data collected from and about each of us.

“These powers are capable of being misused,” said Douglas Hogg in the 1999 Commons debate. “Most powers that are capable of being misused will be misused.” The bill passed 210-1.

Illustrations: Still shot from the wireframe reenactment of Rabbani’s questioning in Phantom Parrot.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

A surveillance state of mind

”Do computers automatically favor authoritarianism?” a friend asked recently. Or, are they fundamentally anti-democratic?

Certainly, at the beginning, many thought that both the Internet and personal computers (think, for example, of Apple’s famed Super Bowl ad, “1984”) would favor democratic ideals by embedding values such as openness, transparency, and collaborative policy-making in their design. Universal access to information and to networks of distribution was always going to have downsides, but on balance was going to be a Good Thing (actually, I still believe this). So, my friend was asking, were those hopes always fundamentally absurd, or were the problems of disinformation and widespread installation of surveillance technology always inevitable for reasons inherent in the technology itself?

Computers, like all technology, are what we make them. But one fundamental characteristic does seem to me unavoidable: they upend the distribution of data-related costs. In the physical world, more data always involved more expense: storing it required space, and copying or transmitting it took time, ink, paper, and personnel. In the computer world, more data is only marginally more expensive, and what costs remain have kept falling for 70 years. For most purposes, more digital data incurs minimal costs. The expenses of digital data only kick in when you curate it: selection and curation take time and personnel. So the easiest path with computer data is always to keep it. In that sense, computers inevitably favor surveillance.

The marketers at companies that collect data about us try to argue this is a public *good* because doing so enables them to offer personalized services that benefit us. Underneath, of course, there are too many economic incentives for them not to “share” – that is, sell – it onward, creating an ecosystem that sends our data careening all over the place, and where “personalization” becomes “surveillance” and then, potentially, “malveillance”, which is definitely not in our interests.

At a 2011 workshop on data abuse, participants noted that the mantra of the day was “the data is there, we might as well use it”. At the time, there was a definite push from the industry to move from curbing data collection to regulating its use instead. But this is the problem: data is tempting. This week has provided a good example of just how tempting, in the form of a provision in the UK’s criminal justice bill that will allow police to use the database of driver’s license photos for facial recognition searches. “A permanent police lineup,” privacy campaigners are calling it.

As long ago as 1996, the essayist and former software engineer Ellen Ullman called out this sort of temptation, describing it as a system “infecting” its owner. Data tempts those with access to it to ask questions they couldn’t ask before. In many cases that’s good. Data enables Patrick Ball’s Human Rights Data Analysis Group to establish “who did what to whom” in cases of human rights abuse. But, on the downside, in Ullman’s example it undermines the trust between a secretary and her boss, who realizes he can use the system to monitor her work, despite prior decades of trust. In the UK police example, the downside is tempting the authorities to combine the country’s extensive network of CCTV images and the largest database of photographs of UK residents. “Crime scene investigations,” say police and ministers. “Chill protests,” the rest of us predict. In a story I’m writing for the successor to the Cybersalon anthology Twenty-Two Ideas About the Future, I imagined a future in which police have the power and technology to compel every camera in the country to join a national network they control. When it fails to solve an important crime of the day, they successfully argue it’s because the network’s availability was too limited.

The emphasis on personalization as a selling point for surveillance – if you turn it off you’ll get irrelevant ads! – reminds me that studies of astrology starting in 1949 have found that people’s rating of their horoscopes varies directly with how personalized they perceive them to be. The horoscope they are told has been drawn up just for them by an astrologer gets much higher ratings than the horoscope they are told is generally true of people with their sun sign – even when it’s the *same* horoscope.

Personalization is the carrot businesses use to get us to feed our data into their business models; their privacy policies dictate the terms. Governments can simply compel disclosure as a requirement for a benefit we’re seeking – like the photo required to get a driver’s license, passport, or travel pass. Or, under greater duress, to apply for or await a decision about asylum, or try to cross a border.

“There is no surveillance state,” then-Home Secretary Theresa May said in 2014. No, but if you put all the pieces in place, a future government in a malveillance state of mind can turn it on at will.

So, going back to my friend’s question. Yes, of course we can build the technology so that it favors democratic values instead of surveillance. But because of that fundamental characteristic that makes creating and retaining data the default, and because of the business incentives currently exploiting the results, doing so requires effort and thought. It is easier to surveil. Malveillance, however, requires power and a trust-no-one state of mind. That’s hard to design out.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell lived circa 1927.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

The grown-ups

In an article this week in the Guardian, Adrian Chiles asks what decisions today’s parents are making that their kids will someday look back on in horror the way we look back on things from our childhood. Probably his best example is riding in cars without seatbelts (which I’m glad to say I survived). In contrast to his suggestion, I don’t actually think tomorrow’s parents will look back and think they shouldn’t have had smartphones, though it’s certainly true that last year a current parent MP (whose name I’ve lost) gave an impassioned speech opposing the UK’s just-passed Online Safety Act in which she said she had taken risks on the Internet as a teenager that she wouldn’t want her kids to take now.

Some of that, though, is that times change consequences. I knew plenty of teens who smoked marijuana in the 1970s. I knew no one whose parents found them severely ill from overdoing it. Last week, the parent of a current 15-year-old told me he’d found exactly that. His kid had made the classic error (see several 2010s sitcoms) of not understanding how slowly gummies act. Fortunately, marijuana won’t kill you, as the parent found to his great relief after some frenzied online searching. Even in 1972, it was known that consuming marijuana by ingestion (for example, in brownies) made it take effect more slowly. But the marijuana itself, by all accounts, was less potent. It was, in that sense, safer (although: illegal, with all the risks that involves).

The usual excuse for disturbing levels of past parental risk-taking is “We didn’t know any better”. A lot of times that’s true. When today’s parents of teenagers were 12 no one had smartphones; when today’s parents were teens their parents had grown up without Internet access at home; when my parents were teens they didn’t have TV. New risks arrive with every generation, and each new risk requires time to understand the consequences of getting it wrong.

That is, however, no excuse for some of the decisions adults are making about systems that affect all of us. Also this week and also at the Guardian, Akiko Hart, interim director of Liberty, writes scathingly about government plans to expand the use of live facial recognition to track shoplifters. Under Project Pegasus, shops will use technology provided by Facewatch.

I first encountered Facewatch ten years ago at a conference on biometrics. Even then the company was already talking about “cloud-based crime reporting” in order to deter low-level crime. And even then there were questions about fairness. For how long would shoplifters remain on a list of people to watch closely? What redress was there going to be if the system got it wrong? Facewatch’s attitude seemed to be simply that what the company was doing wasn’t illegal because its customer companies were sharing information across their own branches. What Hart is describing, however, is much worse: a state-backed automated system that will see ten major retailers upload their CCTV images for matching against police databases. Policing minister Chris Philp hopes to expand this into a national shoplifting database including the UK’s 45 million passport photos. Hart suggests instead tackling poverty.

Quite apart from how awful all that is, what I’m interested in here is the increased embedding in public life of technology we already know is flawed and discriminatory. Since 2013, myriad investigations have found the algorithms that power facial recognition to have been trained on unrepresentative databases that make them increasingly inaccurate as the subjects diverge from “white male”.

There are endless examples of misidentification leading to false arrests. Last month, a man who was pulled over on the road in Georgia filed a lawsuit after being arrested and held for several days for a crime he didn’t commit, which had taken place in Louisiana, a state he had never visited.

In 2021, in a story I’d missed, the University of Illinois at Urbana-Champaign announced it would discontinue using Proctorio, remote proctoring software that monitors students for cheating. The issue: the software frequently fails to recognize non-white faces. In a retail store, this might mean being followed until you leave. In an exam situation, this may mean being accused of cheating and having your results thrown out. A few months later, at Vice, Todd Feathers reported that a student researcher had studied the algorithm Proctorio was using and found its facial detection model failed to recognize black faces more than half the time. Late last year, the Dutch Institute of Human Rights found that using Proctorio could be discriminatory.
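The kind of audit that student researcher ran is conceptually simple: run the detector over a labeled test set and compare detection rates across groups. A minimal sketch (with made-up results, not Proctorio’s data) looks like this:

```python
# Minimal sketch of a per-group face-detection audit. The records below are
# invented; a real audit would run the detector over a labeled test set of images.
from collections import defaultdict

# Each record: (labeled group, whether the detector found a face in the image)
results = [
    ("white", True), ("white", True), ("white", True), ("white", False),
    ("Black", True), ("Black", False), ("Black", False), ("Black", False),
]

totals, detected = defaultdict(int), defaultdict(int)
for group, found in results:
    totals[group] += 1
    detected[group] += int(found)

for group, n in totals.items():
    print(f"{group}: detected {detected[group] / n:.0%} of faces ({detected[group]}/{n})")
# A model that finds 75% of white faces but only 25% of Black faces on the same
# test set is the kind of disparity the reporting on Proctorio described.
```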

The point really isn’t this specific software or these specific cases. The point is more that we have a technology that we know is discriminatory and that we know would still violate human rights if it were accurate…and yet it keeps getting more and more deeply embedded in public systems. None of these systems are transparent enough to tell us what facial identification model they use, or publish benchmarks and test results.

So much of what net.wars is about is avoiding bad technology law that sticks. In this case, it’s bad technology that is becoming embedded in systems that one day will have to be ripped out, and we are entirely ignoring the risks. On that day, our children and their children will look at us, and say, “What were you thinking? You did know better.”

Illustrations: The CCTV camera on George Orwell’s house at 22 Portobello Road, London.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Guarding the peace

Police are increasingly attempting to prevent crime by using social media targeting tools to shape public behavior, says a new report from the Scottish Institute for Policing Research (PDF) written by a team of academic researchers led by Ben Collier at the University of Edinburgh. There is no formal regulation of these efforts, and the report found many examples of what it genteelly calls “unethical practice”.

On the one hand, “behavioral change marketing” seems an undeniably clever use of new technological tools. If bad actors can use targeted ads to scam, foment division, and incite violence, why shouldn’t police use them to encourage the opposite? The tools don’t care whether you’re a Russian hacker targeting 70-plus white pensioners with anti-immigrant rhetoric or a charity trying to reach vulnerable people to offer help. Using them is a logical extension of the drive toward preventing, rather than solving, crime. Governments have long used PR techniques to influence the public, from benign health PSAs on broadcast media to Theresa May’s notorious, widely criticised, and unsuccessful 2013 campaign of van ads telling illegal immigrants to go home.

On the other hand, it sounds creepy as hell. Combining police power with poorly-evidenced assumptions about crime and behavior and risk and the manipulation and data gathering of surveillance capitalism…yikes.

The idea of influence policing derives at least in part from Cass R. Sunstein‘s and Richard H. Thaler‘s 2008 book Nudge. The “nudge theory” it promoted argued that the use of careful design (“choice architecture”) could push people into making more desirable decisions.

The basic contention seems unarguable; using design to push people toward decisions they might not make by themselves is the basis of many large platforms’ design decisions. Dark patterns are all about that.

Sunstein and Thaler published their theory at the post-financial crisis moment when governments were looking to reduce costs. As early as 2010, the UK’s Cabinet Office set up the Behavioural Insights Team to improve public compliance with government policies. The “Nudge Unit” has been copied widely across the world.

By 2013, it was being criticized for forcing job seekers to fill out a scientifically invalid psychometric test. In 2021, Observer columnist Sonia Sodha called its record “mixed”, deploring the expansion of nudge theory into complex, intractable social problems. In 2022, new research finding that nudges have little effect on personal behavior cast doubt on the whole idea.

The SIRP report cites the Government Communications Service, the outgrowth of decades of government work to gain public compliance with policy. The GCS itself notes its incorporation of marketing science and other approaches common in the commercial sector. Its 7,000 staff work in departments across government.

This has all grown up alongside the increasing adoption of digital marketing practices across the UK’s public sector, including the tax authorities (HMRC), the Department for Work and Pensions, and, especially, the Home Office – and alongside the rise of sophisticated targeting tools for online advertising.

The report notes: “Police are able to develop ‘patchwork profiles’ built up of multiple categories provided by ad platforms and detailed location-based categories using the platform targeting categories to reach extremely specific groups.”

The report’s authors used the Meta Ad Library to study the ads, the audiences and profiles police targeted, and the cost. London’s Metropolitan Police, which a recent scathing report found to be endemically racist and misogynist, was an early adopter and is the heaviest user of digitally targeted ads on Meta among the forces studied.

Many of the example campaigns these organizations run sound mostly harmless. Campaigns intended to curb domestic violence, for example, may aim at encouraging bystanders to come forward with concerns. Others focus on counter-radicalisation and security themes or, increasingly, preventing online harms and violence against women and girls.

As a particular example of the potential for abuse, the report calls out the Home Office Migrants on the Move campaign, a collaboration with a “migration behavior change” agency called Seefar. This targeted people in France seeking asylum in the UK and attempted to frighten them out of trying to cross the Channel in small boats. The targeting was highly specific, with many ads aimed at as few as 100 to 1,000 people, chosen for their language and recent travel in or through Brussels and Calais.
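To see how this kind of micro-targeting narrows an audience, it helps to think of stacked targeting categories as set intersection. This is a toy illustration with invented users and categories, not data from the report or from Meta:

```python
# Toy illustration: stacking ad-targeting categories is set intersection,
# so each added category makes the audience smaller and more specific.
# User IDs and category names are invented for illustration only.
speaks_arabic = {1, 2, 3, 4, 5, 6, 7, 8}
recently_travelled_via_calais = {3, 4, 5, 6, 9, 10}
recently_in_brussels = {4, 5, 6, 7, 10, 11}

audience = speaks_arabic & recently_travelled_via_calais & recently_in_brussels
print(f"Targeted audience: {sorted(audience)} ({len(audience)} people)")
# -> Targeted audience: [4, 5, 6] (3 people)
```

Layer two or three such categories over a platform with millions of users and audiences of a few hundred people become reachable, which is what makes ads aimed at 100 to 1,000 people possible.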

The report’s authors raise concerns: the harm implicit in frightening already-extremely vulnerable people, the potential for damaging their trust in authorities to help them, and the privacy implications of targeting such specific groups. In the report’s example, Arabic speakers in Brussels might see the Home Office ads but their French neighbors would not – and those Arabic speakers would be unlikely to be seeking asylum. The Home Office’s digital equivalent of May’s van ads, therefore, would be seen only by a selection of microtargeted individuals.

The report concludes: “We argue that this campaign is a central example of the potential for abuse of these methods, and the need for regulation.”

The report makes a number of recommendations including improved transparency, formalized regulation and oversight, better monitoring, and public engagement in designing campaigns. One key issue is coming up with better ways of evaluating the results. Surprise, surprise: counting clicks, which is what digital advertising largely sells as a metric, is not a useful way to measure social change.

All of these arguments make sense. Improving transparency in particular seems crucial, as does working with the relevant communities. Deterring crime doesn’t require tricks and secrecy; it requires collaboration and openness.

Illustrations: Theresa May’s notorious van ad telling illegal immigrants to go home.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon