The themes at this week’s Scrambling for Safety, hosted by the Foundation for Information Policy Research, are topical but have recurred since the original 1997 event: chat control, the Online Safety Act, and AI in government decision making.
The EU’s chat control proposal would require platforms served with a detection order to scan people’s phones for both new and previously known child sexual abuse material – client-side scanning. Robin Wilton prefers to call this “preemptive monitoring” to clarify that it’s an attack.
Yet it’s not fit even for its stated purpose, as Claudia Peersman showed, based on research conducted at REPHRAIN. They set out to develop a human-centric evaluation framework for the AI tools needed at the scale chat control would require. Their main conclusion: AI tools are not ready to be deployed on end-to-end-encrypted private communications. This was also Ross Anderson’s argument in his 2022 paper on chat control (PDF) showing why it won’t meet the stated goals. Peersman also noted an important oversight: none of the stakeholder groups consulted in developing these tools includes the children they’re supposed to protect.
This led Jen Persson to ask: “What are we doing to young people?” Children may not understand encryption, she said, but they do know what privacy means to them, as numerous researchers have found. If violating children’s right to privacy by dismantling encryption means ignoring the UN Convention on the Rights of the Child, “What world are we leaving for them? How do we deal with a lack of privacy in trusted relationships?”
All this led Wilton to comment that if the technology doesn’t work, that’s hard evidence that it is neither “necessary” nor “proportionate”, as human rights law demands. Yet, Persson pointed out, legislators keep passing laws that technologists insist are unworkable. Studies in both France and Australia have found that there is no viable privacy-preserving age verification technology – but the UK’s Online Safety Act (2023) still requires it.
In both examples – and in introducing AI into government decision making – a key element is false positives, which swamp human adjudicators in any large-scale automated system. In outlining the practical problems with the Online Safety Act, Graham Smith cited the recent case of Marieha Hussein, who carried a placard at a pro-Palestinian protest that depicted former prime minister Rishi Sunak and former home secretary Suella Braverman as coconuts. After two days of evidence, the judge concluded the placard was (allowed) political satire rather than (criminal) racial abuse. What automated system can understand that the same image means different things in different contexts? What human moderator has two days? Platforms will simply remove content that would never have led to a conviction in court.
Or, Monica Horten asked, how does a platform identify the new offense of coercive control?
Lisa Sugiura, who campaigns to end violence against women and girls, had already noted that the same apps parents install so they can monitor their children (and are reluctant to give up later) are openly advertised with slogans like “Use this to check up on your cheating wife”. (See also Cindy Southworth, 2010, on stalker apps.) The dots connect to reports Persson heard at last week’s Safer Internet Forum that young women find it hard to refuse when potential partners want parental-style monitoring rights, and then find it even harder to extricate themselves from abusive situations.
Design teams don’t count the cost of this sort of collateral damage, their companies bear little liability for the human cost of false positives, and the narrow lens of child safety ignores these wider costs. Yet those costs can be staggering: CALEA, the 1990s US law requiring ISPs to facilitate wiretapping, created the vulnerability that enabled wide-scale Chinese spying in 2024.
Wilton called laws that essentially treat all of us as suspects “a rule to make good people behave well, instead of preventing bad people from behaving badly”. Big organized crime cases like Silk Road, Encrochat, and Sky ECC relied on infiltration, not on breaking encryption. Once upon a time, veterans know, there were four horsemen always cited by proponents of such laws: organized crime, drug dealers, terrorists, and child abusers. We hear little about the first three these days.
All of this will take new forms as the new government adopts AI in decision making with the same old hopes: increased efficiency, lowered costs. Government is not learning from the previous waves of techno-utopianism, which brought us things like the Post Office Horizon scandal, said Gavin Freeguard. Under data protection law we were “data subjects”; now we are becoming “decision subjects” whose voices are not being heard.
There is some hope: Swee Leng Harris sees improvements in the reissued data bill, though she stresses that it’s important to remind people that the “cloud” is really material data centers that consume energy (and use water) at staggering rates (see also Kate Crawford’s book, Atlas of AI). It’s no help that UK ministers and civil servants move on to other jobs at pace, ensuring there is no accountability. As Sam Smith said, computers have made it possible to do things faster – but also to go wrong faster at a much larger scale.
Illustrations: Time magazine’s 1995 “Cyberporn” cover, the first children and online pornography scare, based on a fraudulent study.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.