In an article this week in the Guardian, Adrian Chiles asks what decisions today’s parents are making that their kids will someday look back on in horror, the way we look back on things from our own childhoods. His best example is probably riding in cars without seatbelts (which I’m glad to say I survived). Contrary to his suggestion, I don’t think tomorrow’s parents will look back and wish they hadn’t had smartphones. It is true, though, that last year an MP who is herself a parent (whose name I’ve lost) gave an impassioned speech opposing the UK’s just-passed Online Safety Act, in which she said she had taken risks on the Internet as a teenager that she wouldn’t want her kids to take now.
Some of that, though, is that times change consequences. I knew plenty of teens who smoked marijuana in the 1970s, but no one whose parents found them severely ill from overdoing it. Last week, the parent of a current 15-year-old told me he’d found exactly that. His kid had made the classic error (see several 2010s sitcoms) of not understanding how slowly gummies take effect. Fortunately, marijuana won’t kill you, as the parent discovered to his great relief after some frenzied online searching. Even in 1972 it was known that ingesting marijuana (in brownies, for example) makes it act more slowly. But the marijuana itself, by all accounts, was less potent then. It was, in that sense, safer (although: illegal, with all the risks that entails).
The usual excuse for disturbing levels of past parental risk-taking is “We didn’t know any better”, and a lot of the time that’s true. When today’s parents of teenagers were 12, no one had smartphones; when today’s parents were teens, their parents had grown up without Internet access at home; when my parents were teens, they didn’t have TV. New risks arrive with every generation, and with each new risk it takes time to understand the consequences of getting it wrong.
That is, however, no excuse for some of the decisions adults are making about systems that affect all of us. Also this week, and also at the Guardian, Akiko Hart, Liberty’s interim director, writes scathingly about government plans to expand the use of live facial recognition to track shoplifters. Under Project Pegasus, shops will use technology provided by Facewatch.
I first encountered Facewatch ten years ago at a conference on biometrics. Even then the company was talking about “cloud-based crime reporting” as a way to deter low-level crime, and even then there were questions about fairness. How long would shoplifters remain on a list of people to watch closely? What redress would there be if the system got it wrong? Facewatch’s attitude seemed to be simply that what it was doing wasn’t illegal, because its customer companies were sharing information across their own branches. What Hart is describing, however, is much worse: a state-backed automated system that will see ten major retailers upload their CCTV images for matching against police databases. Policing minister Chris Philp hopes to expand this into a national shoplifting database that would include the UK’s 45 million passport photos. Hart suggests tackling poverty instead.
Quite apart from how awful all that is, what I’m interested in here is the increased embedding in public life of technology we already know is flawed and discriminatory. Since 2013, myriad investigations have found the algorithms that power facial recognition to have been trained on unrepresentative databases that make them increasingly inaccurate as the subjects diverge from “white male”.
There are endless examples of misidentification leading to false arrests. Last month, a man pulled over on the road in Georgia filed a lawsuit after being arrested and held for several days for a crime committed in Louisiana, a state he had never even visited.
The point really isn’t this specific software or these specific cases. The point is that we have a technology we know is discriminatory – and that we know would still violate human rights even if it were accurate – and yet it keeps getting more deeply embedded in public systems. None of these systems is transparent enough to tell us what facial identification model it uses, or to publish benchmarks and test results.
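To make concrete what’s missing: a published benchmark would, at minimum, report error rates broken down by demographic group at the system’s actual operating threshold. Here’s a minimal sketch of such a benchmark in Python, using entirely hypothetical data and group labels (NIST’s face recognition vendor tests do this properly and at far greater depth):

```python
from collections import defaultdict

# Hypothetical labelled comparisons: (group, same_person, match_score).
comparisons = [
    ("group_a", False, 0.93),
    ("group_a", True,  0.97),
    ("group_b", False, 0.91),
    ("group_b", False, 0.86),
    # ...in practice, thousands of labelled pairs per demographic group
]

THRESHOLD = 0.90  # the operating point a deployer would have to disclose

def false_match_rate_by_group(pairs, threshold):
    """Share of impostor pairs (two different people) scoring above threshold."""
    impostors = defaultdict(int)
    false_matches = defaultdict(int)
    for group, same_person, score in pairs:
        if not same_person:          # impostor pair: two different people
            impostors[group] += 1
            if score >= threshold:   # the system would wrongly declare a match
                false_matches[group] += 1
    return {g: false_matches[g] / impostors[g] for g in impostors}

print(false_match_rate_by_group(comparisons, THRESHOLD))
```

If the per-group numbers diverge – as the investigations since 2013 consistently show they do – the system is telling you who it will get wrong.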
So much of what net.wars is about is avoiding bad technology law that sticks. In this case, it’s bad technology that is becoming embedded in systems that one day will have to be ripped out, and we are entirely ignoring the risks. On that day, our children and their children will look at us, and say, “What were you thinking? You did know better.”
Illustrations: The CCTV camera on George Orwell’s house at 22 Portobello Road, London.
In the latest example of corporate destruction, the Guardian reports on the disturbing trend in which streaming services like Disney and Warner Bros Discovery are deleting finished, even popular, shows for financial reasons. It’s like Douglas Adams’ rock star Hotblack Desiato spending a year dead for tax reasons.
Given that consumers’ budgets are stretched so thin that many are reevaluating which streaming services they pay for, you would think this would be the worst possible time to delete popular entertainments. Instead, the industry seems possessed by a death wish, making its offerings *less* attractive. Even worse, the promise the streamers appeared to offer showrunners was creative freedom and broad, permanent access to their work. The news that Disney+ is canceling even finished shows (Nautilus) shortly before their scheduled release in order to pay less *tax* should send a chill down every creator’s spine. No one wants to spend years of their life – for almost *any* amount of money – making things that wind up in the corporate equivalent of the warehouse at the end of Raiders of the Lost Ark.
It’s time, as the Masked Scheduler suggested recently on Mastodon, for the emergence of modern equivalents of creator-founded studios United Artists and Desilu.
***
Many of us were skeptical about Meta’s Oversight Board; it was easy to predict that Facebook would use it to avoid dealing with the PR fallout from controversial cases, but never relinquish control. And so it is proving.
This week, Meta overruled the Board’s recommendation of a six-month suspension of the Facebook account belonging to former Cambodian prime minister Hun Sen. At issue was a video of one of Hun Sen’s speeches, which everyone agreed incited violence against his opposition. Meta has kept the video up on the grounds of “newsworthiness”; it also declined to follow the Board’s recommendation to clarify its rules for public figures in “contexts in which citizens are under continuing threat of retaliatory violence from their governments”.
In the Platformer newsletter Casey Newton argues that the Board’s deliberative process is too slow to matter – it took eight months to decide this case, too late to save the election at stake or deter the political violence that has followed. Newton also concludes from the list of decisions that the Board is only “nibbling round the edges” of Meta’s policies.
A company with shareholders, a business model, and a king is never going to let an independent group make decisions that will profoundly shape its future. From Kate Klonick’s examination, we know the Board members are serious people prepared to think deeply about content moderation and its discontents. But they were always in a losing position. Now, even they must know that.
***
It should go without saying that anything that requires an Internet connection should be designed for connection failures, especially when the connected devices operate things in the physical world. The downside was made clear by the 2017 incident in which a lost signal meant a Tesla-owning venture capitalist couldn’t restart his car, and the one in 2021, when a bunch of Tesla owners found their phone app couldn’t unlock their car doors. Tesla’s solution both times was to tell car owners to make sure they always had their physical car keys. Which, fine, but then why have an app at all?
Last week, Bambu 3D printers began printing unexpectedly after being disconnected from the cloud. The software managing the queue of print jobs lost the ability to monitor them, causing some to be restarted multiple times. Given the heat and the extruded material 3D printers generate, this is dangerous to both the printers and their surroundings.
At TechRadar, Bambu’s PR acknowledges this: “It is difficult to have a cloud service 100% reliable all the time, but we should at least have designed the system more carefully to avoid such embarrassing consequences.” As TechRadar notes, if only embarrassment were the worst risk.
So, new rule: before installation, test every new “smart” device by blocking its Internet connection to see how it behaves. Of course, companies should do this themselves, but as we’ve seen, you can’t rely on that either.
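In the Bambu case specifically, jobs restarting when the connection returns is the signature of dispatch logic that isn’t idempotent. Here’s a minimal sketch of at-most-once dispatch in Python – a hypothetical design, not Bambu’s actual code; the state file and the send callback are made up:

```python
import json
import pathlib

# Local state that survives restarts and dropped cloud connections.
STATE_FILE = pathlib.Path("queue_state.json")

def load_state():
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}

def dispatch(job_id, send):
    """Send a print job at most once, even across disconnects and restarts."""
    state = load_state()
    if state.get(job_id) == "sent":
        return  # already dispatched: a reconnect must not restart the print
    state[job_id] = "sent"
    STATE_FILE.write_text(json.dumps(state))  # record intent *before* transmitting
    send(job_id)

# Usage: dispatch("job-42", printer_send) – calling it twice sends once.
```

The ordering is the point: because intent is recorded before transmission, a failure leaves the job unprinted rather than printed twice, and for a machine that generates heat and extrudes molten plastic, failing idle is the safer default.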
***
Finally, in “be careful what you legislate for”, Canada is discovering the downside of C-18, which became law in June and requires the biggest platforms to pay for the Canadian news content they host. Google and Meta warned all along that they would stop hosting Canadian news rather than pay for it. Experts such as law professor Michael Geist predicted that the bill would merely serve to dramatically cut traffic to news sites.
On August 1, Meta began blocking news links on Facebook and Instagram. A coalition of Canadian news outlets quickly asked the Competition Bureau to mount an inquiry into Meta’s actions. At TechDirt, Mike Masnick notes the irony: first legacy media said Meta’s linking to news was anticompetitive; now they say not linking is anticompetitive.
However, there are worse consequences. Prime minister Justin Trudeau complains that Meta’s news block is endangering Canadians, who can’t access or share up-to-date local information about the ongoing wildfires.
In a sensible world, people wouldn’t rely on Facebook for their news, politicians would write legislation with greater understanding, and companies like Meta would wield their power responsibly. In *this* world, we have a perfect storm.
A panel on Internet fragmentation at the UK Internet Governance Forum a couple of weeks ago was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter some site I can’t reach: email goes into someone’s spam folder, the site or its content is off-limits because it’s been geofenced to comply with copyright or data protection laws, or the site mysteriously doesn’t load, with no explanation. The most likely explanation for the last of these is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using, but they don’t actually *say* that.
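When a site mysteriously doesn’t load, a little triage can at least distinguish a resolver-level block from an HTTP-level one (geofenced sites, for instance, often answer with a 403). A rough sketch in Python – projects like OONI do this properly, with control measurements, so treat this as flavour rather than forensics:

```python
import socket
import urllib.error
import urllib.request

def probe(hostname):
    """Rough triage: does a site fail at the DNS level or the HTTP level?"""
    try:
        addr = socket.gethostbyname(hostname)  # ask the local resolver
    except socket.gaierror:
        return "DNS resolution failed (consistent with a resolver-level block)"
    try:
        with urllib.request.urlopen(f"http://{hostname}/", timeout=10) as resp:
            return f"reachable at {addr}, HTTP {resp.status}"
    except urllib.error.HTTPError as err:
        # Geofences and block pages often show up here as 403s.
        return f"resolves to {addr}, but the server answered HTTP {err.code}"
    except (urllib.error.URLError, TimeoutError) as err:
        return f"resolves to {addr}, but the connection failed: {err}"

print(probe("example.com"))
```

Mostly, of course, one doesn’t bother, and just gives up and goes elsewhere – which is exactly what fragmentation feels like.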
The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.
Here in the UK, this sense of fragmentation may be about to get a lot worse, as the long-heralded Online Safety bill – written and expanded until it’s become a “Frankenstein bill”, as Mark Scott and Annabelle Dickson report at Politico – hurtles toward passage. This week saw fruitless debates on amendments in the House of Lords; the bill will presumably be back in the Commons shortly, and could be passed into law by this fall.
A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; such threats are so often idle. But in this case I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that if it is required to age-verify its users it will withdraw rather than violate its principle of collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.
In a presentation at UKIGF, Ian McRae, director of market intelligence for the prospective online safety regulator Ofcom, showed that Wikipedia would be just one of an estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.
In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In fact: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.
Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.
And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.
The UK is not alone in pursuing this path. Age verification in particular is catching on: the US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it, and Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of the bill’s provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.
This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.
This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.
Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).
“What is the point of introducing contestability if the system is illegal?” a questioner asked, more or less, at this year’s Computers, Privacy, and Data Protection conference.
This question could have been asked in any number of sessions where tweaks to surface problems leave the underlying industry undisturbed. In fact, the questioner raised it during the panel on enforcement, GDPR, and the newly-in-force Digital Markets Act. Maria Luisa Stasi explained the DMA this way: it’s about business models. It’s a step into a deeper layer.
The key question: will these new laws – the DMA, the recent Digital Services Act, which came into force in November, the in-progress AI Act – be enforced better than GDPR has been?
The frustration has been building all five years of GDPR’s existence. Even though this week, Meta was fined €1.2 billion for transferring European citizens’ data to the US, Noyb reports that 85% of its 800-plus cases remain undecided, 58% of them for more than 18 months. Even that €1.2 billion decision took ten years, €10 million, and three cases against the Irish Data Protection Commissioner to push through – and will now be appealed. Noyb has an annotated map of the various ways EU countries make litigation hard. The post-Snowden political will that fueled GDPR’s passage has had ten years to fade.
It’s possible to find the state of privacy circa 2023 depressing. In the 30ish years I’ve been writing about privacy, numerous laws have been passed, privacy has become a widespread professional practice and area of study in numerous fields, and the number of activists has grown from a literal handful to tens of thousands around the world. But the big picture is one of escalating surveillance of all types and by all sorts of players. At the 2000 Computers, Freedom, and Privacy conference, Neal Stephenson warned not to focus on governments: watch the “Little Brothers”, he said. Google was then a tiny self-funded startup, and Mark Zuckerberg was 16. Stephenson was prescient.
And yet, that surveillance can be weirdly patchy. In a panel on children online, Leanda Barrington-Leach noted platforms’ selective knowledge: “How do they know I like red Nike trainers but don’t know I’m 12?” A partial answer came later: France’s CNIL has looked at age verification technologies and concluded that none are “mature enough” to both do the job and protect privacy.
In a discussion of deceptive practices, paraphrasing his recent paper, Mark Leiser pinpointed a problem: “We’re stuck with a body of law that looks at online interface as a thing where you look for dark patterns, but there’s increasing evidence that they’re being embedded in the systems architecture underneath and I’d argue we’re not sufficiently prepared to regulate that.”
As a response, Woody Hartzog and Neil Richards have proposed the concept of “data loyalty”. Similar to a duty of care, the “loyalty” in this case is owed by the platform to its users. “Loyalty is the requirement to make the interests of the trusted party [the platform] subservient to those of the trustee or vulnerable one [the user],” Hartzog explained. And the more vulnerable you are, the greater the obligation on the powerful party.
The tone was set early with a keynote from Julie Cohen that highlighted structural surveillance and warned against accepting the Big Tech mantra that more technology naturally brings improved human social welfare.
“What happens to surveillance power as it moves into the information infrastructure?” she asked. Among other things, she concluded, it disperses accountability, making it harder to challenge but easier to embed. And once embedded, well…look how much trouble people are having just digging Huawei equipment out of mobile networks.
Cohen’s comments resonate. A couple of years ago, when smart cities were the hot emerging technology, it became clear that many of the hyped ideas were only really relevant to large, dense urban areas. In smaller cities, there’s no scope for plotting more efficient delivery routes, for example, because there aren’t enough options. As a result, congestion is worse in a small suburban city than in Manhattan, where parallel routes draw off traffic. But even a small town has scope for surveillance, and so some of us concluded that this was the technology that would trickle down. This is exactly what’s happening now: the Fusus technology platform even boasts openly of bringing the surveillance city to the suburbs.
Laws will not be enough to counter structural surveillance. In a recent paper, Cohen wrote, “Strategies for bending the arc of surveillance toward the safe and just space for human wellbeing must include both legal and technical components.”
New approaches are also needed, as an unusual panel on sustainability showed, prompted by the computational and environmental costs of today’s AI. The discussion suggested a new convergence: the intersection, as Katrin Fritsch put it, of digital rights, climate justice, infrastructure, and sustainability.
In the deception panel, Rosamunde van Brakel similarly argued that we need a broader conception of surveillance harm, one that includes social harm, risks to society and democracy, and the climate impact of all these technologies. Surveillance, in other words, has environmental costs that everyone has ignored.
I find this convergence hopeful. The arc of surveillance won’t bend without the strength of allies.
Illustrations: CCTV camera at 22 Portobello Road, London, where George Orwell lived.
There is nowhere in the world, Brett Scott says in his recent book, Cloudmoney, where supermarkets price oatmeal in bitcoin. Even in El Salvador, where bitcoin became legal tender in 2021, what appear to be bitcoin prices are just the underlying dollar price refracted through bitcoin’s volatile exchange rate.
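The arithmetic makes the refraction point: the bitcoin sticker price moves with every swing in the exchange rate while the dollar price underneath stays put. A toy illustration in Python, with made-up numbers:

```python
# A hypothetical $4 box of oatmeal at three illustrative exchange rates:
dollar_price = 4.00

for btc_usd in (25_000, 27_500, 30_000):
    btc_price = dollar_price / btc_usd
    print(f"at ${btc_usd:,}/BTC the same box costs {btc_price:.8f} BTC")
```

Nothing about the oatmeal has changed; only the lens has.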
Fifteen years ago, when bitcoin was invented, its adherents thought that by now it would be a mainstream currency, not a niche, highly speculative instrument of financial destruction and facilitator of crime. Five years ago, the serious money people thought it important enough to consider fighting back with central bank digital currencies (CBDCs).
In 2019, Facebook announced Libra, a consortium-backed cryptocurrency that would enable payments on its platform, apparently to match China’s social media messaging system WeChat, which is used by a billion people every month. By 2021, when Facebook’s holding company renamed itself Meta, Libra had become “Diem”. In January 2022, Diem was sold to Silvergate Bank, which announced in February 2023 that it would wind down and liquidate its assets, a casualty of the FTX collapse.
As Dave Birch writes in his 2020 book, The Currency Cold War, it was around the time of Facebook’s announcement that central banks began exploring CBDCs. According to the Atlantic Council’s tracker, 114 countries are exploring CBDCs, and 11 have launched one. Two – Ecuador and Senegal – have canceled theirs. Plans are inactive in 15 more.
The tracker marks the EU, US, and UK as in development. The EU is quietly considering the digital euro. In the US, in March 2022, president Joe Biden issued an executive order that included instructions to research a digital dollar. In the UK, the Bank of England has an open consultation on the digital pound (it closes June 7) and will not make a decision until at least 2025, after completing technical development of proofs of concept and the necessary architecture. The earliest we’d see a digital pound is around 2030.
But first: the BoE needs a business case. In 2021, the House of Lords issued a report (PDF) calling the digital pound a “solution in search of a problem” and concluding, “We have yet to hear a convincing case for why the UK needs a retail CBDC.” Note “retail”. Wholesale, for use only between financial institutions, may have clearer benefits.
Some of the imagined benefits of CBDCs are familiar: better financial inclusion, innovation, lowered costs, and improved efficiency. Others are more arcane: replicating the role of cash to anchor the monetary system in a digital economy. That’s perhaps the strongest argument, in that today’s non-cash payment options are commercial products but cash is public infrastructure. Birch suggests that the digital pound could allow individuals to hold accounts at the BoE. These would be as risk-free as cash and potentially open to those underserved by the banking system.
Many of these benefits will be lost on most of us. People who already have bank accounts or modern financial apps are unlikely to care about a direct account with the BoE, especially if, as Birch suggests, one “innovation” they might allow is negative interest rates. More important, what is the difference between pounds as numbers in cyberspace and pounds as fancier numbers in cyberspace? For most of us, our national currencies are already digital, even if we sometimes convert some of it into physical notes and coins. The big difference – and part of what they’re fighting over – is who owns the transaction data.
At Rest of World, Temitayo Lawal recounts the experience in Nigeria, the first African country to adopt a CBDC. Launched 18 months ago, the eNaira has been tried by only 0.5% of the population and used for just 1.4 million transactions. Among the reasons Lawal finds: the eNaira lacks the flexibility and sophistication of independent cryptocurrencies; younger Nigerians see little advantage over the apps they already use; 30 million Nigerians (about 13% of the population) lack Internet access; and most people don’t want to entrust their financial information to their government. By comparison, over the same period Nigerians traded $1.16 billion in bitcoin on the peer-to-peer platform Paxful.
Many of these factors play out the same way elsewhere. From 2014 to 2018, Ecuador operated Dinero Electrónico, a mobile payment system that allowed direct transfer of US dollars and aimed to promote financial inclusion. In a 2020 paper, researchers found DE never reached critical mass: it didn’t offer enough incentive for adoption, was opposed by the commercial banks, and lacked a sufficient supporting ecosystem for cashing in and out. In China, which launched its CBDC in August 2020, the e-CNY is rarely used because, the Economist reports, Alipay and WeChat work well enough that retailers see no need to accept it. The Bahamian sand dollar has gained little traction. Denmark and Japan have dropped the idea entirely, as has Finland, although it supports the idea of a digital euro.
The good news, such as it is, is that by the time Western countries are ready to make a decision either some country will have found a successful formula that can be adapted, or everyone who’s tried it will have failed and the thing can be shelved until it’s time to rediscover it. That still leaves the problem that Scott warns of: a cashless society will give Big Tech and Big Finance huge power over us. We do need an alternative.