The arc of surveillance

“What is the point of introducing contestability if the system is illegal?” a questioner asked – more or less – at this year’s Computers, Privacy, and Data Protection conference.

This question could have been asked in any number of sessions where tweaks to surface problems leave the underlying industry undisturbed. In fact, the questioner raised it during the panel on enforcement, GDPR, and the newly-in-force Digital Markets Act. Maria Luisa Stasi explained the DMA this way: it’s about business models. It’s a step into a deeper layer.

The key question: will these new laws – the DMA, the recent Digital Services Act, which came into force in November, the in-progress AI Act – be enforced better than GDPR has been?

The frustration has been building throughout the five years of GDPR’s existence. Even though Meta was fined €1.2 billion this week for transferring European citizens’ data to the US, Noyb reports that 85% of its 800-plus cases remain undecided, 58% of them for more than 18 months. Even that €1.2 billion decision took ten years, €10 million, and three cases against the Irish Data Protection Commissioner to push through – and will now be appealed. Noyb has an annotated map of the various ways EU countries make litigation hard. The post-Snowden political will that fueled GDPR’s passage has had ten years to fade.

It’s possible to find the state of privacy circa 2023 depressing. In the 30ish years I’ve been writing about privacy, numerous laws have been passed, privacy has become a widespread professional practice and area of study in numerous fields, and the number of activists has grown from a literal handful to tens of thousands around the world. But overall the big picture is one of escalating surveillance of all types and by all sorts of players. At the 2000 Computers, Freedom, and Privacy conference, Neal Stephenson warned not to focus on governments. Watch the “Little Brothers”, he said. Google was then a tiny self-funded startup, and Mark Zuckerberg was 16. Stephenson was prescient.

And yet, that surveillance can be weirdly patchy. In a panel on children online, Leanda Barrington-Leach noted platforms’ selective knowledge: “How do they know I like red Nike trainers but don’t know I’m 12?” A partial answer came later: France’s CNIL has looked at age verification technologies and concluded that none are “mature enough” to both do the job and protect privacy.

In a discussion of deceptive practices, paraphrasing his recent paper, Mark Leiser pinpointed a problem: “We’re stuck with a body of law that looks at online interface as a thing where you look for dark patterns, but there’s increasing evidence that they’re being embedded in the systems architecture underneath and I’d argue we’re not sufficiently prepared to regulate that.”

As a response, Woody Hartzog and Neil Richards have proposed the concept of “data loyalty”. Similar to a duty of care, the “loyalty” in this case is owed by the platform to its users. “Loyalty is the requirement to make the interests of the trusted party [the platform] subservient to those of the trustee or vulnerable one [the user],” Hartzog explained. And the more vulnerable you are, the greater the obligation on the powerful party.

The tone was set early with a keynote from Julie Cohen that highlighted structural surveillance and warned against accepting the Big Tech mantra that more technology naturally brings improved human social welfare.

“What happens to surveillance power as it moves into the information infrastructure?” she asked. Among other things, she concluded, it disperses accountability, making it harder to challenge but easier to embed. And once embedded, well…look how much trouble people are having just digging Huawei equipment out of mobile networks.

Cohen’s comments resonate. A couple of years ago, when smart cities were the hot emerging technology, it became clear that many of the hyped ideas were only really relevant to large, dense urban areas. In smaller cities, there’s no scope for plotting more efficient delivery routes, for example, because there aren’t enough options. As a result, congestion is worse in a small suburban city than in Manhattan, where parallel routes draw off traffic. But even a small town has scope for surveillance, and so some of us concluded that this was the technology that would trickle down. This is exactly what’s happening now: the Fusus technology platform even boasts openly of bringing the surveillance city to the suburbs.

Laws will not be enough to counter structural surveillance. In a recent paper, Cohen wrote, “Strategies for bending the arc of surveillance toward the safe and just space for human wellbeing must include both legal and technical components.”

And new approaches, as was shown by an unusual panel on sustainability prompted by the computational and environmental costs of today’s AI. This discussion suggested a new convergence: the intersection, as Katrin Fritsch put it, of digital rights, climate justice, infrastructure, and sustainability.

In the deception panel, Rosamunde van Brakel similarly said we need to adopt a broader conception of surveillance harm, one that includes social harms, risks to society and democracy, and the climate impact of using all these technologies. Surveillance, in other words, has environmental costs that everyone has ignored.

I find this convergence hopeful. The arc of surveillance won’t bend without the strength of allies.

Illustrations: CCTV camera at 22 Portobello Road, London, where George Orwell lived.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Appropriate privacy

At a workshop this week, one of the organizers posed a question that included the term “appropriate”. As in: “lawful access while maintaining appropriate user privacy”. We were there to think about approaches that could deliver better privacy and security over the next decade, with privacy defined as “the embedding of encryption or anonymization in software or devices”.

I had to ask: What work is “appropriate” doing in that sentence?

I had to ask because last weekend’s royal show was accompanied by preemptive arrests well before events began – at 7:30 AM. Most of those arrested were anti-monarchy protesters armed with luggage straps and placards, climate change protesters whose T-shirts said “Just Stop Oil”, and volunteers for the Night Stars, held on suspicion that the rape whistles they hand out to vulnerable women might be used to disrupt the parading horses. All of these had coordinated with the Metropolitan Police in advance or actually worked with them…which made no difference. All were held for many hours. Since then, the news has broken that an actual monarchist was arrested, DNA-sampled, fingerprinted, and held for 13 hours just for standing *near* some protesters.

It didn’t help the look of the thing that several days before the Big Show, the Met tweeted a warning: “Our tolerance for any disruption, whether through protest or otherwise, will be low.”

The arrests were facilitated by the last-minute passage of the Public Order Act just days before, with the goal of curbing “disruptive” protests. Among the now-banned practices is “locking on” – that is, locking oneself to a physical structure, a tactic the suffragettes used, among many others, in campaigning for women’s right to vote. Because that right is now so thoroughly accepted, we tend to forget how radical and militant the suffragettes had to be to get their point across and how brutal the response was. A century from now, the mainstream may look back and marvel at the treatment meted out to climate change activists. We all know they’re *right*, whether or not we like their tactics.

Since the big event, the House of Lords has published its report on the current legislation. The government is seeking to expand the Public Order Act even further by lowering the bar for “serious disruption” from “significant” and “prolonged” to “more than minor”, and the new definition may include the cumulative impact of repeated protests in the same area. The House of Lords is unimpressed by these amendments via secondary legislation, first because of their nature, and second because they were rejected during the scrutiny of the original bill, which itself is only days old. Secondary legislation gets looked at less closely; the Lords suggest that using this route to bring back rejected provisions “raises possible constitutional issues”. All very polite language for accusing the government of abusing the system.

In the background, we’re into the fourth decade of the same argument between governments and technical experts over encryption. Technical experts by and large take the view that opening a hole for law enforcement access to encrypted content fatally compromises security; law enforcement by and large longs for the old days when they could implement a wiretap with a single phone call to a major national telephone company. One of the technical experts present at the workshop phrased all this gently by explaining that providing access enlarges the attack surface, and the security of such a system will always be weaker because there are more “moving parts”. Adding complexity always makes security harder.

This is, of course, a live issue because of the Online Safety bill, a sprawling mess of 262 pages that includes a requirement to scan public and private messaging for child sexual abuse material, whether or not the communications are encrypted.

None of this is the fault of the workshop we began with, which is part of a genuine attempt to find a way forward on a contentious topic, and whose organizers didn’t have any of this in mind when they chose their words. But hearing “appropriate” in that way at that particular moment raised flags: you can justify anything if the level of disruption that’s allowed to trigger action is vague and you’re allowed to use “on suspicion of” indiscriminately as an excuse. “Police can do what they want to us now,” George Monbiot writes at the Guardian of the impact of the bill.

Lost in the upset about the arrests was the Met’s decision to scan the crowds with live facial recognition. It’s impossible to overstate the impact of this technology. There will be no more recurring debates about ID cards because our faces will do the job. Nothing has been said about how the Met used it on the day, whether its use led to arrests (or on what grounds), or what the Met plans to do with the collected data. The police – and many private actors – have certainly inhaled the Silicon Valley ethos of “ask forgiveness, not permission”.

In this direction of travel, many things we have taken for granted as rights become privileges that can be withdrawn at will, and what used to be public spaces open to all become restricted like an airport or a small grocery store in Whitley Bay. This is the sliding scale in which “appropriate user privacy” may be defined.

Illustrations: Protesters at the coronation (by Alisdair Hickson at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Strike two

Whatever happens with the Hollywood writers’ strike that began on Tuesday, the recent golden era of American TV, which arguably began with The Sopranos, is ending for viewers as well as creators.

A big reason for that golden era was that Hollywood’s loss of interest in grown-up movies pushed actors and writers who formerly looked down on TV to move across to where the more interesting work was finding a home. Another was the advent of streaming services, which competed with existing channels by offering creators greater freedom – and more money. It was never sustainable.

Streaming services’ business models are different. For nearly a decade, Netflix depended on massive debt to build a library to protect itself when the major studios ended their licensing deals. The company has so far gotten away with it because of (now ended) low interest rates and Wall Street’s focus on subscriber numbers in valuing its shares. Newer arrivals such as Amazon, Apple, and Disney can all finance loss-making startup streaming services from their existing businesses. All of these are members of the Alliance of Motion Picture and Television Producers, along with broadcast networks, cable providers, and motion picture studios. For the purposes of the strike, they are the “enemy”.

This landscape could not be more different from that of the last writers’ strike, in 2007-2008, when DVD royalties were important and streaming was the not-yet future. Of the technology companies refusing to bargain today, only Netflix was a player in 2007 – and it was then sending out DVDs by mail.

Essentially, what is happening to Hollywood writers is what happened to songwriters when music streaming services took over the music biz: income shrinkage. In 2021, veteran screenwriter Ken Levine detailed his persistently shrinking residuals (declining royalties paid for reuse). When American Airlines included an episode he directed of Everybody Loves Raymond in its transcontinental in-flight package for six months, his take from the thousands of airings was $1.19. He also documented, until he ended his blog in 2022, other ways writers are being squeezed; at Disconnect, Paris Marx provides a longer list. The Writers Guild of America’s declared goals are to redress these losses and bring residuals and other pay on streaming services into line with older broadcasters.

Even an outsider can see the bigger picture: broadcast networks, traditionally the biggest payers, are watching their audiences shrink and retrenching, and cable and streaming services commission shorter seasons, which they renew at a far more leisurely pace. Also a factor is the shift in which broadcast networks re-air their new shows a day or two later on their streaming services. The DVD royalties that mattered in the 2007-2008 strike are dying away, and, just as in music, royalties from streaming are a fraction of the amount. Overall, the WGA says that in the last decade writers’ average incomes have dropped by 4% – 23% if you account for inflation. Meanwhile, industry profits have continued to rise.

The new issue on the block is AI – not because large language models are good enough to generate good scripts (as if), but because writers fear the studios will use them to generate crappy scripts and then demand that the writers rewrite them into quality for a pittance. Freelance journalists have already reported seeing publishers try this gambit.

In 2007, and again in 2017, Levine noted that the studios control the situation. They can make a deal and end the strike any time they decide it’s getting too expensive or disruptive. Eventually, he said, the AMPTP will cut a deal, writers will get some of what they need, and everyone will go back to work. Until then, the collateral damage will mount for writers, for staff in adjacent industries, and for California’s economy. At Business Insider, Lucia Moses suggests that Netflix, Amazon, and Disney all have enough content stockpiled to see them through.

Longer-term, there will be less predictable consequences. After 2007-2008, as Leigh Blickley reported in a ten-years-later lookback at the Huffington Post, these included the boom in “unscripted” reality TV and the death of pathways into the business for new writers.

Underlying all this is a simple but fundamental change. Broadcast networks cared what Americans watched because their revenues depended on attracting large audiences that advertisers would pay to reach. Until VCRs arrived to liberate us from the tyranny of schedules, the networks competed on the quality and appeal of their programming in each time slot. Streaming services compete on their whole catalogue, and care only that you subscribe; ratings don’t count.

The WGA warns that the studios’ long-term goal is to turn screenwriting into gig economy work. In 2019, at BIG, Matt Stoller warned that Netflix was predatorily killing Hollywood, first by using debt financing to corner the market, and second by vertically integrating its operation. Like the studios that were forced to divest their movie theaters in 1948, Netflix, Amazon, and Apple each own content, control its distribution, and sell retail access. It should be no surprise if a vertically integrated industry with a handful of monopolistic players cuts costs by treating writers the way Uber treats drivers: enshittification.

The WGA’s 12,000 members know their skills, which underpin a trillion-dollar industry, are rare. They have a strong union and a long history of solidarity. If they can’t win against modern corporate extraction, what hope for the rest of us?

Illustrations: WGA members picketing in 2007 (by jengod at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

The privacy price of food insecurity

One of the great unsolved questions continues to be: what is my data worth? Context is always needed: worth to whom, under what circumstances, for what purpose? Still, supermarkets may give us a clue.

At Novara Media, Jake Hurfurt, who runs investigations for Big Brother Watch, has been studying supermarket loyalty cards. He finds that increasingly only loyalty card holders have access to special offers, which used to be open to any passing customer.

Tesco now and Sainsburys soon, he says, “are turning the cost-of-living crisis into a cost-of-privacy crisis”.

Neat phrasing, but I’d say it differently: these retailers are taking advantage of the cost-of-living crisis to extort desperate people into giving up their data. The average value of the discounts might – for now – give a clue to the value supermarkets place on it.

But not for long, since the pattern going forward is a predictable one of monopoly power: as the remaining supermarkets follow suit and smaller independent shops thin out under the weight of rising fuel bills and shrinking margins, and people have fewer choices, the savings from the loyalty card-only special offers will shrink. Not so much that they won’t be worth having, but it seems obvious they’ll be more generous with the discounts – if “generous” is the word – in the sign-up phase than they will once they’ve achieved customer lock-in.

The question few shoppers are in a position to answer while they’re trying to lower the cost of filling their shopping carts is what the companies do with the data they collect. BBW took the time to analyze Tesco’s and Sainsburys’ privacy policies, and found that besides identity data they collect detailed purchase histories as well as bank account and payment information…which they share with “retail partners, media partners, and service providers”. In Tesco’s case, these include Facebook, Google, and, for those who subscribe to them, Virgin Media and Sky. Hyper-targeted personal ads right there on your screen!

All that sounds creepy enough. But consider what could well come next. Also this week, a cross-party group of 50 MPs and peers, in a letter cosigned by BBW, Privacy International, and Liberty, wrote to Frasers Group deploring that company’s use of live facial recognition in its stores, which include Sports Direct and the department store chain House of Fraser. Frasers Group’s purpose, like that of the retailers and pub chains trialing such systems a decade ago, is effectively to keep out people suspected of shoplifting and bad behavior. Note that’s “suspected”, not “convicted”.

What happens as these different privacy invasions start to combine?

A store equipped with your personal shopping history and financial identity plus live facial recognition cameras knows the instant you walk in who you are, what you like to buy, and how valuable a customer you are. Such a system, equipped with some sort of scoring, could make very fine judgments. Such as: this customer is suspected of stealing another customer’s handbag, but they’re highly profitable to us, so we’ll let that go. Or: this customer isn’t suspected of anything much, but they look scruffy and although they browse they never buy anything – eject! Or even: this journalist wrote a story attacking our company. Show them the most expensive personalized prices. One US entertainment company is already using live facial recognition to bar entry to its venues to anyone who works for any law firm involved in litigation against it. Britain’s data protection laws should protect us against that sort of abuse, but will they survive the upcoming bonfire of retained EU law?

And, of course, what starts with relatively anodyne product advertising becomes a whole lot more sinister when it starts getting applied to politics, voter manipulation and segmentation, and “pre-crime” systems.

Add the possibility of technology that allows retailers to display personalized pricing in-store, just as an online retailer can in the privacy of your own browser. Could we get to a scenario where a retailer, able to link your real-world identity and purchasing power to your online and offline movements, could calculate in detail what you’d be willing to pay for a particular item? What would surge pricing for the last remaining stock of the year’s hottest toy on Christmas Eve look like?

This idea allows me to imagine shopping partnerships, in which members compare the prices they’re offered and the one quoted the cheapest price buys that item for the whole group. In this dystopian future, I imagine such gambits would be banned.

Most of this won’t affect people rich enough to grandly refuse to sign up for loyalty cards, and none of it will affect people rich and eccentric enough to source everything from local, independent shops – and, if they’re allowed, pay cash.

Four years ago, Jaron Lanier toured with the proposal that we should be paid for contributing to commercial social media sites. The problem with this idea was and is that payment creates a perverse incentive for users to violate their own privacy even more than they do already, and that fair payment can’t be calculated when the consequences of disclosure are perforce unknown.

The supermarket situation is no different. People need food security and affordability. They should not have to pay for that with their privacy.

Illustrations: London supermarket checkout, 2006 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Excluding the vote

“You have to register at home, where your parents live,” said the clerk at the Board of Elections office.

I was 18, and registering to vote for the first time. It was 1972.

“I don’t live there,” I said. “I live here.” “Here” was Ithaca, NY, a town that, I learned later, was hyper-conscious that college students – Cornell, Ithaca College – outnumbered local residents. They didn’t want us interlopers overwhelming their preferences.

We had a couple more back-and-forths like this, and then she picked up the phone and called the state authorities in Albany for an official ruling. I knew – or thought I knew – that the law was on my side.

It was. I registered. I voted.

In about a month, the UK will hold local elections. For the first time, anyone presenting themselves to vote at the polls will be required to show an ID card with a photograph. This is a policy purely imported from American Republicans, and it has no basis in necessity. The Electoral Commission, in recommending its introduction, admitted that the issue was public perception. The big issues with respect to elections are around dark money and the processes by which candidates are chosen.

For 49 days in the fall of 2022, Liz Truss served as prime minister; she was chosen by 81,326 Tory party members. Out of the country’s roughly 68 million people, only 141,725 (out of an estimated 172,000 party members) voted in that contest because, since the Conservatives had decisively won the 2019 election, they were just electing a new leader. Rishi Sunak was voted in by 202 MPs.

The government’s proximate excuse for bringing in voter ID is the fraud-riddled May 2014 mayoral election in the London borough of Tower Hamlets. Four local residents risked their own money to challenge the outcome, and in 2015 won an Election Court ruling voiding the election and barring the cheating winner from standing for public office for five years. Their complaints included vote-rigging, false statements made by the winning candidate about his rival, bribery, and religious influence.

The High Court of Justice’s judgment in the case says: “…in practice, where electoral malpractice is established, particularly in the field of vote-rigging, it is very rare indeed to find members of the general public engaging in DIY vote-rigging on behalf of a candidate. Generally speaking, if there is widespread personation or false registration or misuse of postal votes, it will have been organised by the candidate or by someone who is, in law, his agent.”

Surely a more logical response to the Tower Hamlets case would be to make it easier – or at least quicker – for individuals to challenge election results and examine ways to ensure better behavior by *candidates*, not voters.

The judgment also notes that personation – assuming someone else’s identity in order to vote – was far more of a risk when fewer people qualified to vote. There followed a long period when it was too labor-intensive for too little reward; you need a lot of impersonators to change the result. In recent years, however, postal voting has made it viable again; in two wards of a 2008 Birmingham election Labour candidates committed 15 types of fraud involving postal ballots. The election in those two wards was re-run.

In his book Security Engineering, Cambridge professor Ross Anderson notes that the likelihood that expanded use of postal ballots would open the way for vote-buying and intimidation was predicted even as first Margaret Thatcher and then Tony Blair pursued the policy. But the main point is clear: the big problem is postal ballots, and you can’t solve it by requiring voter ID from those who vote in person. It’s the wrong threat model. As Anderson observes, “…it’s typically the incumbent who tweaks the laws, buys the voting machines, and creates as many advantages for their own side, small and large, as the local political culture will tolerate.”

But voter ID is the policy that Boris Johnson used his 80-seat majority to push through in the form of the Elections Act (2022), which also weakens the independence of the Electoral Commission. As the bill went through Parliament, estimates were that about 3.5 million people lacked any qualifying form of ID, and that those 3.5 million skew heavily toward people who are not expected to vote Conservative.

This was all maddening enough – and then they published the list of acceptable forms of ID. Tl;dr: the list blatantly skews in favor of older and richer people, who are presumed to be more likely to vote Conservative. Passports, driving licenses, and travel passes for people 60+ are all acceptable. Student ID cards and travel cards and passes are not. The government says they are not secure enough, a bit like saying a lock on the door is pointless because it’s not a burglar alarm.

There is a scheme for issuing free voter cards; applications must be in by April 25. People can also vote by post or by proxy without ID. And there are third parties pushing paid ID cards, too. But what it comes down to is next month a bunch of people are going to go to vote and will be barred. And this from the same people who wanted online voting to “increase access”.

Illustrations: London polling station, 2017 (by Mramoeba at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Unclear and unpresent dangers

Monthly computer magazines used to fret that their news pages would be out of date by the time the new issue reached readers. This week in AI, a blog posting is out of date before you hit send.

This – Friday – morning, the Italian data protection authority, Il Garante, ordered ChatGPT to stop processing the data of Italian users until it complies with the General Data Protection Regulation. Il Garante’s objections, per Apple’s translation posted by Ian Brown: ChatGPT provides no legal basis for collecting and processing the massive store of personal data used to train the model, and it fails to filter out users under 13.

This may be the best possible answer to the complaint I’d been writing below.

On Wednesday, the Future of Life Institute published an open letter calling for a six-month pause on developing systems more powerful than Open AI’s current state of the art, GPT-4. Apart from Elon Musk, Steve Wozniak, and Skype co-founder Jaan Tallinn, most of the signatories are unfamiliar names to most of us, though the companies and institutions they represent aren’t – Pinterest, the MIT Center for Artificial Intelligence, UC Santa Cruz, Ripple, ABN-Amro Bank. Almost immediately, there was a dispute over the validity of the signatures.

My first reaction was on the order of: huh? The signatories are largely people who are inventing this stuff. They don’t have to issue a call. They can just *stop*, work to constrain the negative impacts of the services they provide, and lead by example. Or isn’t that sufficiently performative?

A second reaction: what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, these teams have been axed or cut at Microsoft and Twitch; Twitter, of course, ditched such fripperies last November in Musk’s inaugural wave of cost-cutting. The letter does not call for these teams to be reinstated.

The problem, as familiar critics such as Emily Bender pointed out almost immediately, is that the threats the letter focuses on are distant not-even-thunder. As she went on to say in a Twitter thread, the artificial general intelligence of the Singularitarians’ rapture is nowhere in sight. By focusing on distant threats – longtermism – we ignore the real and present problems whose roots are being embedded ever more deeply into the infrastructure now being built: exploited workers, culturally appropriated data, lack of transparency around the models and algorithms used to build these systems…basically, all the ways they impinge upon human rights.

This isn’t the first time such a letter has been written and circulated. In 2015, Stephen Hawking, Musk, and about 150 others similarly warned of the dangers of the rise of “superintelligences”. Just a year later, in 2016, Pro Publica investigated the algorithm behind COMPAS, a risk-scoring criminal justice system in use in US courts in several states. Under Julia Angwin‘s scrutiny, the algorithm failed at both accuracy and fairness; it was heavily racially biased. *That*, not some distant fantasy, was the real threat to society.

“Threat” is the key issue here. This is, at heart, a letter about a security issue, and solutions to security issues are – or should be – responses to threat models. What is *this* threat model, and what level of resources to counter it does it justify?

Today, I’m far more worried by the release onto public roads of Teslas running Full Self Drive helmed by drivers with an inflated sense of the technology’s reliability than I am about all human work being wiped away any time soon. This matters because, as Jessie Singer, author of There Are No Accidents, keeps reminding us, what we call “accidents” are the results of policy decisions. If we ignore the problems we are presently building in favor of fretting about a projected fantasy future, that, too, is a policy decision, and the collateral damage is not an accident. Can’t we do both? I imagine people saying. Yes. But only if we *do* both.

In a talk this week for a French international research group, Edwards discussed the EU’s in-progress AI Act. This effort began well before today’s generative tools exploded into public consciousness, and isn’t likely to conclude before 2024. It is, therefore, much more focused on the kinds of risks attached to public sector scandals like COMPAS and those documented in Cathy O’Neil’s 2017 book Weapons of Math Destruction, which laid bare the problems with algorithmic scoring that has little to tether it to reality.

With or without a moratorium, what will “AI” look like in 2024? It has changed out of recognition just since the last draft text was published. Prediction from this biological supremacist: it still won’t be sentient.

All this said, as Edwards noted, even if the letter’s proposal is self-serving, a moratorium on development is not necessarily a bad idea. It’s just that if the risk is long-term and existential, what will six months do? If the real risk is the hidden continued centralization of data and power, then those six months could be genuinely destructive. So far, it seems like its major function is as a distraction. Resist.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011. It has since failed to transform health care.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Performing intelligence

“Oh, great,” I thought when news broke of the release of GPT-4. “Higher-quality deception.”

Most of the Internet disagreed; having gone mad only a few weeks ago over ChatGPT, everyone’s now agog over this latest model. It passed all these tests!

One exception was the journalist Paris Marx, who commented on Twitter: “It’s so funny to me that the AI people think it’s impressive when their programs pass a test after being trained on all the answers.”

Agreed. It’s also so funny to me that they call that “AI” and don’t like it when researchers like computational linguist Emily Bender call it a “stochastic parrot”. At Marx’s Tech Won’t Save Us podcast, Goldsmiths professor Dan McQuillan, author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence, calls it a “bullshit engine” whose developers’ sole goal is plausibility – plausibility that, as Bender has said, allows us imaginative humans to think we detect a mind behind it, and the result is to risk devaluing humans.

Let’s walk back to an earlier type of system that has been widely deployed: benefits scoring systems. A couple of weeks ago, Lighthouse Reports and Wired magazine teamed up on an investigation of these systems, calling them “suspicion machines”.

Their work focuses on the welfare benefits system in use in Rotterdam between 2017 and 2021, which uses 315 variables to risk-score benefits recipients according to the likelihood that their claims are fraudulent. In detailed, worked case analyses, they find systemic discrimination: you lose points for being female, for being female and having children (males aren’t asked about children), for being non-white, and for ethnicity (knowing Dutch is a requirement for welfare recipients). Other variables include missing meetings, age, and “lacks organizing skills”, which was just one of 54 variables based on caseworkers’ subjective assessments. Any comment a caseworker adds puts 1 onto the risk score, even if the comment is positive. The top-scoring 10% are flagged for further investigation.

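To make the mechanics concrete, here is a minimal sketch of a scoring pipeline of that general shape. The variable names and weights are invented for illustration – this is not Rotterdam’s actual model or its coefficients – but the structure follows the description above: weighted variables, one extra point per caseworker comment, and the top decile flagged for investigation.

```python
# Illustrative sketch only: invented variable names and weights, not the real
# Rotterdam model. Weighted variables plus +1 per caseworker comment, with the
# top-scoring 10% of claimants flagged for investigation.

def risk_score(claimant: dict, weights: dict) -> float:
    score = 0.0
    for variable, weight in weights.items():
        score += weight * claimant.get(variable, 0)
    # Every caseworker comment adds 1, even if the comment is positive.
    score += len(claimant.get("caseworker_comments", []))
    return score

def flag_top_decile(claimants: list, weights: dict) -> list:
    ranked = sorted(claimants, key=lambda c: risk_score(c, weights), reverse=True)
    cutoff = max(1, len(ranked) // 10)  # top 10% go to investigators
    return ranked[:cutoff]

# Hypothetical data: gendered and subjective variables like these are exactly
# what bakes discrimination into the score.
weights = {"is_female": 1.0, "has_children": 0.5, "lacks_organizing_skills": 1.0}
claimants = [
    {"id": 1, "is_female": 1, "has_children": 1, "caseworker_comments": ["friendly"]},
    {"id": 2, "is_female": 0, "has_children": 0, "caseworker_comments": []},
]
print([c["id"] for c in flag_top_decile(claimants, weights)])  # -> [1]
```

Even in this toy version, a caseworker’s friendly note raises the score – the same perversity the investigation found in the real system.
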
This is the system that Accenture, the city’s technology partner on the early versions, said at its unveiling in 2018 was an “ethical solution”, one that promised “unbiased citizen outcomes”. Instead, Wired says, the algorithm “fails the city’s own test of fairness”.

The project’s point wasn’t to pick on Rotterdam; of the dozens of cities they contacted, it just happened to be the only one willing to share the code behind the algorithm, along with the list of variables, prior evaluations, and the data scientists’ handbook. It even – after being threatened with court action under freedom of information laws – shared the mathematical model itself.

The overall conclusion: the system was so inaccurate it was little better than random sampling “according to some metrics”.

What strikes me, aside from the details of this design, is the initial choice of scoring benefits recipients for risk of fraud. Why not score them for risk of missing out on help they’re entitled to? The UK government’s figures on benefits fraud indicate that in 2021-2022 overpayment (including error as well as fraud) amounted to 4% of total expenditure, and *underpayment* to 1.2%. Underpayment is a lot less, but it’s still substantial (£2.6 billion). Yes, I know, the point of the scoring system is to save money, but the point of the *benefits* system is to help people who need it. The suspicion was always there, but the technology has altered the balance.

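For scale, a quick back-of-the-envelope calculation – my own arithmetic, on the assumption that the £2.6 billion corresponds to the 1.2% underpayment rate – puts the implied overpayment at roughly three times as much:

```python
# Back-of-the-envelope only: the implied total and the overpayment amount are
# not stated in the column; they follow from the two percentages and the
# £2.6 billion figure above.
underpayment = 2.6e9                       # £2.6 billion, said to be 1.2%
total_expenditure = underpayment / 0.012   # ≈ £217 billion implied
overpayment = total_expenditure * 0.04     # ≈ £8.7 billion at the 4% rate
print(f"total ≈ £{total_expenditure / 1e9:.0f}bn, overpaid ≈ £{overpayment / 1e9:.1f}bn")
```
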
This was the point the writer Ellen Ullman noted in her 1997 book Close to the Machine: the hard-edged nature of these systems and their ability to surveil people in new ways “infect” their owners with suspicion, even of people they’ve long trusted and even when the system itself was intended to be helpful. On a societal scale, these “suspicion machines” embed increased division in our infrastructure; in his book, McQuillan warns us to watch for “functionality that contributes to violent separations of ‘us and them’.”

Along those lines, it’s disturbing that Open AI, the owner of ChatGPT and GPT-4 (and several other generative AI gewgaws), has now decided to keep secret the details of its large language models. That is, we have no sight into what data was used in training, what software and hardware methods were used, or how energy-intensive it is. If there’s a machine loose in the world’s computer systems pretending to be human, shouldn’t we understand how it works? It would help damp down our tendency to imagine we see a mind in there.

The company’s argument appears to be that because these models could become harmful it’s bad to publish how they work because then bad actors will use them to create harm. In the cybersecurity field we call this “security by obscurity” and there is a general consensus that it does not work as a protection.

In a lengthy article at New York magazine, Elizabeth Weil quotes Daniel Dennett’s assessment of these machines: “counterfeit people” that should be seen as the same sort of danger to our system as counterfeit money. Bender suggests that rather than trying to make fake people we should be focusing more on making tools to help people.

What ties those scoring systems to the large language models behind GPT is that in both cases it’s all about mining our shared cultural history, with all its flaws and misjudgments, in response to a prompt, and pretending the results have meaning and create new knowledge. And *that’s* what’s being embedded into the world’s infrastructure. Have we learned nothing from Clever Hans?

Illustrations: Clever Hans, performing in Leipzig in 1912 (by Karl Krall, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Gap year

What do Internet users want?

First, they want meaningful access. They want usability. They want not to be scammed, manipulated, lied to, exploited, or cheated.

It’s unlikely that any of the ongoing debates in either the US or UK will deliver any of those.

First and foremost, this week concluded two frustrating years in which the US Senate failed to confirm the appointment of Public Knowledge co-founder and EFF board member Gigi Sohn to the Federal Communications Commission. In her withdrawal statement, Sohn blamed a smear campaign by “legions of cable and media industry lobbyists, their bought-and-paid-for surrogates, and dark money political groups with bottomless pockets”.

Whether you agree or not, the result remains that for the last two years and for the foreseeable future the FCC will remain deadlocked and problems such as the US’s lack of competition and patchy broadband provision will remain unsolved.

Meanwhile, US politicians continue obsessing about whether and how to abort-retry-fail Section 230, that pesky 26-word law that relieves Internet hosts of liability for third-party content. This week it was the turn of the Senate Judiciary Committee. In its hearing, the Internet Society’s Andrew Sullivan stood out for trying to get across to lawmakers that S230 wasn’t – couldn’t have been – intended as protectionism for the technology giants because they did not exist when the law was passed. It’s fair to say that S230 helped allow the growth of *some* Internet companies – those that host user-generated content. That means all the social media sites as well as web boards and blogs and Google’s search engine and Amazon’s reviews, but neither Apple nor Netflix makes its living that way. Attacking the technology giants is a popular pastime just now, but throwing out S230 without due attention to the unexpected collateral damage will just make them bigger.

Also on the US political mind is a proposed ban on TikTok. It’s hard to think of a move that would more quickly alienate young people. Plus, it fails to get at the root problem. If the fear is that TikTok gathers data on Americans and sends it home to China for use in designing manipulative programs…well, why single out TikTok when it lives in a forest of US companies doing the same kind of thing? As Karl Bode writes at TechDirt, if you really want to mitigate that threat, rein in the whole forest. Otherwise, if China really wants that data it can buy it on the open market.

Meanwhile, in the UK, as noted last week, opposition continues to increase to the clauses in the Online Safety bill proposing to undermine end-to-end encryption by requiring platforms to proactively scan private messages. This week, WhatsApp said it would withdraw its app from the UK rather than comply. However important the UK market is, it can’t possibly be big enough for Meta to risk fines of 4% of global revenues and criminal sanctions for executives. The really dumb thing is that everyone within the government uses WhatsApp because of its convenience and security, and we all know it. Or do they think they’ll have special access denied the rest of the population?

Also in the UK this week, the Data Protection and Digital Information bill returned to Parliament for its second reading. This is the UK’s post-Brexit attempt to “take control” by revising the EU’s General Data Protection Regulation; it was delayed during Liz Truss’s brief and destructive outing as prime minister. In its statement, the government talks about reducing the burdens on businesses without any apparent recognition that divergence from GDPR is risky for anyone trading internationally and complying with two regimes must inevitably be more expensive than complying with one.

The Open Rights Group and 25 other civil society organizations have written a letter (PDF) laying out their objections, noting that the proposed bill, in line with other recent legislation that weakens civil rights, weakens oversight and corporate accountability, lessens individuals’ rights, and weakens the independence of the Information Commissioner’s Office. “Co-designed with businesses from the start” is how the government describes the bill. But data protection law was not supposed to be designed for business – or, as Peter Geoghegan says at the London Review of Books, to aid SLAPP suits; it is supposed to protect our human rights in the face of state and corporate power. As the cryptography pioneer Whit Diffie said in 2019, “The problem isn’t privacy; it’s corporate malfeasance.”

The most depressing thing about all of these discussions is that the public interest is the loser in all of them. It makes no sense to focus on TikTok when US companies are just as aggressive in exploiting users’ data. It makes no sense to focus solely on the technology giants when the point of S230 was to protect small businesses, non-profits, and hobbyists. And it makes no sense to undermine the security afforded by end-to-end encryption when it’s essential for protecting the vulnerable people the Online Safety bill is supposed to help. In a survey, EDRi finds that compromising secure messaging is highly unpopular with young people, who clearly understand the risks to political activism and gender identity exploration.

One of the most disturbing aspects of our politics in this century so far is the widening gap between what people want, need, and know and the things politicians obsess about. We’re seeing this reflected in Internet policy, and it’s not helpful.

Illustrations: Andrew Sullivan, president of the Internet Society, testifying in front of the Senate Judiciary Committee.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Review: Survival of the Richest

A former junior minister who’d been a publicity magnet while in office once told me that it’s impossible to travel on the tube when you’re famous – except in morning rush hour, when everyone glumly hides behind their newspaper. (This was some while ago, before smartphones.)

It was the first time I’d realized that if you were going to be famous it was wise to also be rich enough to buy yourself some personal space. The problem we face today is that we have multi-billionaires who are so rich that they can surround themselves with nothing *but* personal space, and rain in the form of other people never falls into their lives.

In fact, as Douglas Rushkoff writes in Survival of the Richest, this class of human sees the rest of us as an impediment to their own survival. Instead, they want to extract everything they can from us and then achieve escape velocity as completely as possible.

Rushkoff came to realize this when he was transported far out into the American southwestern desert by a pentangle of multi-billionaires who wanted advice: what, in his opinion, was the best way to hide out from and survive various prospective catastrophes (“The Event”)? Climate change, pandemics, mass migration, and resource depletion – where to go and for how long? Alaska, New Zealand, Mars, or the Metaverse: all their ideas about the future involved escaping humanity. Except: what, one wanted to know, would be the best way to keep control of their private security force?

This was the moment when Rushkoff discovered what he calls “The Mindset”, whose origins and development are what the book is really about. It is, he writes, “a mindset where ‘winning’ means earning enough money to insulate themselves from the damage they are creating by earning money in that way. It’s as if they want to build a car that goes fast enough to escape from its own exhaust”. The Mindset is a game – and a game needs an end: in this case, a catastrophe they can invent a technology to escape.

He goes on to tease out the elements of The Mindset: financial abstraction, Richard Dawkins’ memes that see humans as machines running code with no pesky questions of morals, technology design, the type of philanthropy that hands out vaccines but refuses to waive patents so lower-income countries can make them. The Mindset comprehends competition, but not collaboration even though, as Rushkoff notes, our greatest achievement, science, is entirely collaborative.

‘Twas not ever thus. Go back to Apple’s famous 1984 Super Bowl ad and recall the promise that ushered in the first personal computers: empower the masses and destroy the monolith (at the time, IBM). Now, the top 0.1% compete to “win” control of all they survey, the top 1% scrabble for their pocket change, and the rest subsist on whatever is too small for them to notice. This is not the future we thought we were buying into.

As Rushkoff concludes, the inevitability narrative that accompanies so much technological progress is nonsense. We have choices. We can choose to define value in social terms rather than exit strategies. We can build companies and services – and successful cooperatives – to serve people and stop expanding them when they reach the size that fits their purpose. We do not have to believe today’s winners when they tell us a more equitable world is impossible. We don’t need escape fantasies; we can change reality.

Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly-paid marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was proved in 1966 with the release of the experimental chatbot therapist Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could choose whether to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment to study users’ moods when it manipulated their newsfeeds. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, we thought that logic suggested that such companies would continue to do the research but without publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, like the old saying, one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. For those with unlimited control over the system we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the diligence researchers should follow are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business nature is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI, HAL in Stanley Kubrick’s 2001: A Space Odyssey, taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, that are being put into decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of its controller.

Illustrations: HAL, in 2001.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns back to November 2001. Comment here, or follow on Mastodon or Twitter.