Appropriate privacy

At a workshop this week, one of the organizers posed a question that included the term “appropriate”. As in: “lawful access while maintaining appropriate user privacy”. We were there to think about approaches that could deliver better privacy and security over the next decade, with privacy defined as “the embedding of encryption or anonymization in software or devices”.

I had to ask: What work is “appropriate” doing in that sentence?

I had to ask because last weekend’s royal show was accompanied by preemptive arrests well before events began – at 7:30 AM. Most of those arrested were anti-monarchy protesters armed with luggage straps and placards, climate change protesters whose T-shirts said “Just Stop Oil”, and volunteers for the Night Stars, held on suspicion that the rape whistles they hand out to vulnerable women might be used to disrupt the parading horses. All of these groups had coordinated with the Metropolitan Police in advance or actually worked with them…which made no difference. All were held for many hours. Since then, the news has broken that an actual monarchist was arrested, DNA-sampled, fingerprinted, and held for 13 hours just for standing *near* some protesters.

It didn’t help the look of the thing that several days before the Big Show, the Met tweeted a warning: “Our tolerance for any disruption, whether through protest or otherwise, will be low.”

The arrests were facilitated by the last-minute passage of the Public Order Act just days before, with the goal of curbing “disruptive” protests. Among the now-banned practices is “locking on” – that is, locking oneself to a physical structure, a tactic the suffragettes used, among many others, in campaigning for women’s right to vote. Because that right is now so thoroughly accepted, we tend to forget how radical and militant the suffragettes had to be to get their point across and how brutal the response was. A century from now, the mainstream may look back and marvel at the treatment meted out to climate change activists. We all know they’re *right*, whether or not we like their tactics.

Since the big event, the House of Lords has published its report on the current legislation. The government is seeking to expand the Public Order Act even further by lowering the bar for “serious disruption” from “significant” and “prolonged” to “more than minor”, and it may include the cumulative impact of repeated protests in the same area. The House of Lords is unimpressed by these amendments via secondary legislation, first because of their nature, and second because they were rejected during the scrutiny of the original bill, which itself is only days old. Secondary legislation gets looked at less closely; the Lords suggest that using this route to bring back rejected provisions “raises possible constitutional issues”. All very polite, for accusing the government of abusing the system.

In the background, we’re into the fourth decade of the same argument between governments and technical experts over encryption. Technical experts by and large take the view that opening a hole for law enforcement access to encrypted content fatally compromises security; law enforcement by and large longs for the old days when they could implement a wiretap with a single phone call to a major national telephone company. One of the technical experts present at the workshop phrased all this gently by explaining that providing access enlarges the attack surface, and the security of such a system will always be weaker because there are more “moving parts”. Adding complexity always makes security harder.

This is, of course, a live issue because of the Online Safety Bill, a sprawling, 262-page mess that includes a requirement to scan public and private messaging for child sexual abuse material, whether or not the communications are encrypted.

None of this is the fault of the workshop we began with, which is part of a genuine attempt to find a way forward on a contentious topic, and whose organizers didn’t have any of this in mind when they chose their words. But hearing “appropriate” in that way at that particular moment raised flags: you can justify anything if the level of disruption that’s allowed to trigger action is vague and you’re allowed to use “on suspicion of” indiscriminately as an excuse. “Police can do what they want to us now,” George Monbiot writes at the Guardian of the impact of the bill.

Lost in the upset about the arrests was the Met’s decision to scan the crowds with live facial recognition. It’s impossible to overstate the impact of this technology. There will be no more recurring debates about ID cards because our faces will do the job. Nothing has been said about how the Met used it on the day, whether its use led to arrests (or on what grounds), or what the Met plans to do with the collected data. The police – and many private actors – have certainly inhaled the Silicon Valley ethos of “ask forgiveness, not permission”.

In this direction of travel, many things we have taken for granted as rights become privileges that can be withdrawn at will, and what used to be public spaces open to all become restricted like an airport or a small grocery store in Whitley Bay. This is the sliding scale in which “appropriate user privacy” may be defined.

Illustrations: Protesters at the coronation (by Alisdair Hickson, at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

The privacy price of food insecurity

One of the great unsolved questions continues to be: what is my data worth? Context is always needed: worth to whom, under what circumstances, for what purpose? Still, supermarkets may give us a clue.

At Novara Media, Jake Hurfurt, who runs investigations for Big Brother Watch, has been studying supermarket loyalty cards. He finds that, increasingly, only loyalty card holders have access to special offers, which used to be open to any passing customer.

Tesco now and Sainsbury’s soon, he says, “are turning the cost-of-living crisis into a cost-of-privacy crisis.”

Neat phrasing, but I’d say it differently: these retailers are taking advantage of the cost-of-living crisis to extort desperate people into giving up their data. The average value of the discounts might – for now – give a clue to the value the supermarkets place on that data.

But not for long, since the pattern going forward is a predictable one of monopoly power: as the remaining supermarkets follow suit, smaller independent shops thin out under the weight of rising fuel bills and shrinking margins, and people are left with fewer choices, the savings from the loyalty card-only special offers will shrink. Not so much that they won’t be worth having, but it seems obvious the companies will be more generous with the discounts – if “generous” is the word – in the sign-up phase than once they’ve achieved customer lock-in.

The question few shoppers are in a position to answer while they’re trying to lower the cost of filling their shopping carts is what the companies do with the data they collect. BBW took the time to analyze Tesco’s and Sainsbury’s privacy policies, and found that besides identity data, the companies collect detailed purchase histories as well as bank account and payment information…which they share with “retail partners, media partners, and service providers”. In Tesco’s case, these include Facebook, Google, and, for those who subscribe to them, Virgin Media and Sky. Hyper-targeted personal ads, right there on your screen!

All that sounds creepy enough. But consider what could well come next. Also this week, a cross-party group of 50 MPs and peers, in a letter co-signed by BBW, Privacy International, and Liberty, wrote to Frasers Group deploring that company’s use of live facial recognition in its stores, which include Sports Direct and the department store chain House of Fraser. Frasers Group’s purpose, like that of the retailers and pub chains running trials a decade ago, is effectively to keep out people suspected of shoplifting and bad behavior. Note that’s “suspected”, not “convicted”.

What happens as these different privacy invasions start to combine?

A store equipped with your personal shopping history and financial identity, plus live facial recognition cameras, knows the instant you walk in who you are, what you like to buy, and how valuable a customer you are. Such a system, equipped with some sort of scoring, could make very fine judgments. Such as: this customer is suspected of stealing another customer’s handbag, but they’re highly profitable to us, so we’ll let that go. Or: this customer isn’t suspected of anything much, but they look scruffy, and although they browse they never buy anything – eject! Or even: this journalist wrote a story attacking our company – show them the most expensive personalized prices. One US entertainment company is already using live facial recognition to bar entry to its venues to anyone who works for any law firm involved in litigation against it. Britain’s data protection laws should protect us against that sort of abuse, but will they survive the upcoming bonfire of retained EU law?

And, of course, what starts with relatively anodyne product advertising becomes a whole lot more sinister when it starts getting applied to politics, voter manipulation and segmentation, and “pre-crime” systems.

Add the possibilities of technology that allows retailers to display personalized pricing in-store, just as an online retailer can in the privacy of your own browser. Could we get to a scenario where a retailer, able to link your real-world identity and purchasing power to your online and offline movements, could perform a detailed calculation of what you’d be willing to pay for a particular item? What would surge pricing for the last remaining stock of the year’s hottest toy on Christmas Eve look like?

This idea allows me to imagine shopping partnerships, in which members compare the prices they’re offered and the one quoted the cheapest price buys that item for the whole group. In this dystopian future, I imagine such gambits would be banned.

Most of this won’t affect people rich enough to grandly refuse to sign up for loyalty cards, and none of it will affect people rich and eccentric enough to source everything from local, independent shops – and, if they’re allowed, pay cash.

Four years ago, Jaron Lanier toured with the proposal that we should be paid for contributing to commercial social media sites. The problem with this idea was and is that payment creates a perverse incentive for users to violate their own privacy even more than they do already, and that fair payment can’t be calculated when the consequences of disclosure are perforce unknown.

The supermarket situation is no different. People need food security and affordability. They should not have to pay for that with their privacy.

Illustrations: London supermarket checkout, 2006 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Breaking badly

This week, the Online Safety Bill reached the House of Lords, which will consider 300 amendments. There are lots of problems with this bill, but the one that continues to have the most campaigning focus is the age-old threat to require access to end-to-end encrypted messaging services.

At his blog, security consultant Alec Muffett predicts the bill will fail in implementation if it passes. For one thing, he cites the argument made by Richard Allan, Baron Allan of Hallam, that the UK government wants the power to order decryption but will likely only ever use it as a threat to force the technology companies to provide other useful data. Meanwhile, the technology companies have pushed back with an open letter saying they will withdraw their encrypted products from the UK market rather than weaken them.

In addition, Muffett believes the legally required secrecy surrounding a Technical Capability Notice – an order to a service provider to provide access to communications, devised for the legacy telecommunications world – is impossible to maintain in today’s world of computers and smartphones. Too many researchers and hackers make it their job to study changes to apps, and they would surely notice and publicize new decryption capabilities. The government will be left with the choice of alienating the public or failing to deliver its stated objectives.

At Computer Weekly, Bill Goodwin points out that undermining encryption will affect anyone communicating with anyone in Britain, including the Ukrainian military communicating with the UK’s Ministry of Defence.

Meanwhile, this week Ed Caesar reports at The New Yorker on law enforcement’s successful efforts to penetrate communications networks protected by Encrochat and Sky ECC. It’s a reminder that there are other choices besides opening up an entire nation’s communications to attack.

***

This week also saw the disappointing damp-squib settlement of the lawsuit brought by Dominion Voting Systems against Fox News. Disappointing, because it leaves Fox and its hosts free to go on wreaking daily havoc across America by selling their audience rage-enhanced lies without even an apology. The payment that Fox has agreed to – $787 million – sounds like a lot, but a) the company can afford it given the size of its cash pile, and b) most of it will likely be covered by insurance.

If Fox’s major source of revenues were advertising, these defamation cases – still to come is a similar case brought by Smartmatic – might make their mark by alienating advertisers, as has been happening with Twitter. But it’s not; instead, Fox is supported by the fees cable companies pay to carry the channel. Even subscribers who never watch it are paying monthly for Fox News to go on fomenting discord and spreading disinformation. And Fox is seeking a raise to $3 per subscriber, which would mean more than $1.8 billion a year just from affiliate revenue.

All of that insulates the company from boycotts, alienated advertisers, and even the next tranche of lawsuits. The only feedback loop in play is ratings – and Fox News remains the most-watched basic cable network.

This system could not be more broken.

***

Meanwhile, an era is ending: Netflix will mail out its last rental DVD in September. As Chris Stokel-Walker writes at Wired, the result will be to shrink the range of content available by tens of thousands of titles because the streaming library is a fraction of the size of the rental library.

This reality seems backwards. Surely streaming services ought to have the most complete libraries. But licensing and lockups mean that Netflix can only host for streaming what content owners decree it may, whereas with the mail rental service, once Netflix had paid the commercial rental rate to buy a DVD, it could stay in the catalogue until the disc wore out.

The upshot is yet another data point that makes pirate services more attractive: no ads, easy access to the widest range of content, and no licensing deals to get in the way.

***

In all the professions people have been suggesting are threatened by large language model-based text generation – journalism, in particular – no one to date has listed fraudulent spiritualist mediums. And yet…

The family of Michael Schumacher is preparing legal action against the German weekly Die Aktuelle for publishing an interview with the seven-time Formula 1 champion. Schumacher has been out of the public eye since suffering a brain injury while skiing in 2013. The “interview” is wholly fictitious, the quotes created by prompting an “AI” chat bot.

Given my history as a skeptic, my instinctive reaction was to flash on articles in which mediums produced supposed quotes from dead people, all of which tended to be anodyne representations bereft of personality. Dressing this up in the trappings of “AI” makes such fakery no less reprehensible.

An article in the Washington Post examines Google’s C4 data set scraped from 15 million websites and used to train several of the highest profile large language models. The Post has provided a search engine, which tells us that my own pelicancrossing.net, which was first set up in 1996, has contributed 160,000 words or phrases (“tokens”), or 0.0001% of the total. The obvious implication is that LLM-generated fake interviews with famous people can draw on things they’ve actually said in the past, mixing falsity and truth into a wasteland that will be difficult to parse.
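For the curious, here is a quick back-of-the-envelope check of what that share implies about the size of the training set – a rough sketch in Python, using only the figures quoted above; the implied total is an estimate derived from them, not an official C4 statistic.

```python
# Rough arithmetic on the Post's C4 figures quoted above.
# site_tokens and share_percent come from the column; the implied
# corpus total is simply backed out from those two numbers.

site_tokens = 160_000      # tokens attributed to pelicancrossing.net
share_percent = 0.0001     # reported share of the whole data set, in percent

# Convert the percentage to a fraction and back out the implied corpus size.
implied_total_tokens = site_tokens / (share_percent / 100)

print(f"Implied size of the C4 data set: ~{implied_total_tokens:,.0f} tokens")
# ~160,000,000,000 – on the order of 160 billion tokens
```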

Illustrations: The House of Lords in 2011 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Unclear and unpresent dangers

Monthly computer magazines used to fret that their news pages would be out of date by the time the new issue reached readers. This week in AI, a blog posting is out of date before you hit send.

This – Friday – morning, the Italian data protection authority, Il Garante, has ordered ChatGPT to stop processing the data of Italian users until it complies with the General Data Protection Regulation. Il Garante’s objections, per Apple’s translation posted by Ian Brown: ChatGPT provides no legal basis for collecting and processing the massive store of personal data used to train the model, and it fails to filter out users under 13.

This may be the best possible answer to the complaint I’d been writing below.

On Wednesday, the Future of Life Institute published an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s current state of the art, GPT-4. Apart from Elon Musk, Steve Wozniak, and Skype co-founder Jaan Tallinn, most of the signatories are unfamiliar names to most of us, though the companies and institutions they represent aren’t – Pinterest, the MIT Center for Artificial Intelligence, UC Santa Cruz, Ripple, ABN-Amro Bank. Almost immediately, there was a dispute over the validity of the signatures.

My first reaction was on the order of: huh? The signatories are largely people who are inventing this stuff. They don’t have to issue a call. They can just *stop*, work to constrain the negative impacts of the services they provide, and lead by example. Or isn’t that sufficiently performative?

A second reaction: what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, such teams have been axed or cut at Microsoft and Twitch; Twitter, of course, ditched such fripperies last November in Musk’s inaugural wave of cost-cutting. The letter does not call for them to be reinstated.

The problem, as familiar critics such as Emily Bender pointed out almost immediately, is that the threats the letter focuses on are distant not-even-thunder. As she went on to say in a Twitter thread, the artificial general intelligence of the Singularitarians’ rapture is nowhere in sight. By focusing on distant threats – longtermism – we ignore the real and present problems whose roots are being embedded ever more deeply into the infrastructure now being built: exploited workers, culturally appropriated data, lack of transparency around the models and algorithms used to build these systems… basically, all the ways they impinge upon human rights.

This isn’t the first time such a letter has been written and circulated. In 2015, Stephen Hawking, Musk, and about 150 others similarly warned of the dangers of the rise of “superintelligences”. Just a year later, in 2016, ProPublica investigated the algorithm behind COMPAS, a risk-scoring criminal justice system in use in US courts in several states. Under Julia Angwin’s scrutiny, the algorithm failed at both accuracy and fairness; it was heavily racially biased. *That*, not some distant fantasy, was the real threat to society.

“Threat” is the key issue here. This is, at heart, a letter about a security issue, and solutions to security issues are – or should be – responses to threat models. What is *this* threat model, and what level of resources to counter it does it justify?

Today, I’m far more worried by the release onto public roads of Teslas running Full Self Drive helmed by drivers with an inflated sense of the technology’s reliability than I am about all of human work being wiped away any time soon. This matters because, as Jessie Singer, author of There Are No Accidents, keeps reminding us, what we call “accidents” are the results of policy decisions. If we ignore the problems we are presently building in favor of fretting about a projected fantasy future, that, too, is a policy decision, and the collateral damage is not an accident. Can’t we do both? I imagine people saying. Yes. But only if we *do* both.

In a talk this week for a French international research group, Lilian Edwards surveyed the EU’s in-progress AI Act. This effort began well before today’s generative tools exploded into public consciousness, and isn’t likely to conclude before 2024. It is, therefore, much more focused on the kinds of risks attached to public sector scandals like COMPAS and those documented in Cathy O’Neil’s 2017 book Weapons of Math Destruction, which laid bare the problems with algorithmic scoring that has little to tether it to reality.

With or without a moratorium, what will “AI” look like in 2024? It has changed out of recognition just since the last draft text was published. Prediction from this biological supremacist: it still won’t be sentient.

All this said, as Edwards noted, even if the letter’s proposal is self-serving, a moratorium on development is not necessarily a bad idea. It’s just that if the risk is long-term and existential, what will six months do? If the real risk is the hidden continued centralization of data and power, then those six months could be genuinely destructive. So far, it seems like its major function is as a distraction. Resist.

Illustrations: IBM’s Watson, which beat two of Jeopardy‘s greatest champions in 2011. It has since failed to transform health care.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L. Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly-paid marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was proved as long ago as 1966 with the release of the experimental chatbot therapist Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could opt whether to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment to study users’ moods when it manipulated their newsfeeds. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, we thought that logic suggested that such companies would continue to do the research but without publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, as in the old saying: one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. To those with unlimited control over the system, we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the due diligence researchers should exercise are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business nature is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI, HAL in Stanley Kubrick’s 2001: A Space Odyssey, taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, the type being put into decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of its controller.

Illustrations: HAL, in 2001.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns back to November 2001. Comment here, or follow on Mastodon or Twitter.