Breaking badly

This week, the Online Safety Bill reached the House of Lords, which will consider 300 amendments. There are lots of problems with this bill, but the one that continues to have the most campaigning focus is the age-old threat to require access to end-to-end encrypted messaging services.

At his blog, security consultant Alec Muffett predicts that the bill, if passed, will fail in implementation. For one thing, he cites the argument made by Richard Allan, Baron Allan of Hallam, that the UK government wants the power to order decryption but will likely only ever use it as a threat to force the technology companies to provide other useful data. Meanwhile, the technology companies have pushed back with an open letter saying they will withdraw their encrypted products from the UK market rather than weaken them.

In addition, Muffett believes the legally required secrecy surrounding a Technical Capability Notice – the order requiring a service provider to grant access to communications – was devised for the legacy telecommunications world and is unworkable in today’s world of computers and smartphones. Secrecy is no longer possible given the many researchers and hackers who make it their job to study changes to apps, and who would surely notice and publicize new decryption capabilities. The government will be left with the choice of alienating the public or failing to deliver its stated objectives.

At Computer Weekly, Bill Goodwin points out that undermining encryption will affect anyone communicating with anyone in Britain, including the Ukrainian military communicating with the UK’s Ministry of Defence.

Meanwhile, this week Ed Caesar reports at The New Yorker on law enforcement’s successful efforts to penetrate communications networks protected by Encrochat and Sky ECC. It’s a reminder that there are other choices besides opening up an entire nation’s communications to attack.

***

This week also saw the disappointing damp-squib settlement of the lawsuit brought by Dominion Voting Systems against Fox News. Disappointing, because it leaves Fox and its hosts free to go on wreaking daily havoc across America by selling their audience rage-enhanced lies, without even an apology. The payment Fox has agreed to – $787.5 million – sounds like a lot, but a) the company can afford it given the size of its cash pile, and b) most of it will likely be covered by insurance.

If Fox’s major source of revenues were advertising, these defamation cases – still to come is a similar case brought by Smartmatic – might make their mark by alienating advertisers, as has been happening with Twitter. But it’s not; instead, Fox is supported by the fees cable companies pay to carry the channel. Even subscribers who never watch it are paying monthly for Fox News to go on fomenting discord and spreading disinformation. And Fox is seeking an increase to $3 per subscriber, which would mean more than $1.8 billion a year from affiliate revenue alone.

All of that insulates the company from boycotts, alienated advertisers, and even the next tranche of lawsuits. The only feedback loop in play is ratings – and Fox News remains the most-watched basic cable network.

This system could not be more broken.

***

Meanwhile, an era is ending: Netflix will mail out its last rental DVD in September. As Chris Stokel-Walker writes at Wired, the result will be to shrink the range of content available by tens of thousands of titles because the streaming library is a fraction of the size of the rental library.

This reality seems backwards. Surely streaming services ought to have the most complete libraries. But licensing deals and lockups mean that Netflix can only stream what content owners decree it may, whereas with the mail rental service, once Netflix had paid the commercial rental rate to buy a DVD, it could stay in the catalogue until the disc wore out.

The upshot is yet another data point that makes pirate services more attractive: no ads, easy access to the widest range of content, and no licensing deals to get in the way.

***

In all the professions people have been suggesting are threatened by large language model-based text generation – journalism, in particular – no one to date has listed fraudulent spiritualist mediums. And yet…

The family of Michael Schumacher is preparing legal action against the German weekly Die Aktuelle for publishing an interview with the seven-time Formula 1 champion. Schumacher has been out of the public eye since suffering a brain injury while skiing in 2013. The “interview” is wholly fictitious, the quotes created by prompting an “AI” chat bot.

Given my history as a skeptic, my instinctive reaction was to flash on articles in which mediums produced supposed quotes from dead people, all of which tended to be anodyne representations bereft of personality. Dressing this up in the trappings of “AI” makes such fakery no less reprehensible.

An article in the Washington Post examines Google’s C4 data set scraped from 15 million websites and used to train several of the highest profile large language models. The Post has provided a search engine, which tells us that my own pelicancrossing.net, which was first set up in 1996, has contributed 160,000 words or phrases (“tokens”), or 0.0001% of the total. The obvious implication is that LLM-generated fake interviews with famous people can draw on things they’ve actually said in the past, mixing falsity and truth into a wasteland that will be difficult to parse.
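Those two numbers imply a rough size for the whole corpus. As a back-of-the-envelope check (the figures are the Post’s; the arithmetic, sketched here in Python, is mine), a 160,000-token contribution amounting to 0.0001% of the whole implies a training corpus on the order of 160 billion tokens:

```python
# Sanity check of the C4 figures quoted above.
site_tokens = 160_000      # tokens contributed by pelicancrossing.net
site_share = 0.0001 / 100  # 0.0001% expressed as a fraction
total_tokens = site_tokens / site_share
print(f"implied corpus size: about {total_tokens / 1e9:.0f} billion tokens")
```

That order of magnitude is roughly consistent with published descriptions of C4’s size.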

Illustrations: The House of Lords in 2011 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Unclear and unpresent dangers

Monthly computer magazines used to fret that their news pages would be out of date by the time the new issue reached readers. This week in AI, a blog posting is out of date before you hit send.

This – Friday – morning, the Italian data protection authority, Il Garante, has ordered ChatGPT to stop processing the data of Italian users until it complies with the General Data Protection Regulation. Il Garante’s objections, per Apple’s translation posted by Ian Brown: ChatGPT provides no legal basis for collecting and processing the massive store of personal data used to train the model, and it fails to filter out users under 13.

This may be the best possible answer to the complaint I’d been writing below.

On Wednesday, the Future of Life Institute published an open letter calling for a six-month pause on developing systems more powerful than OpenAI’s current state of the art, GPT-4. Apart from Elon Musk, Steve Wozniak, and Skype co-founder Jaan Tallinn, most of the signatories are names unfamiliar to most of us, though the companies and institutions they represent aren’t – Pinterest, the MIT Center for Artificial Intelligence, UC Santa Cruz, Ripple, ABN Amro Bank. Almost immediately, there was a dispute over the validity of some of the signatures.

My first reaction was on the order of: huh? The signatories are largely people who are inventing this stuff. They don’t have to issue a call. They can just *stop*, work to constrain the negative impacts of the services they provide, and lead by example. Or isn’t that sufficiently performative?

A second reaction: what about all those AI ethics teams that Silicon Valley companies are disbanding? Just in the last few weeks, these teams have been axed or cut at Microsoft and Twitch; Twitter, of course, ditched such fripperies last November in Musk’s inaugural wave of cost-cutting. The letter does not call for any of these to be reinstated.

The problem, as familiar critics such as Emily Bender pointed out almost immediately, is that the threats the letter focuses on are distant not-even-thunder. As she went on to say in a Twitter thread, the artificial general intelligence of the Singularitarians’ rapture is nowhere in sight. By focusing on distant threats – longtermism – we ignore the real and present problems whose roots are being embedded ever more deeply into the infrastructure now being built: exploited workers, culturally appropriated data, lack of transparency around the models and algorithms used to build these systems… basically, all the ways they impinge upon human rights.

This isn’t the first time such a letter has been written and circulated. In 2015, Stephen Hawking, Musk, and about 150 others similarly warned of the dangers of the rise of “superintelligences”. Just a year later, in 2016, ProPublica investigated the algorithm behind COMPAS, a risk-scoring criminal justice system in use in US courts in several states. Under Julia Angwin’s scrutiny, the algorithm failed at both accuracy and fairness; it was heavily racially biased. *That*, not some distant fantasy, was the real threat to society.

“Threat” is the key issue here. This is, at heart, a letter about a security issue, and solutions to security issues are – or should be – responses to threat models. What is *this* threat model, and what level of resources to counter it does it justify?

Today, I’m far more worried by the release onto public roads of Teslas running Full Self-Drive, helmed by drivers with an inflated sense of the technology’s reliability, than I am about all human work being wiped away any time soon. This matters because, as Jessie Singer, author of There Are No Accidents, keeps reminding us, what we call “accidents” are the results of policy decisions. If we ignore the problems we are presently building in favor of fretting about a projected fantasy future, that, too, is a policy decision, and the collateral damage is not an accident. Can’t we do both? I imagine people saying. Yes. But only if we *do* both.

In a talk this week for a French international research group, the legal scholar Lilian Edwards outlined the EU’s in-progress AI Act. This effort began well before today’s generative tools exploded into public consciousness, and isn’t likely to conclude before 2024. It is, therefore, much more focused on the kinds of risks attached to public-sector scandals like COMPAS and those documented in Cathy O’Neil’s 2016 book Weapons of Math Destruction, which laid bare the problems with algorithmic scoring with little to tether it to reality.

With or without a moratorium, what will “AI” look like in 2024? It has changed out of recognition just since the last draft text was published. Prediction from this biological supremacist: it still won’t be sentient.

All this said, as Edwards noted, even if the letter’s proposal is self-serving, a moratorium on development is not necessarily a bad idea. It’s just that if the risk is long-term and existential, what will six months do? If the real risk is the hidden continued centralization of data and power, then those six months could be genuinely destructive. So far, it seems like its major function is as a distraction. Resist.

Illustrations: IBM’s Watson, which beat two of Jeopardy’s greatest champions in 2011. It has since failed to transform health care.


Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L. Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly paid, marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was shown as long ago as 1966 with Joseph Weizenbaum’s experimental chatbot therapist, Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could choose whether to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment, in which it manipulated users’ newsfeeds to study the effect on their moods. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, logic suggested that such companies would continue to do the research but stop publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, like the old saying, one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. For those with unlimited control over the system we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the diligence researchers should follow are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business nature is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI – HAL, in Stanley Kubrick’s 2001: A Space Odyssey – taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, which are being put into decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of their controllers.

Illustrations: HAL, in 2001.
