Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly-paid marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was proved in 1966 with the release of Joseph Weizenbaum’s experimental chatbot therapist, Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when he was widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could choose whether to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment, in which it manipulated users’ newsfeeds to study the effect on their moods. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, logic suggested that such companies would continue to do the research but simply stop publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, as in the old saying: one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. To those with unlimited control over the system, we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the due diligence researchers should exercise are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI, HAL in Stanley Kubrick’s 2001: A Space Odyssey, taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, which are being deployed in decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of its controller.

Illustrations: HAL, in 2001.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns back to November 2001. Comment here, or follow on Mastodon or Twitter.

net.wars – 2023 and beyond

After 16 years, net.wars has a new site URL and new software. All the entries going back to 2006 will stay at the old site URL, but going forward you will find net.wars every Friday here, where you are now.

One consequence is that comments are now open (again, for those with very long memories). The old blog got so much spam that even putting up a post became a slow and difficult process, and the host recommended shutting them down. Another is that posts through last Friday, January 20, 2023, are static pages. To find old posts, either consult the archive page, which goes all the way back to November 2001, when net.wars began as a column for The Inquirer, or use DuckDuckGo (or another search engine of your choice) to search.

Thanks for reading.