Faking it

I have finally figured out what benefit exTwitter gets from its new owner’s decision to strip out the headlines from linked third-party news articles: you cannot easily tell the difference between legitimate links and ads. Both have big unidentified pictures, and if you forget to look for the little “Ad” label at the top right or check the poster’s identity to make sure it’s someone you actually follow, it’s easy to inadvertently lessen the financial losses accruing to said owner by – oh, the shame and horror – clicking on that ad. This is especially true because the site has taken to injecting these ads with increasing frequency into the carefully curated feed that until recently didn’t have this confusion. Reader, beware.

***

In all the discussion of deepfakes and AI-generated bullshit texts, did anyone bring up the possibility of datafakes? Nature highlights a study in which researchers created a fake database to provide evidence for concluding that one of two surgical procedures is better than the other. This is nasty stuff. The rising number of retracted papers has already exposed serious problems with peer review, problems that are not new but are getting worse. To name just a couple: reviewers are unpaid and often overworked, and what they look for are scientific advances, not fraud.

In the UK, Ben Goldacre has spearheaded initiatives to improve the quality of published research. A crucial part of this is ensuring that researchers state in advance the hypothesis they’re testing, and publish the results of all trials, not just the ones that produce the researcher’s (or funder’s) preferred result.

Science is the best process we have for establishing an edifice of reliable knowledge. We desperately need it to work. As the dust settles on the week of madness at OpenAI, whose board was supposed to care more about safety than about its own existence, we need to get over being distracted by the dramas and the fears of far-off fantasy technology and focus on the fact that the people running the biggest computing projects by and large are not paying attention to the real and imminent problems their technology is bringing.

***

Callum Cant reports at the Guardian that Deliveroo has won a UK Supreme Court ruling that its drivers are self-employed and accordingly do not have the right to bargain collectively for higher pay or better working conditions. Deliveroo apparently won this ruling because of a technicality – its insertion of a clause that allows drivers to send a substitute in their place, an option that is rarely used.

Cant notes the health and safety risks to the drivers themselves, but what about the rest of us? A driver in their tenth hour of a seven-day-a-week grind doesn’t just put themselves at risk; they’re a risk to everyone they encounter on the roads. The way these things are going, if safety becomes a problem, the likelihood is that instead of raising wages to allow drivers a more reasonable schedule and some rest, these companies will turn to surveillance technology, as Amazon has.

In the US, this is what’s happened to truck drivers, and, as Karen Levy documents in her book, Data Driven, it’s counterproductive. Installing electronic logging devices in truckers’ cabs has led older, more experienced, and, above all, *safer* drivers to leave the profession, to be replaced by younger, less experienced, and cheaper drivers with a higher appetite for risk. As Levy writes, improved safety won’t come from surveilling exhausted drivers; what’s needed is structural change to create better working conditions.

***

The UK’s covid inquiry has been livestreaming its hearings on government decision making for the last few weeks, and pretty horrifying they are, too. That’s true even if you don’t include former deputy chief medical officer Jonathan Van-Tam’s account of the threats of violence aimed at him and his family. They needed police protection for nine months and were advised to move out of their house – but didn’t want to leave their cat. Will anyone take the job of protecting public health if this is the price?

Chris Whitty, the UK’s Chief Medical Officer, said the UK was “woefully underprepared”, locked down too late, and made decisions too slowly. He was one of the polite ones.

Former special adviser Dominic Cummings (from whom no one expected politeness) said everyone called Boris Johnson a trolley, because, like a shopping trolley with the inevitable wheel pointing in the wrong direction, he was so inconsistent.

The government chief scientific adviser, Patrick Vallance, had kept a contemporaneous diary, which provided his unvarnished thoughts at the time, some of which were read out. Among them: Boris Johnson was obsessed with older people accepting their fate, unable to grasp the concept of doubling times or comprehend the graphs on the dashboard, and intermittently uncertain whether “the whole thing” was a mirage.

Our leader envy in April 2020 seems correctly placed. To be fair, though: Whitty and Vallance, citing their interactions with their counterparts in other countries, both said that most countries had similar problems, and for the same reason: the leaders of democratic countries are generally not well versed in science. As the Economist’s health policy editor, Natasha Loder, warned in early 2022: elect better leaders. Ask, she said, before you vote, “Are these serious people?” Words to keep in mind as we head toward the elections of 2024.

Illustrations: The medium Mina Crandon and the “materialized spirit hand” she produced during seances.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The end of cool

For a good bit of this year’s We Robot, it felt like abstract “AI” – that is, algorithms running on computers with no mobility – had swallowed the robots whose future this conference was invented to think about. This despite a pre-conference visit to Boston Dynamics, which showed off its Atlas robot’s ability to do gymnastics. It’s cute, but is it useful? Your washing machine is smarter, and its intelligence solves real problems like how to use less water.

There’s always some uncertainty about boundaries at this event: is a machine-learning decision-making system a robot? At the inaugural We Robot in 2012, the engineer Bill Smart summed up the difference: “My iPhone can’t stab me in my bed.” Of course, neither could an early Roomba, which most would agree was the first domestic robot. However, it was also dumb as a floor tile, achieving cleanliness through random repetition rather than intelligent mapping. In the Roomba 1.0 sense, a “robot” is “a device that does boring things so I don’t have to”. Not cool, but useful, and it solves a real problem.

During a session in which participants played a game designed to highlight the conflicts inherent in designing an urban drone delivery system, Lael Odhner offered yet another definition: “A robot is a literary device we use to voice our discomfort with technology.” In the context of an event where participants think through the challenges robots bring to law and policy, this may be the closest approximation.

In the design exercise, our table’s three choices were: fund the FAA (so they can devise and enforce rules and policies), build it as a municipally-owned public service both companies and individuals can use as customers, and ban advertising on the drones for reasons of both safety and offensiveness. A similar exercise last year produced more specific rules, but also led us to realize that a drone delivery service had no benefits over current delivery services.

Much depends on scale. One reason we chose a municipal public service was the scale of noise and environmental impact inevitably generated by multiple competing commercial services. In a paper, Woody Hartzog examined the meaning of “scale”: is scale *more*, or is scale *different*? You can argue, as net.wars often has, that scale *creates* difference, but it’s rarely clear where to place the threshold, or how reaching it changes a technology’s harms or who it makes vulnerable. Ryan Calo and Daniella DiPaola suggested that rather than associate vulnerability with particular classes of people we should see it as variable with circumstances: “Everyone is vulnerable sometimes, and vulnerability is a state that can be created and manipulated toward particular ends.” This seems a more logical and fairer approach.

An aspect of this is that there are two types of rules: harm rules, which empower institutions to limit harm, and power rules, which empower individuals to protect themselves. A possible worked example soon presented itself in Kegan J. Strawn’s and Daniel Sokol’s paper on safety techniques in mobile robots, which suggested copying medical ethics’ consent approach. Then someone described the street scene in which every pedestrian had to give consent to every passing experimental Tesla, possibly an even worse scenario than ad-bearing delivery drones. Pedestrians get nothing out of the situation, and Teslas don’t become safer. What you really want is for car companies not to test the safety of autonomous vehicles on public roads with pedestrians as unwitting crash test dummies.

I try to think every year about how our ideas about integrating robots into society are changing over time. An unusual paper from Maria P. Angel considered this question with respect to privacy scholarship by surveying 1990s writing and 20 years of papers presented at Privacy Law Scholars, the conference whose design We Robot co-founders Calo, Michael Froomkin, and Ian Kerr partly copied. Angel’s conclusion is roughly that the 1990s saw calls for an end to self-regulation, while the 2000s moved from seeing privacy as necessary for individual autonomy and self-determination to emphasizing its collective benefits and, most recently, its importance for human flourishing.

As Hartzog commented, he came to the first We Robot with the belief that “Robots are magic”, only to encounter Smart’s “really fancy hammers.” And, Smart and Cindy Grimm added in 2018, controlled by sensors that are “late, noisy, and wrong”. Hartzog’s early excitement was shared by many of us; the future looked so *interesting* when it was almost entirely imaginary.

Over time, the robotic future has become more nowish, and has shifted in response to technological development; the discussion has become more about real systems (2022) than imagined future ones. The arrival of real robots on our streets – for example, San Francisco’s 2017 use of security robots to deter homeless camps – changed parts of the discussion from theoretical to practical.

In the mid-2010s, much discussion focused on problems of fairness, especially to the humans in the loop who, Madeleine Claire Elish correctly predicted in 2016, would be blamed for failures. More recently, the proliferation of data-gathering devices (sensors, cameras) into everything from truckers’ cabs to agriculture, and the arrival of new algorithmic systems dubbed AI, have raised awareness of the companies behind these technologies. And, latterly, awareness that the technology often diverts attention from the better possibilities of structural change.

But that’s not as cool.

Illustrations: Boston Dynamics’ Atlas robots doing synchronized backflips (via YouTube).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Re-centralizing

But first, a housekeeping update. Net.wars has moved – to a new address and new blogging software. For details, see here. If you read net.wars via RSS, adjust your feed to https://netwars.pelicancrossing.net. Past posts’ old URLs will continue to work, as will the archive index page, which lists every net.wars column back to November 2001. And because of the move: comments are now open for the first time in probably about ten years. I will also shortly set up a mailing list for those who would rather get net.wars by email.

***

This week the Ada Lovelace Institute held a panel discussion of ethics for researchers in AI. Arguably, not a moment too soon.

At Noema magazine, Timnit Gebru writes, as Mary L. Gray and Siddharth Suri have previously, that what today passes for “AI” and “machine learning” is, underneath, the work of millions of poorly paid, marginalized workers who add labels, evaluate content, and provide verification. At Wired, Gebru adds that their efforts are ultimately directed by a handful of Silicon Valley billionaires whose interests are far from what’s good for the rest of us. That would be the “rest of us” who are being used, willingly or not, knowingly or not, as experimental research subjects.

Two weeks ago, for example, a company called Koko ran an experiment offering chatbot-written/human-overseen mental health counseling without informing the 4,000 people who sought help via the “Koko Cares” Discord server. In a Twitter thread, company co-founder Rob Morris said those users rated the bot’s responses highly until they found out a bot had written them.

People can build relationships with anything, including chatbots, as was proved as long ago as 1966 with the creation of the experimental chatbot therapist Eliza. People found Eliza’s responses comforting even though they knew it was a bot. Here, however, informed consent processes seem to have been ignored. Morris’s response, when he was widely criticized for the unethical nature of this little experiment, was to say it was exempt from informed consent requirements because helpers could choose whether to use the chatbot’s responses and Koko had no plan to publish the results.

One would like it to be obvious that *publication* is not the biggest threat to vulnerable people in search of help. One would also like modern technology CEOs to have learned the right lesson from prior incidents such as Facebook’s 2012 experiment, in which it manipulated users’ newsfeeds to study the effect on their moods. Facebook COO Sheryl Sandberg apologized for *how the experiment was communicated*, but not for doing it. At the time, logic suggested that such companies would continue to do the research but stop publishing the results. Though isn’t tweeting publication?

It seems clear that scale is part of the problem here, like the old saying, one death is a tragedy; a million deaths are a statistic. Even the most sociopathic chatbot owner is unlikely to enlist an experimental chatbot to respond to a friend or family member in distress. But once a screen intervenes, the thousands of humans on the other side are just a pile of user IDs; that’s part of how we get so much online abuse. For those with unlimited control over the system we must all look like ants. And who wouldn’t experiment on ants?

In that sense, the efforts of the Ada Lovelace panel to sketch out the diligence researchers should follow are welcome. But the reality of human nature is that it will always be possible to find someone unscrupulous to do unethical research – and the reality of business nature is not to care much about research ethics if the resulting technology will generate profits. Listening to all those earnest, worried researchers left me writing this comment: MBAs need ethics. MBAs, government officials, and anyone else who is in charge of how new technologies are used and whose decisions affect the lives of the people those technologies are imposed upon.

This seemed even more true a day later, at the annual activists’ gathering Privacy Camp. In a panel on the proliferation of surveillance technology at the borders, speakers noted that every new technology that could be turned to helping migrants is instead being weaponized against them. The Border Violence Monitoring Network has collected thousands of such testimonies.

The especially relevant bit came when Hope Barker, a senior policy analyst with BVMN, noted this problem with the forthcoming AI Act: accountability is aimed at developers and researchers, not users.

Granted, technology that’s aborted in the lab isn’t available for abuse. But no technology stays the same after leaving the lab; it gets adapted, altered, updated, merged with other technologies, and turned to uses the researchers never imagined – as Wendy Hall noted in moderating the Ada Lovelace panel. And if we have learned anything from the last 20 years it is that over time technology services enshittify, to borrow Cory Doctorow’s term in a rant which covers the degradation of the services offered by Amazon, Facebook, and soon, he predicts, TikTok.

The systems we call “AI” today have this in common with those services: they are centralized. They are technologies that re-advantage large organizations and governments because they require amounts of data and computing power that are beyond the capabilities of small organizations and individuals to acquire. We can only rent them or be forced to use them. The ur-evil AI, HAL in Stanley Kubrick’s 2001: A Space Odyssey, taught us to fear an autonomous rogue. But the biggest danger with “AIs” of the type we are seeing today, that are being put into decision making and law enforcement, is not the technology, nor the people who invented it, but the expanding desires of those who control it.

Illustrations: HAL, in 2001.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns back to November 2001. Comment here, or follow on Twitter.