Faking it

I have finally figured out what benefit exTwitter gets from its new owner’s decision to strip out the headlines from linked third-party news articles: you cannot easily tell the difference between legitimate links and ads. Both have big unidentified pictures, and if you forget to look for the little “Ad” label at the top right or check the poster’s identity to make sure it’s someone you actually follow, it’s easy to inadvertently lessen the financial losses accruing to said owner by – oh, the shame and horror – clicking on that ad. This is especially true because the site has taken to injecting these ads with increasing frequency into the carefully curated feed that until recently didn’t have this confusion. Reader, beware.

***

In all the discussion of deepfakes and AI-generated bullshit texts, did anyone bring up the possibility of datafakes? Nature highlights a study in which researchers created a fake database to provide evidence for concluding that one of two surgical procedures is better than the other. This is nasty stuff. The rising number of retracted papers has already exposed serious problems with peer review (problems that are not new, but are getting worse). To name just a couple: reviewers are unpaid and often overworked, and what they look for are scientific advances, not fraud.

In the UK, Ben Goldacre has spearheaded initiatives to improve the quality of published research. A crucial part of this is ensuring that researchers state in advance the hypothesis they’re testing and publish the results of all trials, not just the ones that produce the researcher’s (or funder’s) preferred result.

Science is the best process we have for establishing an edifice of reliable knowledge. We desperately need it to work. As the dust settles on the week of madness at OpenAI, whose board was supposed to care more about safety than about its own existence, we need to get over being distracted by the dramas and the fears of far-off fantasy technology and focus on the fact that the people running the biggest computing projects are, by and large, not paying attention to the real and imminent problems their technology is bringing.

***

Callum Cant reports at the Guardian that Deliveroo has won a UK Supreme Court ruling that its drivers are self-employed and accordingly do not have the right to bargain collectively for higher pay or better working conditions. Deliveroo apparently won this ruling because of a technicality – its insertion of a clause that allows drivers to send a substitute in their place, an option that is rarely used.

Cant notes the health and safety risks to the drivers themselves, but what about the rest of us? A driver in their tenth hour of a seven-day-a-week grind doesn’t just put themselves at risk; they’re a risk to everyone they encounter on the roads. The way these things are going, if safety becomes a problem, the likelihood is that instead of raising wages to allow drivers a more reasonable schedule and some rest, these companies will turn to surveillance technology, as Amazon has.

In the US, this is what’s happened to truck drivers, and, as Karen Levy documents in her book, Data Driven, it’s counterproductive. Installing electronic logging devices in truckers’ cabs has led older, more experienced, and, above all, *safer* drivers to leave the profession, to be replaced by younger, less experienced, and cheaper drivers with a higher appetite for risk. As Levy writes, improved safety won’t come from surveilling exhausted drivers; what’s needed is structural change to create better working conditions.

***

The UK’s covid inquiry has been livestreaming its hearings on government decision making for the last few weeks, and pretty horrifying they are, too. That’s true even if you don’t include former deputy chief medical officer Jonathan Van-Tam’s account of the threats of violence aimed at him and his family. They needed police protection for nine months and were advised to move out of their house – but didn’t want to leave their cat. Will anyone take the job of protecting public health if this is the price?

Chris Whitty, the UK’s Chief Medical Officer, said the UK was “woefully underprepared”, locked down too late, and made decisions too slowly. He was one of the polite ones.

Former special adviser Dominic Cummings (from whom no one expected politeness) said everyone called Boris Johnson a trolley because he was as inconsistent as a shopping trolley with the inevitable wheel pointing in the wrong direction.

The government’s chief scientific adviser, Patrick Vallance, had kept a contemporaneous diary, which provided his unvarnished thoughts at the time, some of which were read out. Among them: Boris Johnson was obsessed with older people accepting their fate, unable to grasp the concept of doubling times or comprehend the graphs on the dashboard, and intermittently uncertain whether “the whole thing” was a mirage.

Our leader envy in April 2020 seems correctly placed. To be fair, though: Whitty and Vallance, citing their interactions with their counterparts in other countries, both said that most countries had similar problems. And for the same reason: the leaders of democratic countries are generally not well-versed in science. As the Economist’s health policy editor, Natasha Loder, warned in early 2022: elect better leaders. Ask, she said, before you vote, “Are these serious people?” Words to keep in mind as we head toward the elections of 2024.

Illustrations: The medium Mina Crandon and the “materialized spirit hand” she produced during seances.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

The end of ownership

It seems no manufacturer will be satisfied until they have turned everything they make into an ongoing revenue stream. Once, it was enough to sell widgets. Then, you needed to have a line of upgrades and add-ons for your widgets, and all your sales personnel were expected to “upsell” at every opportunity. Now, you need to turn some of those upgrades and add-ons into subscription services, and throw in some ads for extra revenue. All those ad-free moments in your life? To you, they are space in which to think your own thoughts. To advertisers, they are golden opportunities that haven’t been exploitable before and should be turned to their advantage. (Years ago, for example, I remember a speaker at a lunchtime meeting convened by the Internet Advertising Bureau saying with great excitement that viral emails could bring ads into workplaces, which had previously been inaccessible.)

The immediate provocation for this musing is the Chamberlain garage door opener that blocks third-party apps in order to display ads. To be fair, I have no skin in this specific game: I have neither garage door opener nor garage door. I don’t even have a car (any more). But I have used these items, and I therefore feel comfortable in saying that this whole idea sucks.

There are three objectionable aspects. First is the ad itself and the market change it represents. I accept that some apps on my phone show ads, but I accept that because I have so far decided not to pay for them (in part because I don’t want to give my credit card information to Google in order to do so). I also accept them because I have chosen to use the apps. Here, however, the app comes with the garage door opener, which you *have* paid for, and the company is double-dipping by trying to turn it into an ongoing revenue stream; its desire to block third-party apps is entirely to protect that revenue stream. Did you even *want* an app with your garage door opener? Does a garage door need options? My friends who have them seem perfectly happy with the two choices of open or closed, and with a gizmo clipped to their sun visor that just has a physical button to push.

Second is the reported user interface design, which forces you to scroll past the ad to get to the button to open the door. This is theft: Chamberlain is stealing a sliver of your time and patience whenever you need to open your garage door. Both are limited resources.

Third is the loss of control over – ownership of – objects you have ostensibly bought. With few exceptions, it has always been understood that once you’ve bought a physical object it’s yours to do with what you want. Even in the case of physical containers of intellectual property – books, CDs, LPs – you always had the right to resell or give away the physical item and to use it as often as you wanted to. The arrival of digital media forced a clarification: you owned the physical object but not the music, pictures, film, or text encoded on it. The part-pairing discussed here a couple of weeks ago is an early example of the extension of this principle to formerly wholly-owned objects. The more software infiltrates the physical world, the more manufacturers will seek to use that software to control how we use the devices they make.

In the case we began with, Chamberlain’s decision to shut off third-party API access to protect its own profits mirrors recent moves by social media sites such as Reddit and Twitter, which restricted API access in response to large language models built on training data scraped from their sites. The upshot in the Chamberlain case is that the garage door openers stop working with the home automation systems into which their owners want to integrate them. Chamberlain has called this integration unauthorized usage and complains that it means a tiny proportion of its customers consume more than half of the traffic to and from its system. Seems like someone could have designed a technical solution for this.
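The obvious candidate is ordinary rate limiting. As a purely illustrative sketch (this is not Chamberlain’s actual API, and the names and numbers are invented), a per-account token-bucket limiter would throttle the handful of accounts doing heavy home-automation polling without cutting anyone off or showing anyone an ad:

```python
# Hypothetical sketch of per-client rate limiting, not Chamberlain's real system.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key: heavy polling gets throttled, ordinary app use never notices.
buckets: dict[str, TokenBucket] = {}

def handle_request(api_key: str) -> int:
    bucket = buckets.setdefault(api_key, TokenBucket(rate_per_sec=0.2, burst=10))
    return 200 if bucket.allow() else 429  # 429 Too Many Requests
```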

At Pluralistic, Cory Doctorow lists four ways companies can be stopped from exerting unreasonable post-purchase control: fear of their competition, regulation, technical feasibility, and customer DIY. All four, he writes, have so far failed in this case, not least because Chamberlain is now owned by the private equity firm Blackstone, which has already bought up its competitors. Because there are so many other examples, we can’t dismiss this as a one-off; it’s a trend! Or, in Doctorow’s words, “a vast and deadly rot”.

An early example came from Tesla in 2020, when it disabled Full Self-Drive on a used Model S on the grounds that the customer hadn’t paid for it. Over-the-air software updates give companies this level of control long after purchase.

Doctorow believes a countering movement is underway. I hope so, because writing this has led me to this little imaginary future horror: the guitar that silences itself until you type in a code to verify that you have paid royalties for the song you’re trying to play. Logically, then, all interaction with physical objects could become like waiting through the ads for other shows on DVDs until you could watch the one you paid to see. Life is *really* too short.

Illustrations: Steve (Campbell Scott) shows Linda (Kyra Sedgwick) how much he likes her by offering her a garage door opener in Cameron Crowe’s 1992 film Singles.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Guarding the peace

Police are increasingly attempting to prevent crime by using social media targeting tools to shape public behavior, says a new report from the Scottish Institute for Policing Research (PDF) written by a team of academic researchers led by Ben Collier at the University of Edinburgh. There is no formal regulation of these efforts, and the report found many examples of what it genteelly calls “unethical practice”.

On the one hand, “behavioral change marketing” seems an undeniably clever use of new technological tools. If bad actors can use targeted ads to scam, foment division, and incite violence, why shouldn’t police use them to encourage the opposite? The tools don’t care whether you’re a Russian hacker targeting 70-plus white pensioners with anti-immigrant rhetoric or a charity trying to reach vulnerable people to offer help. Using them is a logical extension of the drive toward preventing, rather than solving, crime. Governments have long used PR techniques to influence the public, from benign health PSAs on broadcast media to Theresa May’s notorious, widely criticised, and unsuccessful 2013 campaign of van ads telling illegal immigrants to go home.

On the other hand, it sounds creepy as hell. Combining police power with poorly-evidenced assumptions about crime and behavior and risk and the manipulation and data gathering of surveillance capitalism…yikes.

The idea of influence policing derives at least in part from Cass R. Sunstein‘s and Richard H. Thaler‘s 2008 book Nudge. The “nudge theory” it promoted argued that the use of careful design (“choice architecture”) could push people into making more desirable decisions.

The basic contention seems unarguable: using design to push people toward decisions they might not make by themselves is the basis of many large platforms’ design decisions. Dark patterns are all about that.

Sunstein and Thaler published their theory at the post-financial crisis moment when governments were looking to reduce costs. As early as 2010, the UK’s Cabinet Office set up the Behavioural Insights Team to improve public compliance with government policies. The “Nudge Unit” has been copied widely across the world.

By 2013, it was being criticized for forcing job seekers to fill out a scientifically invalid psychometric test. In 2021, Observer columnist Sonia Sodha called its record “mixed”, deploring the expansion of nudge theory into complex, intractable social problems. In 2022, new research cast doubt on the whole idea, finding that nudges have little effect on personal behavior.

The SIRP report cites the Government Communications Service, the outgrowth of decades of government work to gain public compliance with policy. The GCS itself notes its incorporation of marketing science and other approaches common in the commercial sector. Its 7,000 staff work in departments across government.

This has all grown up alongside the increasing adoption of digital marketing practices across the UK’s public sector, including the tax authorities (HMRC), the Department for Work and Pensions, and, especially, the Home Office – and alongside the rise of sophisticated targeting tools for online advertising.

The report notes: “Police are able to develop ‘patchwork profiles’ built up of multiple categories provided by ad platforms and detailed location-based categories using the platform targeting categories to reach extremely specific groups.”

The report’s authors used the Meta Ad Library to study the ads, the audiences and profiles police targeted, and the cost. London’s Metropolitan Police, which a recent scathing report found endemically racist and misogynist, was an early adopter and is the heaviest user of digitally targeted ads on Meta among the forces studied.

Many of the sample campaigns these organizations run sound mostly harmless. Campaigns intended to curb domestic violence, for example, may aim at encouraging bystanders to come forward with concerns. Others focus on counter-radicalisation and security themes or, increasingly, on preventing online harms and violence against women and girls.

As a particular example of the potential for abuse, the report calls out the Home Office’s “Migrants on the Move” campaign, a collaboration with a “migration behavior change” agency called Seefar. This targeted people in France seeking asylum in the UK and attempted to frighten them out of trying to cross the Channel in small boats. The targeting was highly specific, with many ads aimed at as few as 100 to 1,000 people, chosen for their language and recent travel in or through Brussels and Calais.

The report’s authors raise concerns: the harm implicit in frightening already-extremely vulnerable people, the potential for damaging their trust in authorities to help them, and the privacy implications of targeting such specific groups. In the report’s example, Arabic speakers in Brussels might see the Home Office ads but their French neighbors would not – and those Arabic speakers would be unlikely to be seeking asylum. The Home Office’s digital equivalent of May’s van ads, therefore, would be seen only by a selection of microtargeted individuals.

The report concludes: “We argue that this campaign is a central example of the potential for abuse of these methods, and the need for regulation.”

The report makes a number of recommendations, including improved transparency, formalized regulation and oversight, better monitoring, and public engagement in designing campaigns. One key issue is coming up with better ways of evaluating the results. Surprise, surprise: counting clicks, which is what digital advertising largely sells as a metric, is not a useful way to measure social change.

All of these arguments make sense. Improving transparency in particular seems crucial, as does working with the relevant communities. Deterring crime doesn’t require tricks and secrecy; it requires collaboration and openness.

Illustrations: Theresa May’s notorious van ad telling illegal immigrants to go home.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon