The good fight

This week saw a small gathering to celebrate the 25th anniversary (more or less) of the Foundation for Information Policy Research, a think tank led by Cambridge and Edinburgh University professor Ross Anderson. FIPR’s main purpose is to produce tools and information that campaigners for digital rights can use. Obdisclosure: I am a member of its advisory council.

What, Anderson asked those assembled, should FIPR be thinking about for the next five years?

When my turn came, I said something about the burnout that comes to many campaigners after years of fighting the same fights. Digital rights organizations – Open Rights Group, EFF, Privacy International, to name three – find themselves trying to explain the same realities of math and technology decade after decade. Small wonder so many burn out eventually. The technology around the debates about copyright, encryption, and data protection has changed over the years, but in general the fundamental issues have not.

In part, this is because what people want from technology doesn’t change much. A tangential example of this presented itself this week, when I read the following in the New York Times, written by Peter C. Baker about the Beatles’ “new” mash-up recording:

“So while the current legacy-I.P. production boom is focused on fictional characters, there’s no reason to think it won’t, in the future, take the form of beloved real-life entertainers being endlessly re-presented to us with help from new tools. There has always been money in taking known cash cows — the Beatles prominent among them — and sprucing them up for new media or new sensibilities: new mixes, remasters, deluxe editions. But the story embedded in “Now and Then” isn’t “here’s a new way of hearing an existing Beatles recording” or “here’s something the Beatles made together that we’ve never heard before.” It is Lennon’s ideas from 45 years ago and Harrison’s from 30 and McCartney and Starr’s from the present, all welded together into an officially certified New Track from the Fab Four.”

I vividly remembered this particular vision of the future because just a few days earlier I’d had occasion to look it up – a March 1992 interview for Personal Computer World with the ILM animator Steve Williams, who the year before had led the team that produced the liquid metal man for the movie Terminator 2. Williams imagined CGI would become pervasive (as it has):

“…computer animation blends invisibly with live action to create an effect that has no counterpart in the real world. Williams sees a future in which directors can mix and match actors’ body parts at will. We could, he predicts, see footage of dead presidents giving speeches, films starring dead or retired actors, even wholly digital actors. The arguments recently seen over musicians who lip-synch to recordings during supposedly ‘live’ concerts are likely to be repeated over such movie effects.”

Williams’ latest work at the time was on Death Becomes Her. Among his calmer predictions was that as CGI became increasingly sophisticated the boundary between computer-generated characters and enhancements would become invisible. Thirty years on, the big recent excitement has been Harrison Ford’s de-aging for Indiana Jones and the Dial of Destiny, which used CGI, AI, and other tools to digitally swap in his face from 1980s footage.

Side note: in talking about the Ford work to Wired, ILM supervisor Andrew Whitehurst, exactly like Williams in 1992, called the new technology “another pencil”.

Williams also predicted endless legal fights over copyright and other rights. That at least was spot-on; AI and the perpetual reuse of retained footage without further payment are part of what the recent SAG-AFTRA strikes were about.

Yet, the problem here isn’t really technology; it’s the incentives. The businessfolk of Hollywood’s eternal desire is to guarantee their return on investment, and they think recycling old successes is the safest way to do that. Closer to digital rights, law enforcement always wants greater access to private communications; the frustration is that incoming generations of politicians don’t understand the laws of mathematics any better than their predecessors in the 1990s.

Many of the speakers focused on the issue of getting government to listen to and understand the limits of technology. Increasingly, though, a new problem is that, as Bruce Schneier writes in his latest book, A Hacker’s Mind, everyone has learned to think like hackers and subvert the systems they’re supposed to protect. The Silicon Valley mantra of “ask forgiveness, not permission” has become pervasive, whether it’s a technology platform deciding to collect masses of data about us or a police force deciding to stick a live facial recognition pilot next to Oxford Circus tube station. Except no one asks for forgiveness either.

Five years ago, at FIPR’s 20th anniversary, when GDPR was new, Anderson predicted (correctly) that the battles over encryption would move to device access. Today, it’s less clear what’s next. Facial recognition represents a step change; it overrides consent and embeds distrust in our public infrastructure.

If I were to predict the battles of the next five years, I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.

Illustrations: The liquid metal man in Terminator 2 reconstituting itself.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

New phone, who dis?

So I got a new phone. What makes the experience remarkable is that the old phone was a Samsung Galaxy Note 4, which, if Wikipedia is correct, was released in 2014. So the phone was at least eight, probably nine, years old. When you update incrementally, like a man who gets his hair cut once a week, it’s hard to see any difference. When you leapfrog numerous generations of updates, it’s like seeing the man who’s had his first haircut in a year: it’s a shock.

The tl;dr: most of what I don’t like about the switch is because of Google.

There were several reasons why I waited so long. It was a good enough phone and it had a very good camera for its time; I finessed the lack of security updates by not using the phone for functions where it mattered. Also, I didn’t want to give up the disappearing headphone jack, home button, or, especially, user-replaceable battery. The last of those is why I could keep the phone for so long, and it was the biggest deal-breaker.

For that reason, I’ve known for years that the Note’s eventual replacement would likely be a Fairphone, from a Dutch outfit that is doing its best to produce sustainable phones. It’s repairable and user-upgradable (it takes one screwdriver to replace a cracked screen or the camera), and changing the battery takes a second. I had to compromise on the headphone jack, which requires a USB-C dongle. Not having the home button is hard to get used to; I used it constantly. It turns out, though, that it’s even harder to get used to not having the soft button on the bottom left that used to show me recently used apps so I could quickly switch back to the thing I was using a few minutes ago. But that… is software.

The biggest and most noticeable change between Android 6 (the Note 4 got its last software update in 2017) and Android 13 (last week) is the assumptions both Google, as Android’s owner, and the providers of other apps make about what users want. On the Note 4, I had a quick-access button to turn the wifi on and off. Except for the occasional call over Signal, I saw no reason to keep it on to drain the battery unnecessarily. Today, that same switch is buried several layers deep in settings, with apparently no way to move it into the list of quick-access functions. That’s just one example. But no accommodation for my personal quirks can change the sense of being bullied into giving away more data and control than I’d like.

Giving in to Google does, however, mean an easy transfer of your old phone’s contents to your new phone (if transferring the external SD card isn’t enough).

Too late I remembered the name Murena – a company that equips Fairphones with de-Googlified Android. As David Pierce writes at The Verge, that requires a huge effort. Murena has built replacements for the standard Google apps and a cloud system for email, calendars, and productivity software. Even so, Pierce writes, apps are where the effort hits its limit: despite Murena’s attempt to preserve user anonymity, it’s just not possible to download them without interacting with Google, especially when payment is required. And who wants to run their phone without third-party apps? Not even me (although I note that many of those I use can still be sideloaded).

The reality is I would have preferred to wait even longer to make the change. I was pushed by the fact that several times recently the Note has complained that it can’t download email because it was running out of storage space (which is why I would prefer to store everything on an external SD card, but: not an option for email and apps). And on a recent trip to the US, there were numerous occasions where the phone simply didn’t work, even though there shouldn’t be any black spots in places like Boston and San Francisco. A friend suggested that, in all likelihood, older frequency bands were being turned off while the newer replacements were bands the Note couldn’t use. I had forgotten that 5G, which I last thought about in 2018, had been arriving. So: new phone. Resentfully.

This kind of forced wastefulness is one of the things Donald Norman talks about in his new book, Design for a Better World. To some extent, the book is a mea culpa: after decades of writing about how to design things better to benefit us as individuals, Norman has recognized the necessity to rethink and replace human-centered design with humanity-centered design. Sustainability is part of that.

Everything around us is driven by design choices. Building unrepairable phones is a choice, and a destructive one, given the amount of rare materials used inside that wind up in landfills instead of in new phones or some other application. The Guardian’s review of the latest Fairphone asks, “Could this be the first phone to last ten years?” I certainly hope so, but if something takes it down before then it will be an externality like switched-off bands, the end of software updates, or a bank’s decision to require customers use an app for two-factor authentication and then update it so older phones can’t run it. These are, as Norman writes, complex systems in which the incentives are all misplaced. And so: new phone. Largely unnecessarily.

Illustrations: Personally owned 1970s AT&T phone.


The end of ownership

It seems no manufacturer will be satisfied until they have turned everything they make into an ongoing revenue stream. Once, it was enough to sell widgets. Then, you needed to have a line of upgrades and add-ons for your widgets and all your sales personnel were expected to “upsell” at every opportunity. Now, you need to turn some of those upgrades and add-ons into subscription services, and throw in some ads for extra revenue. All those ad-free moments in your life? To you, this is space in which to think your own thoughts. To advertisers, these are golden opportunities that haven’t been exploitable before and should be turned to their advantage. (Years ago, I remember, for example, a speaker at a lunchtime meeting convened by the Internet Advertising Bureau saying with great excitement that viral emails could bring ads into workplaces, which had previously been inaccessible.)

The immediate provocation for this musing is the Chamberlain garage door opener that blocks third-party apps in order to display ads. To be fair, I have no skin in this specific game: I have neither garage door opener nor garage door. I don’t even have a car (any more). But I have used these items, and I therefore feel comfortable in saying that this whole idea sucks.

There are three objectionable aspects. First is the ad itself and the market change it represents. I accept that some apps on my phone show ads, but I accept that because I have so far decided not to pay for them (in part because I don’t want to give my credit card information to Google in order to do so). I also accept them because I have chosen to use the apps. Here, however, the app comes with the garage door opener, which you *have* paid for, and the company is double-dipping by trying to turn it into an ongoing revenue stream; its desire to block third-party apps is entirely to protect that revenue stream. Did you even *want* an app with your garage door opener? Does a garage door need options? My friends who have them seem perfectly happy with the two choices of open or closed, and with a gizmo clipped to their sun visor that just has a physical button to push.

Second is the reported user interface design, which forces you to scroll past the ad to get to the button to open the door. This is theft: Chamberlain is stealing a sliver of your time and patience whenever you need to open your garage door. Both are limited resources.

Third is the loss of control over – ownership of – objects you have ostensibly bought. With few exceptions, it has always been understood that once you’ve bought a physical object it’s yours to do with what you want. Even in the case of physical containers of intellectual property – books, CDs, LPs – you always had the right to resell or give away the physical item and to use it as often as you wanted to. The arrival of digital media forced a clarification: you owned the physical object but not the music, pictures, film, or text encoded on it. The part-pairing discussed here a couple of weeks ago is an early example of the extension of this principle to formerly wholly-owned objects. The more software infiltrates the physical world, the more manufacturers will seek to use that software to control how we use the devices they make.

In the case we began with, Chamberlain’s decision to shut off API access to third parties to protect its own profits mirrors a recent trend in social media such as Reddit and Twitter in response to large language models built on training data scraped from their sites. The upshot in the Chamberlain case is that the garage door openers stop working with home automation systems into which the owners want to integrate them. Chamberlain has called this integration unauthorized usage and complains that said use means a tiny proportion of its customers consumed more than half of the traffic to and from its system. Seems like someone could have designed a technical solution for this.

At Pluralistic, Cory Doctorow lists four ways companies can be stopped from exerting unreasonable post-purchase control: fear of their competition, regulation, technical feasibility, and customer DIY. All four, he writes, have so far failed in this case, not least because Chamberlain is now owned by the private equity firm Blackstone, which has already bought up its competitors. Because there are so many other examples, we can’t dismiss this as a one-off; it’s a trend! Or, in Doctorow’s words, “a vast and deadly rot”.

An early example came from Tesla in 2020, when it disabled Full Self-Drive on a used Model S on the grounds that the customer hadn’t paid for it. Over-the-air software updates give companies this level of control long after purchase.

Doctorow believes a countering movement is underway. I hope so, because writing this has led me to this little imaginary future horror: the guitar that silences itself until you type in a code to verify that you have paid royalties for the song you’re trying to play. Logically, then, all interaction with physical objects could become like waiting through the ads for other shows on DVDs until you could watch the one you paid to see. Life is *really* too short.

Illustrations: Steve (Campbell Scott) shows Linda (Kyra Sedgwick) how much he likes her by offering her a garage door opener in Cameron Crowe’s 1992 film Singles.


The grown-ups

In an article this week in the Guardian, Adrian Chiles asks what decisions today’s parents are making that their kids will someday look back on in horror the way we look back on things from our childhood. Probably his best example is riding in cars without seatbelts (which I’m glad to say I survived). In contrast to his suggestion, I don’t actually think tomorrow’s parents will look back and think they shouldn’t have had smartphones, though it’s certainly true that last year a current parent MP (whose name I’ve lost) gave an impassioned speech opposing the UK’s just-passed Online Safety Act in which she said she had taken risks on the Internet as a teenager that she wouldn’t want her kids to take now.

Some of that, though, is that times change consequences. I knew plenty of teens who smoked marijuana in the 1970s. I knew no one whose parents found them severely ill from overdoing it. Last week, the parent of a current 15-year-old told me he’d found exactly that. His kid had made the classic error (see several 2010s sitcoms) of not understanding how slowly gummies act. Fortunately, marijuana won’t kill you, as the parent found to his great relief after some frenzied online searching. Even in 1972, it was known that consuming marijuana by ingestion (for example, in brownies) made it take effect more slowly. But the marijuana itself, by all accounts, was less potent. It was, in that sense, safer (although: illegal, with all the risks that involves).

The usual excuse for disturbing levels of past parental risk-taking is “We didn’t know any better”. A lot of times that’s true. When today’s parents of teenagers were 12 no one had smartphones; when today’s parents were teens their parents had grown up without Internet access at home; when my parents were teens they didn’t have TV. New risks arrive with every generation, and each new risk requires time to understand the consequences of getting it wrong.

That is, however, no excuse for some of the decisions adults are making about systems that affect all of us. Also this week, and also at the Guardian, Akiko Hart, interim director of Liberty, writes scathingly about government plans to expand the use of live facial recognition to track shoplifters. Under Project Pegasus, shops will use technology provided by Facewatch.

I first encountered Facewatch ten years ago at a conference on biometrics. Even then the company was already talking about “cloud-based crime reporting” in order to deter low-level crime. And even then there were questions about fairness. For how long would shoplifters remain on a list of people to watch closely? What redress was there going to be if the system got it wrong? Facewatch’s attitude seemed to be simply that what the company was doing wasn’t illegal because its customer companies were sharing information across their own branches. What Hart is describing, however, is much worse: a state-backed automated system that will see ten major retailers upload their CCTV images for matching against police databases. Policing minister Chris Philp hopes to expand this into a national shoplifting database including the UK’s 45 million passport photos. Hart suggests instead tackling poverty.

Quite apart from how awful all that is, what I’m interested in here is the increased embedding in public life of technology we already know is flawed and discriminatory. Since 2013, myriad investigations have found the algorithms that power facial recognition to have been trained on unrepresentative databases that make them increasingly inaccurate as the subjects diverge from “white male”.

There are endless examples of misidentification leading to false arrests. Last month, a man who was pulled over on the road in Georgia filed a lawsuit after being arrested and held for several days for a crime he didn’t commit in Louisiana, where he had never been.

In 2021, in a story I’d missed at the time, the University of Illinois at Urbana-Champaign announced it would discontinue using Proctorio, remote proctoring software that monitors students for cheating. The issue: the software frequently fails to recognize non-white faces. In a retail store, this might mean being followed until you leave. In an exam situation, this may mean being accused of cheating and having your results thrown out. A few months later, at Vice, Todd Feathers reported that a student researcher had studied the algorithm Proctorio was using and found its facial detection model failed to recognize black faces more than half the time. Late last year, the Dutch Institute of Human Rights found that using Proctorio could be discriminatory.

The point really isn’t this specific software or these specific cases. The point is more that we have a technology that we know is discriminatory and that we know would still violate human rights if it were accurate…and yet it keeps getting more and more deeply embedded in public systems. None of these systems are transparent enough to tell us what facial identification model they use, or publish benchmarks and test results.

So much of what net.wars is about is avoiding bad technology law that sticks. In this case, it’s bad technology that is becoming embedded in systems that one day will have to be ripped out, and we are entirely ignoring the risks. On that day, our children and their children will look at us, and say, “What were you thinking? You did know better.”

Illustrations: The CCTV camera on George Orwell’s house at 22 Portobello Road, London.


The end of cool

For a good bit of this year’s We Robot, it felt like abstract “AI” – that is, algorithms running on computers with no mobility – had swallowed the robots whose future this conference was invented to think about. This despite a pre-conference visit to Boston Dynamics, which showed off its Atlas robot’s ability to do gymnastics. It’s cute, but is it useful? Your washing machine is smarter, and its intelligence solves real problems like how to use less water.

There’s always some uncertainty about boundaries at this event: is a machine learning decision making system a robot? At the inaugural We Robot in 2012, the engineer Bill Smart summed up the difference: “My iPhone can’t stab me in my bed.” Of course, neither could an early Roomba, which most would agree was the first domestic robot. However, it was also dumb as a floor tile, achieving cleanliness through random repetition rather than intelligent mapping. In the Roomba 1.0 sense, a “robot” is “a device that does boring things so I don’t have to”. Not cool, but useful, and it solves a real problem.

During a session in which participants played a game designed to highlight the conflicts inherent in designing an urban drone delivery system, Lael Odhner offered yet another definition: “A robot is a literary device we use to voice our discomfort with technology.” In the context of an event where participants think through the challenges robots bring to law and policy, this may be the closest approximation.

In the design exercise, our table’s three choices were: fund the FAA (so they can devise and enforce rules and policies), build it as a municipally-owned public service both companies and individuals can use as customers, and ban advertising on the drones for reasons of both safety and offensiveness. A similar exercise last year produced more specific rules, but also led us to realize that a drone delivery service had no benefits over current delivery services.

Much depends on scale. One reason we chose a municipal public service was the scale of noise and environmental impact inevitably generated by multiple competing commercial services. In a paper, Woody Hartzog examined the meaning of “scale”: is scale *more*, or is scale *different*? You can argue, as net.wars often has, that scale *creates* difference, but it’s rarely clear where to place the threshold, or how reaching it changes a technology’s harms or who it makes vulnerable. Ryan Calo and Daniella DiPaola suggested that rather than associate vulnerability with particular classes of people we should see it as variable with circumstances: “Everyone is vulnerable sometimes, and vulnerability is a state that can be created and manipulated toward particular ends.” This seems a more logical and fairer approach.

An aspect of this is that there are two types of rules: harm rules, which empower institutions to limit harm, and power rules, which empower individuals to protect themselves. A possible worked example soon presented itself in Kegan J. Strawn’s and Daniel Sokol’s paper on safety techniques in mobile robots, which suggested copying medical ethics’ consent approach. Then someone described the street scene in which every pedestrian had to give consent to every passing experimental Tesla, possibly an even worse scenario than ad-bearing delivery drones. Pedestrians get nothing out of the situation, and Teslas don’t become safer. What you really want is for car companies not to test the safety of autonomous vehicles on public roads with pedestrians as unwitting crash test dummies.

I try to think every year about how our ideas about integrating robots into society are changing over time. An unusual paper from Maria P. Angel considered this question with respect to privacy scholarship by surveying 1990s writing and 20 years of papers presented at Privacy Law Scholars, the conference whose design We Robot co-founders Calo, Michael Froomkin, and Ian Kerr partly copied. Angel’s conclusion is roughly that the 1990s saw calls for an end to self-regulation, while the 2000s moved from privacy as necessary for individual autonomy and self-determination to collective benefits and, most recently, to its importance for human flourishing.

As Hartzog commented, he came to the first We Robot with the belief that “Robots are magic”, only to encounter Smart’s “really fancy hammers.” And, Smart and Cindy Grimm added in 2018, controlled by sensors that are “late, noisy, and wrong”. Hartzog’s early excitement was shared by many of us; the future looked so *interesting* when it was almost entirely imaginary.

Over time, the robotic future has become more nowish, and has shifted in response to technological development; the discussion has become more about real systems (2022) than imagined future ones. The arrival of real robots on our streets – for example, San Francisco’s 2017 use of security robots to deter homeless camps – changed parts of the discussion from theoretical to practical.

In the mid-2010s, much discussion focused on problems of fairness, especially to humans in the loop, who, Madeleine Claire Elish correctly predicted in 2016 would be blamed for failures. More recently, the proliferation of data-gathering devices (sensors, cameras) into everything from truckers’ cabs to agriculture and the arrival of new algorithmic systems dubbed AI has raised awareness of the companies behind these technologies. And, latterly, that often the technology diverts attention from the better possibilities of structural change.

But that’s not as cool.

Illustrations: Boston Dynamics’ Atlas robots doing synchronized backflips (via YouTube).


Review: Data Driven

Data Driven: Truckers, Technology, and the New Workplace Surveillance
By Karen Levy
Princeton University Press
ISBN: 978-0-6911-7530-0

The strikes in Hollywood show actors and writers in an existential crisis: a highly lucrative industry used to pay them a good middle class living but now has the majority struggling just to survive. In her recent book, Data Driven, Cornell assistant professor Karen Levy finds America’s truckers in a similar plight.

Both groups have had their industries change around them because of new technology. In Hollywood, streaming came along to break the feedback loop that powered a highly successful business model for generations. In trucking, the culprit is electronic logging devices (ELDs), which are changing the profession entirely.

Levy has been studying truckers since 2011. At that point, ELDs were beginning to appear in truckers’ cabs but were purely voluntary. That changed in 2017, when the Federal Motor Carrier Safety Administration’s rule mandating their use came into force. The intention, as always, is reasonably benign: to improve safety by ensuring that truckers on the road remain alert and comply with the regulations governing the hours they’re allowed to work.

As part of this work, Levy has interviewed truckers, family members, and managers, and studied trucker-oriented media such as online forums, radio programs, and magazines. She was also able to examine auditing practices in both analog and digital formats.

Some of her conclusions are worrying. For example, she finds that taking truckers’ paper logs into an office away from the cab allowed auditors more time to study them and greater ability to ask questions about them. ELDs, by contrast, are often wired into the cab, and the auditor must inspect them in situ. Where the paper logs were simple and widely understood, many inspectors struggle with the ELDs’ inconsistent interfaces, and being required to enter what is, after all, the trucker’s personal living space tends to limit the time they spend.

Truckers by and large experience the ELDs as intrusive. Those who have been at the wheel the longest most resent the devaluation of their experience the devices bring. Unlike the paper logs, which remained under the truckers’ control, ELDs often send the data they collect directly to management, who may respond by issuing instructions that override the trucker’s own decisions and on-site information.

Levy’s main point would resonate with those Hollywood strikers. ELDs are being used to correct the genuine problem of tired, and therefore unsafe, truckers. Yet the reason truckers are so tired and take the risk of overworking is the way the industry is structured. Changing how drivers are paid from purely by the mile to including the hours they spend moving their trucks around the yards waiting to unload and other periods of unavoidable delay would be far more effective. Worse, it’s the most experienced truckers who are most alienated by the ELDs’ surveillance. Replacing them with younger, less experienced drivers will not improve road safety for any of us.

The two of us

The-other-Wendy-Grossman-who-is-a-journalist came to my attention in the 1990s by writing a story about something Internettish while a student at Duke University. Eventually, I got email for her (which I duly forwarded) and, once, a transatlantic phone call from a very excited but misinformed PR person. She got married, changed her name, and faded out of my view.

By contrast, Naomi Klein‘s problem has only inflated over time. The “doppelganger” in her new book, Doppelganger: A Trip into the Mirror World, is “Other Naomi” – that is, the American author Naomi Wolf, whose career launched in 1990 with The Beauty Myth. “Other Naomi” has spiraled into conspiracy theories, anti-government paranoia, and wild unscientific theories. Klein is Canadian; her books include No Logo (1999) and The Shock Doctrine (2007). There is, as Klein acknowledges, a lot of *seeming* overlap in that a keyword search might surface both.

I had them confused myself until Wolf’s 2019 appearance on BBC radio, when a historian dished out a live-on-air teardown of the basis of her latest book. This author’s nightmare is the inciting incident Klein believes turned Wolf from liberal feminist author into a right-wing media star. The publisher withdrew and pulped the book, and Wolf herself was globally mocked. What does a high-profile liberal who’s lost her platform do now?

When the covid pandemic came, Wolf embraced every available mad theory, and her liberal past made her a darling of the extremist right-wing media. Increasingly obsessed with following Wolf’s exploits, which often popped up in her online mentions, Klein discovered that social media algorithms were exacerbating the confusion. She began to silence herself, fearing that any response she made would increase the algorithms’ tendency to conflate Naomis. She also abandoned an article deploring Bill Gates’s stance protecting corporate patents instead of spreading vaccines as widely as possible. (The Gates Foundation later changed its position.)

Klein tells this story honestly, admitting to becoming addictively obsessed, promising to stop, then “relapsing” the first time she was alone in her car.

The appearance of overlap through keyword similarities is not limited to the two Naomis, as Klein finds on further investigation. Right-wing media stars like Steve Bannon, who founded Breitbart and served as Donald Trump’s chief strategist during his first months in the White House, wrote this playbook: seize on under-acknowledged legitimate grievances, turn them into right-wing talking points, and recruit the previously-ignored victims as allies and supporters. The lab leak hypothesis, the advice being given by scientific authorities, why shopping malls were open when schools were closed, the profiteering (she correctly calls out the UK), the behavior of corporate pharma – all of these were and are valid topics for investigation, discussion, and debate. Their twisted adoption as right-wing causes made many on the side of public health harden their stance to avoid sounding like “one of them”. The result: words lost their meaning and their power.

These are problems no amount of content moderation or online safety can solve. And even if it could, is it right to ask underpaid workers in what Klein terms the “Shadowlands” to clean up our society’s nasty side so we don’t have to see it?

Klein begins with a single doppelganger, then expands into psychology, movies, TV, and other fiction, and ends by navigating expanding circles; the extreme right-wing media’s “Mirror World” is our society’s Mr Hyde. As she warns, those who live in what a friend termed “my blue bubble” may never hear about the media and commentators she investigates. After Wolf’s disgrace on the BBC, she “disappeared”, in reality going on to develop a much bigger platform in the Mirror World. But “they” know and watch us, and use our blind spots to expand their reach and recruit new and unexpected sectors of the population. Klein writes that she encounters many people who’ve “lost” a family member to the Mirror World.

This was the ground explored in 2015 by the filmmaker Jen Senko, who found the same thing when researching her documentary The Brainwashing of My Dad. Senko’s exploration leads from the 1960s John Birch Society through to Rush Limbaugh and Roger Ailes’s intentional formation of Fox News. Klein here is telling the next stage of that same story. Mirror World is not an accident of technology; it was a plan, and then technology came along and helped build it further in new directions.

As Klein searches for an explanation for what she calls “diagonalism” – the phenomenon that sees a former Obama voter now vote for Trump, or a former liberal feminist shrug at the Dobbs decision – she finds it possible to admire the Mirror World’s inhabitants for one characteristic: “they still believe in the idea of changing reality”.

This is the heart of much of the alienation I see in some friends: those who want structural change say today’s centrist left wing favors the status quo, while those who are more profoundly disaffected dismiss the Bidens and Clintons as almost as corrupt as Trump. The pandemic increased their discontent; it did not take long for early optimistic hopes of “build back better” to fade into “I want my normal”.

Klein ends with hope. As both the US and UK wind toward the next presidential/general election, it’s in scarce supply.

Illustrations: Charlie Chaplin as one of his doppelgangers in The Great Dictator (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: The Gutenberg Parenthesis

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet
By Jeff Jarvis
Bloomsbury Academic
ISBN: 978-1-5013-9482-9

There’s a great quote I can’t trace, in which a singer from whom Sir Walter Scott collected folk songs told him that by printing the songs he had killed them. Printing had, that is, removed each song from the oral culture in which it was passed, often with alterations, from singer to singer. Like pinning down a butterfly.

In The Gutenberg Parenthesis, Jeff Jarvis argues that modern digital culture offers the chance of a return to the collaborative culture that dominated most of human history. Jarvis is not the first to suggest that our legacy media are an anomaly. In his 2013 book Writing on the Wall, Tom Standage calls out the last 150 years of corporate-owned for-profit media as an anomaly in the 2,000-year sweep of social media. In his analogy, the earliest form was “Roman broadband” (slaves) carrying messages back and forth. Standage finds other historical social media analogues in the coffeehouses that hatched the scientific revolution. Machines, both print and broadcast, made us consumers instead of participants. In Jarvis’s account, printing made institutions and nation-states, the same ones that now are failing to control the new paradigm.

The “Gutenberg parenthesis” of Jarvis’s title was coined by Lars Ole Sauerberg, a professor at the University of Southern Denmark, who argues (in, for example, a 2009 paper for the journal Orbis Litterarum) that the arrival of the printing press changed the nature of cognition. Jarvis takes this idea and runs with it: if we are, as he believes, now somewhere in a decades- or perhaps centuries-long process of closing the parenthesis – that is, exiting the era of print bracketed by Gutenberg’s invention of the printing press and the arrival of digital media – what comes next?

To answer this question, Jarvis begins by examining the transition *into* the era of printing. The invention of movable type and printing presses by themselves brought a step down in price and a step up in scale – what had once been single copies available only to people rich enough to pay a scribe suddenly became hundreds of copies that were still expensive. It took two centuries to arrive at the beginnings of copyright law, and then the industrial revolution to bring printing and corporate ownership at today’s scale.

Jarvis goes on to review the last two centuries of increasingly centralized and commercialized publishing. The institutions print brought provided authority that enabled them to counter misinformation effectively. In our new world, where these institutions are being challenged, many more voices can be heard – good, for obvious reasons of social justice and fairness, but unfortunate in terms of the spread of misinformation, malinformation, and disinformation. Jarvis believes we need to build new institutions that can enable the former and inhibit the latter. Exactly what those will look like is left as an exercise for the reader in the times to come. Could Gutenberg have predicted Entertainment Weekly?

Guarding the peace

Police are increasingly attempting to prevent crime by using social media targeting tools to shape public behavior, says a new report from the Scottish Institute for Policing Research (PDF) written by a team of academic researchers led by Ben Collier at the University of Edinburgh. There is no formal regulation of these efforts, and the report found many examples of what it genteelly calls “unethical practice”.

On the one hand, “behavioral change marketing” seems an undeniably clever use of new technological tools. If bad actors can use targeted ads to scam, foment division, and incite violence, why shouldn’t police use them to encourage the opposite? The tools don’t care whether you’re a Russian hacker targeting 70-plus white pensioners with anti-immigrant rhetoric or a charity trying to reach vulnerable people to offer help. Using them is a logical extension of the drive toward preventing, rather than solving, crime. Governments have long used PR techniques to influence the public, from benign health PSAs on broadcast media to Theresa May’s notorious, widely criticised, and unsuccessful 2013 campaign of van ads telling illegal immigrants to go home.

On the other hand, it sounds creepy as hell. Combining police power with poorly-evidenced assumptions about crime and behavior and risk and the manipulation and data gathering of surveillance capitalism…yikes.

The idea of influence policing derives at least in part from Cass R. Sunstein‘s and Richard H. Thaler‘s 2008 book Nudge. The “nudge theory” it promoted argued that the use of careful design (“choice architecture”) could push people into making more desirable decisions.

The basic contention seems unarguable; using design to push people toward decisions they might not make by themselves is the basis of many large platforms’ design decisions. Dark patterns are all about that.

Sunstein and Thaler published their theory at the post-financial crisis moment when governments were looking to reduce costs. As early as 2010, the UK’s Cabinet Office set up the Behavioural Insights Team to improve public compliance with government policies. The “Nudge Unit” has been copied widely across the world.

By 2013, it was being criticized for forcing job seekers to fill out a scientifically invalid psychometric test. In 2021, Observer columnist Sonia Sodha called its record “mixed”, deploring the expansion of nudge theory into complex, intractable social problems. In 2022, new research cast doubt on the whole approach, finding that nudges have little effect on personal behavior.

The SIRP report cites the Government Communications Service, the outgrowth of decades of government work to gain public compliance with policy. The GCS itself notes its incorporation of marketing science and other approaches common in the commercial sector. Its 7,000 staff work in departments across government.

This has all grown up alongside the increasing adoption of digital marketing practices across the UK’s public sector, including the tax authorities (HMRC), the Department of Work and Pensions, and especially, the Home Office – and alongside the rise of sophisticated targeting tools for online advertising.

The report notes: “Police are able to develop ‘patchwork profiles’ built up of multiple categories provided by ad platforms and detailed location-based categories using the platform targeting categories to reach extremely specific groups.”

The report’s authors used the Meta Ad Library to study the ads, the audiences and profiles police targeted, and the cost. London’s Metropolitan Police, which a recent scathing report found endemically racist and misogynist, was an early adopter and is the most heavily studied user of digitally targeted ads on Meta.

Many of the sample campaigns these organizations run sound mostly harmless. Campaigns intended to curb domestic violence, for example, may aim at encouraging bystanders to come forward with concerns. Others focus on counter-radicalisation and security themes or, increasingly, preventing online harms and violence against women and girls.

As a particular example of the potential for abuse, the report calls out the Home Office Migrants on the Move campaign, a collaboration with a “migration behavior change” agency called Seefar. This targeted people in France seeking asylum in the UK and attempted to frighten them out of trying to cross the Channel in small boats. The targeting was highly specific, with many ads aimed at as few as 100 to 1,000 people, chosen for their language and recent travel in or through Brussels and Calais.

The report’s authors raise concerns: the harm implicit in frightening already-extremely vulnerable people, the potential for damaging their trust in authorities to help them, and the privacy implications of targeting such specific groups. In the report’s example, Arabic speakers in Brussels might see the Home Office ads but their French neighbors would not – and those Arabic speakers would be unlikely to be seeking asylum. The Home Office’s digital equivalent of May’s van ads, therefore, would be seen only by a selection of microtargeted individuals.

The report concludes: “We argue that this campaign is a central example of the potential for abuse of these methods, and the need for regulation.”

The report makes a number of recommendations including improved transparency, formalized regulation and oversight, better monitoring, and public engagement in designing campaigns. One key issue is coming up with better ways of evaluating the results. Surprise, surprise: counting clicks, which is what digital advertising largely sells as a metric, is not a useful way to measure social change.

All of these arguments make sense. Improving transparency in particular seems crucial, as does working with the relevant communities. Deterring crime doesn’t require tricks and secrecy; it requires collaboration and openness.

Illustrations: Theresa May’s notorious van ad telling illegal immigrants to go home.


Book review: Beyond Measure

Beyond Measure: The Hidden History of Measurement
Author: James Vincent
Publisher: Faber and Faber
ISBN: 978-0-571-35421-4

In 2022, then-government minister Jacob Rees-Mogg proposed that Britain should return to imperial measurements – pounds, ounces, yay Brexit! This was a ship that had long since sailed; 40-something friends learned only the metric system at school. Even those old enough to remember imperial measures had little nostalgia for them.

As James Vincent explains in Beyond Measure: The Hidden History of Measurement, and as most of us assume instinctively, measuring physical objects began with comparisons to pieces of the human body: feet, hands, cubits (elbow to fingertip), fathoms (the span of outstretched arms). Other forms of measurement were functional, such as the Irish collop, the amount of land needed to graze one cow. Such imprecise measurements had their benefits, such as convenient availability and immediately understandable context-based value.

Quickly, though, the desire to trade led to the need for consistency, which in turn fed the emergence of centralized state power. The growth of science increased the pressure for more and more consistent and precise measurements – Vincent spends a chapter on the surprisingly difficult quest to pin down a number we now learn as children: the temperature at which water boils. Perversely, though, each new generation of more precise measurement reveals new errors that require even more precise measurement to correct.

The history of measurement is also the history of power. Surveying the land enabled governments to decide its ownership; the world-changing discovery of statistics brought an understanding of social trends and, with it, the empowerment of the governments that could afford to amass the biggest avalanches of numbers.

Perhaps the quirkiest and most unexpected material is Vincent’s chapter on Standard Reference Materials. At the US National Institute of Standards and Technology, Vincent finds carefully studied jars of peanut butter and powdered radioactive human lung. These, it turns out, provide standards against which manufacturers can check their products.

Often, Vincent observes, changes in measurement systems accompany moments of social disruption. The metric system, for example, was born in France at the time of the revolution. Defining units of measurement in terms of official weights and measures made standards egalitarian rather than dependent on one man’s body parts. By 2018, when Vincent visits the official kilo weight and meter stick in Paris, however, even that seemed too elite. Today, both kilogram and meter are defined in terms of constants of nature – the meter, for example, is defined as the distance light travels in 1/299,792,458th of a second (itself now defined in terms of the transition frequency of caesium-133). These are units that anyone with appropriate equipment can derive at any time without needing to check them against a single stick in a vault. Still elite, but a much larger elite.

But still French, which may form part of Rees-Mogg’s objection to it. And, possibly, as Vincent finds some US Republicans have complained, *communist* because of its global adoption. Nonetheless, and despite anti-metric sentiments expressed even by futurists like Stewart Brand, the US is still more metric than most people think. The road system’s miles and retail stores’ pounds and ounces are mostly a veneer; underneath, industry and science have voted for global compatibility – and the federal government has, since 1893, defined feet and inches by metric units.