Review: Data Driven

Data Driven: Truckers, Technology, and the New Workplace Surveillance
By Karen Levy
Princeton University Press
ISBN: 978-0-691-17530-0

The strikes in Hollywood show actors and writers in an existential crisis: a highly lucrative industry that used to pay them a good middle-class living now leaves the majority struggling just to survive. In her recent book, Data Driven, Cornell assistant professor Karen Levy finds America’s truckers in a similar plight.

Both groups have had their industries change around them because of new technology. In Hollywood, streaming came along to break the feedback loop that powered a highly successful business model for generations. In trucking, the culprit is electronic logging devices (ELDs), which are changing the profession entirely.

Levy has been studying truckers since 2011. At that point, ELDs were beginning to appear in truckers’ cabs but were purely voluntary. That changed in 2017, when the Federal Motor Carrier Safety Administration’s rule mandating their use came into force. The intention, as always, is reasonably benign: to improve safety by ensuring that truckers on the road remain alert and comply with the regulations governing the hours they’re allowed to work.

As part of this work, Levy has interviewed truckers, family members, and managers, and studied trucker-oriented media such as online forums, radio programs, and magazines. She was also able to examine auditing practices in both analog and digital formats.

Some of her conclusions are worrying. For example, she finds that taking truckers’ paper logs into an office away from the cab allowed auditors more time to study them and greater latitude to ask questions about them. ELDs, by contrast, are often wired into the cab, and the auditor must inspect them in situ. Where the paper logs were readily understood, many inspectors struggle with the ELDs’ inconsistent interfaces, and being required to enter what is, after all, the trucker’s personal living space tends to limit the time they spend.

Truckers by and large experience the ELDs as intrusive. Those who have been at the wheel the longest most resent the devaluation of their experience the devices bring. Unlike the paper logs, which remained under the truckers’ control, ELDs often send the data they collect directly to management, who may respond by issuing instructions that override the trucker’s own decisions and on-site information.

Levy’s main point would resonate with those Hollywood strikers. ELDs are being used to correct the genuine problem of tired, and therefore unsafe, truckers. Yet the reason truckers are so tired and take the risk of overworking is the way the industry is structured. Changing how drivers are paid – from purely by the mile to including the hours they spend moving their trucks around the yards, waiting to unload, and sitting through other periods of unavoidable delay – would be far more effective. Worse, it’s the most experienced truckers who are most alienated by the ELDs’ surveillance. Replacing them with younger, less experienced drivers will not improve road safety for any of us.

The two of us

The-other-Wendy-Grossman-who-is-a-journalist came to my attention in the 1990s when, as a student at Duke University, she wrote a story about something Internettish. Eventually, I got email for her (which I duly forwarded) and, once, a transatlantic phone call from a very excited but misinformed PR person. She got married, changed her name, and faded out of my view.

By contrast, Naomi Klein’s problem has only inflated over time. The “doppelganger” in her new book, Doppelganger: A Trip into the Mirror World, is “Other Naomi” – that is, the American author Naomi Wolf, whose career launched in 1990 with The Beauty Myth. “Other Naomi” has spiraled into conspiracy theories, anti-government paranoia, and wild unscientific theories. Klein is Canadian; her books include No Logo (1999) and The Shock Doctrine (2007). There is, as Klein acknowledges, a lot of *seeming* overlap, in that a keyword search might surface both.

I had them confused myself until Wolf’s 2019 appearance on BBC radio, when a historian dished out a live-on-air teardown of the basis of her latest book. This author’s nightmare is the inciting incident Klein believes turned Wolf from liberal feminist author into a right-wing media star. The publisher withdrew and pulped the book, and Wolf herself was globally mocked. What does a high-profile liberal who’s lost her platform do now?

When the covid pandemic came, Wolf embraced every available mad theory, and her liberal past made her a darling of the extremist right-wing media. Increasingly obsessed with following Wolf’s exploits, which often popped up in her own online mentions, Klein discovered that social media algorithms were exacerbating the confusion. She began to silence herself, fearing that any response she made would increase the algorithms’ tendency to conflate the Naomis. She also abandoned an article deploring Bill Gates’s stance of protecting corporate patents instead of spreading vaccines as widely as possible. (The Gates Foundation later changed its position.)

Klein tells this story honestly, admitting to becoming addictively obsessed, promising to stop, then “relapsing” the first time she was alone in her car.

The appearance of overlap through keyword similarities is not limited to the two Naomis, as Klein finds on further investigation. YouTube stars like Steve Bannon, who ran Breitbart and served as Donald Trump’s chief strategist during his first months in the White House, wrote this playbook: seize on under-acknowledged legitimate grievances, turn them into right-wing talking points, and recruit the previously ignored victims as allies and supporters. The lab leak hypothesis, the advice being given by scientific authorities, why shopping malls were open when schools were closed, the profiteering (she correctly calls out the UK), the behavior of corporate pharma – all of these were and are valid topics for investigation, discussion, and debate. Their twisted adoption as right-wing causes made many on the side of public health harden their stance to avoid sounding like “one of them”. The result: words lost their meaning and their power.

These are problems no amount of content moderation or online safety can solve. And even if it could, is it right to ask underpaid workers in what Klein terms the “Shadowlands” to clean up our society’s nasty side so we don’t have to see it?

Klein begins with a single doppelganger, then navigates expanding circles through psychology, movies, TV, and other fiction; the extreme right-wing media’s “Mirror World” is our society’s Mr Hyde. As she warns, those who live in what a friend termed “my blue bubble” may never hear about the media and commentators she investigates. After Wolf’s disgrace on the BBC, she “disappeared”, in reality going on to develop a much bigger platform in the Mirror World. But “they” know and watch us, and use our blind spots to expand their reach and recruit new and unexpected sectors of the population. Klein writes that she encounters many people who’ve “lost” a family member to the Mirror World.

This was the ground explored in 2015 by the filmmaker Jen Senko, who found the same thing when researching her documentary The Brainwashing of My Dad. Senko’s exploration leads from the 1960s John Birch Society through to Rush Limbaugh and Roger Ailes’s intentional formation of Fox News. Klein here is telling the next stage of that same story. The Mirror World is not an accident of technology; it was a plan, and then technology came along and helped build it further in new directions.

As Klein searches for an explanation for what she calls “diagonalism” – the phenomenon that sees a former Obama voter now vote for Trump, or a former liberal feminist shrug at the Dobbs decision – she finds it possible to admire the Mirror World’s inhabitants for one characteristic: “they still believe in the idea of changing reality”.

This is the heart of much of the alienation I see in some friends: those who want structural change say today’s centrist left wing favors the status quo, while those who are more profoundly disaffected dismiss the Bidens and Clintons as almost as corrupt as Trump. The pandemic increased their discontent; it did not take long for early optimistic hopes of “build back better” to fade into “I want my normal”.

Klein ends with hope. As both the US and UK wind toward the next presidential/general election, it’s in scarce supply.

Illustrations: Charlie Chaplin as one of his doppelgangers in The Great Dictator (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: The Gutenberg Parenthesis

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet
By Jeff Jarvis
Bloomsbury Academic
ISBN: 978-1-5013-9482-9

There’s a great quote I can’t trace in which a source singer from whom Sir Walter Scott collected folk songs told him he’d killed their songs by printing them. Printing had, that is, removed the song from the oral culture of repeated transmission, often with alterations, from singer to singer. Like pinning down a butterfly.

In The Gutenberg Parenthesis, Jeff Jarvis argues that modern digital culture offers the chance of a return to the collaborative culture that dominated most of human history. Jarvis is not the first to suggest that our legacy media are an anomaly. In his 2013 book Writing on the Wall, Tom Standage calls out the last 150 years of corporate-owned for-profit media as an anomaly in the 2,000-year sweep of social media. In his analogy, the earliest form was “Roman broadband” (slaves) carrying messages back and forth. Standage finds other historical social media analogues in the coffeehouses that hatched the scientific revolution. Machines, both print and broadcast, made us consumers instead of participants. In Jarvis’s account, printing made institutions and nation-states, the same ones that now are failing to control the new paradigm.

The “Gutenberg parenthesis” of Jarvis’s title was coined by Lars Ole Sauerberg, a professor at the University of Southern Denmark, who argues (in, for example, a 2009 paper for the journal Orbis Litterarum) that the arrival of the printing press changed the nature of cognition. Jarvis takes this idea and runs with it: if we are, as he believes, now somewhere in a decades- or perhaps centuries-long process of closing the parenthesis – that is, exiting the era of print bracketed by Gutenberg’s invention of the printing press and the arrival of digital media – what comes next?

To answer this question, Jarvis begins by examining the transition *into* the era of printing. The invention of movable type and printing presses by themselves brought a step down in price and a step up in scale – what had once been single copies available only to people rich enough to pay a scribe suddenly became hundreds of copies that were still expensive. It took two centuries to arrive at the beginnings of copyright law, and then the industrial revolution to bring printing and corporate ownership at today’s scale.

Jarvis goes on to review the last two centuries of increasingly centralized and commercialized publishing. The institutions print brought provided authority that enabled them to counter misinformation effectively. In our new world, where these institutions are being challenged, many more voices can be heard – good, for obvious reasons of social justice and fairness, but unfortunate in terms of the spread of misinformation, malinformation, and disinformation. Jarvis believes we need to build new institutions that can enable the former and inhibit the latter. Exactly what those will look like is left as an exercise for the reader in the times to come. Could Gutenberg have predicted Entertainment Weekly?

Surveillance machines on wheels

After much wrangling, and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety Bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that, given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.


Review: Sorry, Sorry, Sorry

Sorry, Sorry, Sorry: The Case for Good Apologies
By Marjorie Ingall and Susan McCarthy
Gallery Books
ISBN: 978-1-9821-6349-5

Years ago, a friend of mine deplored apologies: “People just apologize because they want you to like them,” he observed.

That’s certainly true at least some of the time, but as Marjorie Ingall and Susan McCarthy argue at length in their book Sorry, Sorry, Sorry, well-constructed and presented apologies can make the world a better place. For the recipient, they can remove the sting of old wrongs; for the giver, they can ease the burden of old shames.

What you shouldn’t do, when apologizing, is what self-help groups sometimes describe as “plan the outcome”. That is, you present your apology and you take your chances. Follow Ingall and McCarthy’s six steps to construct your apology, then hope for, but do not demand, forgiveness, and don’t mess the whole thing up by concluding with, “So, we’re good?”

Their six steps to a good apology:
1. Say you’re sorry.
2. For what you did.
3. Show you understand why it was bad.
4. Only explain if you need to; don’t make excuses.
5. Say why it won’t happen again.
6. Offer to make up for it.
Six and a half. Listen.

It’s certainly true that many apologies don’t have the desired effect. Often, it’s because the apology itself is terrible. Through their SorryWatch blog, Ingall and McCarthy have been collecting and analyzing bad public apologies for years (obDisclosure: I send in tips on apologies in tennis and British politics). Many of these appear in the book, organized into chapters on apologies from doctors and medical establishments, large corporations, and governments and nation-states. Alongside these are chapters on the psychology of apologies, teaching children to apologize, and the practical realities of gender, race, and other disparities. Women, for example, are more likely to apologize well, but take a greater risk when they do – and are less likely to be forgiven.

Some templates for *bad* apologies when you’ve done something hurtful (do not try this at home!): “I’m sorry if…”, “I’m sorry that you felt…”, “I regret…”, and, of course, the often-used classic, “This is not who we are.”

These latter are, in Ingall and McCarthy’s parlance, “apology-shaped objects”, but not actually apologies. They explain this in detail with plenty of wit – and no fewer than five Bad Apology bingo cards.

Even for readers of the blog, there’s new information. I was particularly interested to learn that malpractice lawyers are likely wrong in telling doctors not to apologize because admitting fault invites a lawsuit. A 2006 Harvard hospital system report found little evidence for this contention – as long as the apologies are good ones. It’s the failure to communicate and the refusal to take responsibility that are much more anger-provoking. In other words, the problem there, as everywhere else, is *bad* apologies.

A lot of this ought to be common sense. But as Ingall and McCarthy make plain, it may be sense but it’s not as common as any of us would like.

Doom cyberfuture

Midway through this year’s gikii miniconference for pop culture-obsessed Internet lawyers, Jordan Hatcher proposed that generational differences are the key to understanding the huge gap between the Internet pioneers, who saw regulation as the enemy, and the current generation, who are generally pushing for it. While this is a bit too pat – it’s easy to think of Millennial libertarians and I’ve never thought of Boomers as against regulation, just, rationally, against bad Internet law that sticks – it’s an intriguing idea.

Hatcher, because this is gikii and no idea can be presented without a science fiction tie-in, illustrated this with 1990s movies, which spread the “DCF-84 virus” – that is, “doom cyberfuture-84”. The “84” is not chosen for Orwell but for the year William Gibson’s Neuromancer was published. Boomers – he mentioned John Perry Barlow, born 1947, and Lawrence Lessig, born 1961 – were instead infected with the “optimism virus”.

It’s not clear which 1960s movies might have seeded us with that optimism. You could certainly make the case that 1968’s 2001: A Space Odyssey ends on a hopeful note (despite an evil intelligence out to kill humans along the way), but you don’t even have to pick a different director to find dystopia: I see your 2001 and give you Dr Strangelove (1964). Even Woodstock (1970) is partly dystopian; the consciousness of the Vietnam war permeates every rain-soaked frame. But so does the belief that peace could win: so, a wash.

For younger people’s pessimism, Hatcher cited 1995’s Johnny Mnemonic (based on a Gibson short story) and Strange Days.

I tend to think that if 1990s people are more doom-laden than 1960s people it has more to do with real life. Boomers were born in a time of economic expansion and relatively affordable education and housing, and when they protested a war the government eventually listened. Millennials were born in a time when housing and education meant a lifetime of debt, and when millions of them protested a war they were ignored.

In any case, Hatcher is right about the stratification of demographic age groups. This is particularly noticeable in social media use; you can often date people’s arrival on the Internet by which communications medium they prefer. Over dinner, I commented on the nuisance of typing on a phone versus a real keyboard, and two younger people laughed at me: so much easier to type on a phone! They were among the crowd whose papers studied influencers on TikTok (Taylor Annabell, Thijs Kelder, Jacob van de Kerkhof, Haoyang Gui, and Catalina Goanta) and the privacy dangers of dating apps (Tima Otu Anwana and Paul Eberstaller), the kinds of subjects I rarely engage with because I am a creature of text, like most journalists. Email and the web feel like my native homes in a way that apps, game worlds, and video services never will. That dates me both chronologically and by my first experiences of the online world (1991).

Most years at this event there’s a new show or movie that fires many people’s imagination. Last year it was Upload with a dash of Severance. This year, real technological development overwhelmed fiction, and the star of the show was generative AI and large language models. Besides my paper with Jon Crowcroft, there was one from Marvin van Bekkum, Tim de Jonge, and Frederik Zuiderveen Borgesius that compared the science fiction risks of AI – Skynet, Roko’s basilisk, and an ordering of Asimov’s Laws that puts obeying orders above not harming humans (see XKCD, above) – to the very real risks of the “AI” we have: privacy, discrimination, and environmental damage.

Other AI papers included one by Colin Gavaghan, who asked whether it actually matters if you can’t tell that the entity communicating with you is an AI. Is that what you really need to know? You can see his point: if you’re being scammed, the fact of the scam matters more than the nature of the perpetrator, though your feelings about it may be quite different.

A standard explanation of what put the “science” in science fiction (or the “speculative” in “speculative fiction”) used to be that the authors ask, “What if?” What if a planet had six suns whose interplay meant that darkness came only once every 1,000 years? Would the reaction really be as Ralph Waldo Emerson imagined it? (Isaac Asimov’s Nightfall) What if a new link added to the increasingly complex Boston MTA accidentally turned the system into a Mobius strip? (A Subway Named Mobius, by Armin Joseph Deutsch) And so on.

In that sense, gikii is often speculative law, thought experiments that tease out new perspectives. What if Prime Day becomes a culturally embedded religious holiday (Megan Rae Blakely)? What if the EU’s trademark system applied in the Star Trek universe (Simon Sellers)? What if, as in Max Gladstone’s Craft Sequence books, law is practical magic (Antonia Waltermann)? In the trademark example, time travel is a problem, as competing interests can travel further and further back to get the first registration. In the latter…well, I’m intrigued by the idea that a law making dumping sewage in England’s rivers illegal could physically stop it from happening without all the pesky apparatus of law enforcement and parliamentary hearings.

Waltermann concluded by suggesting that to some extent law *is* magic in our world, too. A useful reminder: be careful what law you wish for because you just may get it. Boomer!

Illustrations: Part of XKCD‘s analysis of Asimov’s Laws of Robotics.


Small data

Shortly before this gets posted, Jon Crowcroft and I will have presented this year’s offering at Gikii, the weird little conference that crosses law, media, technology, and pop culture. This is, as I understand it, more or less what we may have said, with some added explanation for the slightly less technical audience I imagine will read this.

Two years ago, a team of four researchers – Timnit Gebru, Emily Bender, Margaret Mitchell (writing as Shmargaret Shmitchell), and Angelina McMillan-Major – wrote a now-famous paper called On the Dangers of Stochastic Parrots (PDF) calling into question the usefulness of the large language models (LLMs) that have caused so much ruckus this year. The “Stochastic Four” argued instead for small models built on carefully curated data: less prone to error, less exploitative of people’s data, less damaging to the planet. Gebru got fired over this paper; Google also fired Mitchell soon afterwards. Two years later, neural networks pioneer Geoff Hinton quit Google in order to voice similar concerns.

Despite the hype, LLMs have many problems. They are fundamentally an extractive technology and are resource-intensive. Building LLMs requires massive amounts of training data; so far, the companies have been unwilling to acknowledge their sources, perhaps because (as is happening already) they fear copyright suits.

More important from a technical standpoint is the issue of model collapse; that is, models degrade when they begin to ingest synthetic AI-generated data instead of human input. We’ve seen this before, with Google Flu Trends, which degraded rapidly as incoming new search data included many searches on flu-like symptoms that weren’t actually flu, and others that simply reflected the frequency of local news coverage. “Data pollution”, as LLM-generated data fills the web, will mean that the web becomes an increasingly useless source of training data for future generations of generative AI. Lots more noise, drowning out the signal (in the photo above, the signal would be the parrot).
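As a toy illustration of that feedback loop (my own sketch, not anything from the Stochastic Parrots paper): treat a “language model” as nothing but a token-frequency table, and retrain it each generation only on text sampled from the previous generation. Rare tokens that fail to be sampled vanish for good, so the vocabulary can only shrink – a bare-bones analogue of model collapse.

```python
import random
from collections import Counter

random.seed(0)

def train(corpus):
    """'Train' a model: estimate token probabilities from a corpus."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n):
    """'Synthetic data': sample n tokens from the model's distribution."""
    toks = list(model)
    weights = [model[t] for t in toks]
    return random.choices(toks, weights=weights, k=n)

# Generation 0: "human" text with a Zipf-like long tail of 50 distinct tokens
human = [f"tok{i}" for i in range(50) for _ in range(50 // (i + 1))]
model = train(human)
print("gen 0 vocabulary:", len(model))

for gen in range(1, 21):
    # Retrain purely on the previous generation's synthetic output
    model = train(generate(model, 200))

# The long tail has been sampled away and can never come back
print("gen 20 vocabulary:", len(model))
```

The one-way door is the point: once a token drops out of the distribution, no amount of further sampling restores it, which is why curated human data matters.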

Instead, if we follow the lead of the Stochastic Four, the more productive approach is small data – small, carefully curated datasets that train models to match specific goals. Far less resource-intensive, far fewer issues with copyright, appropriation, and extraction.

We know what the LLM future looks like in outline: big, centralized services, because no one else will be able to amass enough data. In that future, surveillance capitalism is an essential part of data gathering. Small language model (SLM) futures could look quite different: decentralized, with realigned incentives. At one point, we wanted to suggest that small data could bring the end of surveillance capitalism; that’s probably an overstatement. But small data could certainly create an ecosystem in which the case for mass data collection would be less compelling.

Jon and I imagined four primary alternative futures: federation, personalization, some combination of those two, and paradigm shift.

Precursors to a federated small data future already exist, including customer service chatbots and predictive text assistants. In this future, we could imagine personalized LLM servers designed to serve specific needs.

An individualized future might look something like I suggested here in March: a model that fits in your pocket that is constantly updated with material of your own choosing. Such a device might be the closest yet to Vannevar Bush’s 1945 idea of the Memex (PDF), updated for the modern era by automating the dozens of secretary-curators he imagined doing the grunt work of labeling and selection. That future again has precursors in techniques for sharing the computation but not the data, a design we see proposed for health care, where the data is too sensitive to share unless there’s a significant public interest (as in pandemics or very rare illnesses), or in other data analysis designs intended to protect privacy.
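A minimal sketch of what “sharing the computation but not the data” can mean (an illustrative toy of federated averaging, not any specific health-care system): each data holder trains locally and ships only model parameters to a coordinator, never the raw records.

```python
def local_step(weights, data, lr=0.1):
    """One local pass of gradient descent for a toy mean-estimation 'model'."""
    w = weights
    for x in data:
        w -= lr * (w - x)   # gradient of the squared error 0.5 * (w - x)^2
    return w

def federated_round(global_w, silos):
    """Each silo updates a copy of the model on-site; only weights are shared."""
    updates = [local_step(global_w, data) for data in silos]
    return sum(updates) / len(updates)   # the server averages the updates

# Three hypothetical hospitals' sensitive readings, which never leave the premises
silos = [[1.0, 1.2, 0.9], [2.0, 2.1], [1.5, 1.4, 1.6, 1.5]]

w = 0.0
for _ in range(50):
    w = federated_round(w, silos)
print(f"global model: {w:.3f}")   # settles near 1.47, between the silo means
```

The coordinator only ever sees three floating-point numbers per round, which is the design property that makes this family of techniques attractive when the underlying data is too sensitive to pool.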

In 2007, the science fiction writer Charles Stross suggested something like this, though he imagined it as a comprehensive life log, which he described as a “google for real life”. So this alternative future would look something like Stross’s pocket $10 life log with enhanced statistics-based data analytics.

Imagining what a paradigm shift might look like is much harder. That’s the kind of thing science fiction writers do; it’s 16 years since Stross gave that life log talk. However, in his 2016 history of advertising, The Attention Merchants, Columbia professor Tim Wu argued that industrialization was the vector that made advertising and its grab for our attention part of commerce. A hundred and fifty-odd years later, the centralizing effects of industrialization are being challenged, starting with energy (renewables and local power generation) and social media (the fediverse). Might language models also play their part in bringing a new, more collaborative and cooperative society?

It is, in other words, just possible that the hot new technology of 2023 is simply a dead end bringing little real change. It’s happened before. There have been, as Wu recounts, counter-moves and movements before, but they didn’t have the technological affordances of our era.

In the Q&A that followed, Miranda Mowbray pointed out that companies are trying to implement the individualized model, but that it’s impossible to do unless there are standardized data formats, and even then hard to do at scale.

Illustrations: Spot the parrot seen in a neighbor’s tree.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Power cuts

In the latest example of corporate destruction, the Guardian reports on the disturbing trend in which streaming services like Disney and Warner Bros Discovery are deleting finished, even popular, shows for financial reasons. It’s like Douglas Adams’ rock star Hotblack Desiato spending a year dead for tax reasons.

Given that consumers’ budgets are stretched so thin that many are reevaluating the streaming services they’re paying for, you would think this would be the worst possible time to delete popular entertainments. Instead, the industry seems possessed by a death wish, making its offerings *less* attractive. Even worse, the promise these services appeared to offer showrunners was creative freedom and broad, permanent access to their work. The news that Disney+ is canceling even finished shows (Nautilus) shortly before their scheduled release in order to pay less *tax* should send a chill down every creator’s spine. No one wants to spend years of their life – for almost *any* amount of money – making things that wind up in the corporate equivalent of the warehouse at the end of Raiders of the Lost Ark.

It’s time, as the Masked Scheduler suggested recently on Mastodon, for the emergence of modern equivalents of creator-founded studios United Artists and Desilu.

***

Many of us were skeptical about Meta’s Oversight Board; it was easy to predict that Facebook would use it to avoid dealing with the PR fallout from controversial cases, but never relinquish control. And so it is proving.

This week, Meta overruled the Board’s recommendation of a six-month suspension of the Facebook account belonging to former Cambodian prime minister Hun Sen. At issue was a video of one of Hun Sen’s speeches, which everyone agreed incited violence against his opposition. Meta has kept the video up on the grounds of “newsworthiness”; Meta also declined to follow the Board’s recommendation to clarify its rules for public figures in “contexts in which citizens are under continuing threat of retaliatory violence from their governments”.

In the Platformer newsletter Casey Newton argues that the Board’s deliberative process is too slow to matter – it took eight months to decide this case, too late to save the election at stake or deter the political violence that has followed. Newton also concludes from the list of decisions that the Board is only “nibbling round the edges” of Meta’s policies.

A company with shareholders, a business model, and a king is never going to let an independent group make decisions that will profoundly shape its future. From Kate Klonick’s examination, we know the Board members are serious people prepared to think deeply about content moderation and its discontents. But they were always in a losing position. Now, even they must know that.

***

It should go without saying that anything that requires an Internet connection should be designed for connection failures, especially when the connected devices are required to operate things in the physical world. The downside was made clear by a 2017 incident, when a lost signal meant a Tesla-owning venture capitalist couldn’t restart his car. Or the one in 2021, when a bunch of Tesla owners found their phone app couldn’t unlock their car doors. Tesla’s solution both times was to tell car owners to make sure they always had their physical car keys. Which, fine, but then why have an app at all?

Last week, Bambu 3D printers began printing unexpectedly when they were disconnected from the cloud. The software managing the queue of print jobs lost the ability to monitor them, causing some to be restarted multiple times. Given the heat and the extruded material 3D printers generate, this was dangerous to both the printers and their surroundings.

At TechRadar, Bambu’s PR acknowledges this: “It is difficult to have a cloud service 100% reliable all the time, but we should at least have designed the system more carefully to avoid such embarrassing consequences.” As TechRadar notes, if only embarrassment were the worst risk.

So, new rule: before installation, test every new “smart” device by blocking its Internet connection to see how it behaves. Of course, companies should do this themselves, but as we’ve seen, you can’t rely on that either.
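On a home network whose router runs Linux, that rule can be approximated with an ordinary firewall. A minimal sketch, assuming iptables is available and that the device’s address and your LAN range are the ones shown (both assumptions; substitute your own):

```shell
# Hypothetical test: cut a smart device off from the Internet while
# leaving local-network access intact, then watch how it behaves.
DEVICE_IP=192.168.1.50   # the gadget's LAN address (assumption)
LAN=192.168.0.0/16       # your local network range (assumption)

# Drop anything the device sends to destinations outside the LAN.
iptables -I FORWARD -s "$DEVICE_IP" ! -d "$LAN" -j DROP

# ...exercise the device for a while, then restore connectivity:
iptables -D FORWARD -s "$DEVICE_IP" ! -d "$LAN" -j DROP
```

If the device keeps working sensibly, or at least fails safe, with that rule in place, it passes; if it locks up, demands the cloud for basic functions, or starts jobs on its own, you have learned something useful before installing it.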

***

Finally, in “be careful what you legislate for”, Canada is discovering the downside of C-18, which became law in June and requires the biggest platforms to pay for the Canadian news content they host. Google and Meta warned all along that they would stop hosting Canadian news rather than pay for it. Experts like law professor Michael Geist predicted that the bill would merely serve to dramatically cut traffic to news sites.

On August 1, Meta began adding blocks for news links on Facebook and Instagram. A coalition of Canadian news outlets quickly asked the Competition Bureau to mount an inquiry into Meta’s actions. At TechDirt Mike Masnick notes the irony: first legacy media said Meta’s linking to news was anticompetitive; now they say not linking is anticompetitive.

However, there are worse consequences. Prime minister Justin Trudeau complains that Meta’s news block is endangering Canadians, who can’t access or share local up-to-date information about the ongoing wildfires.

In a sensible world, people wouldn’t rely on Facebook for their news, politicians would write legislation with greater understanding, and companies like Meta would wield their power responsibly. In *this* world, we have a perfect storm.

Illustrations: XKCD’s Dependency.


Guarding the peace

Police are increasingly attempting to prevent crime by using social media targeting tools to shape public behavior, says a new report from the Scottish Institute for Policing Research (PDF), written by a team of academic researchers led by Ben Collier at the University of Edinburgh. There is no formal regulation of these efforts, and the report found many examples of what it genteelly calls “unethical practice”.

On the one hand, “behavioral change marketing” seems an undeniably clever use of new technological tools. If bad actors can use targeted ads to scam, foment division, and incite violence, why shouldn’t police use them to encourage the opposite? The tools don’t care whether you’re a Russian hacker targeting 70-plus white pensioners with anti-immigrant rhetoric or a charity trying to reach vulnerable people to offer help. Using them is a logical extension of the drive toward preventing, rather than solving, crime. Governments have long used PR techniques to influence the public, from benign health PSAs on broadcast media to Theresa May’s notorious, widely criticized, and unsuccessful 2013 campaign of van ads telling illegal immigrants to go home.

On the other hand, it sounds creepy as hell. Combining police power with poorly-evidenced assumptions about crime and behavior and risk and the manipulation and data gathering of surveillance capitalism…yikes.

The idea of influence policing derives at least in part from Cass R. Sunstein’s and Richard H. Thaler’s 2008 book Nudge. The “nudge theory” it promoted argued that careful design (“choice architecture”) could push people into making more desirable decisions.

The basic contention seems unarguable; using design to push people toward decisions they might not make by themselves is the basis of many large platforms’ design decisions. Dark patterns are all about that.

Sunstein and Thaler published their theory at the post-financial crisis moment when governments were looking to reduce costs. As early as 2010, the UK’s Cabinet Office set up the Behavioural Insights Team to improve public compliance with government policies. The “Nudge Unit” has been copied widely across the world.

By 2013, it was being criticized for forcing job seekers to fill out a scientifically invalid psychometric test. In 2021, Observer columnist Sonia Sodha called its record “mixed”, deploring the expansion of nudge theory into complex, intractable social problems. In 2022, new research cast doubt on the whole approach, finding that nudges have little effect on personal behavior.

The SIRP report cites the Government Communications Service, the outgrowth of decades of government work to gain public compliance with policy. The GCS itself notes its incorporation of marketing science and other approaches common in the commercial sector. Its 7,000 staff work in departments across government.

This has all grown up alongside the increasing adoption of digital marketing practices across the UK’s public sector, including the tax authorities (HMRC), the Department of Work and Pensions, and especially, the Home Office – and alongside the rise of sophisticated targeting tools for online advertising.

The report notes: “Police are able to develop ‘patchwork profiles’ built up of multiple categories provided by ad platforms and detailed location-based categories using the platform targeting categories to reach extremely specific groups.”

The report’s authors used the Meta Ad Library to study the ads, the audiences and profiles police targeted, and the cost. London’s Metropolitan Police, which a recent scathing report found endemically racist and misogynist, was an early adopter and is the heaviest user of digitally targeted ads on Meta among the forces studied.

Many of the campaigns these organizations run sound mostly harmless. Campaigns intended to curb domestic violence, for example, may aim at encouraging bystanders to come forward with concerns. Others focus on counter-radicalisation and security themes or, increasingly, on preventing online harms and violence against women and girls.

As a particular example of the potential for abuse, the report calls out the Home Office Migrants on the Move campaign, a collaboration with a “migration behavior change” agency called Seefar. This targeted people in France seeking asylum in the UK and attempted to frighten them out of trying to cross the Channel in small boats. The targeting was highly specific, with many ads aimed at as few as 100 to 1,000 people, chosen for their language and recent travel in or through Brussels and Calais.

The report’s authors raise concerns: the harm implicit in frightening already-extremely vulnerable people, the potential for damaging their trust in authorities to help them, and the privacy implications of targeting such specific groups. In the report’s example, Arabic speakers in Brussels might see the Home Office ads but their French neighbors would not – and those Arabic speakers would be unlikely to be seeking asylum. The Home Office’s digital equivalent of May’s van ads, therefore, would be seen only by a selection of microtargeted individuals.

The report concludes: “We argue that this campaign is a central example of the potential for abuse of these methods, and the need for regulation.”

The report makes a number of recommendations including improved transparency, formalized regulation and oversight, better monitoring, and public engagement in designing campaigns. One key issue is coming up with better ways of evaluating the results. Surprise, surprise: counting clicks, which is what digital advertising largely sells as a metric, is not a useful way to measure social change.

All of these arguments make sense. Improving transparency in particular seems crucial, as does working with the relevant communities. Deterring crime doesn’t require tricks and secrecy; it requires collaboration and openness.

Illustrations: Theresa May’s notorious van ad telling illegal immigrants to go home.


Review: Should You Believe Wikipedia?

Should You Believe Wikipedia? Online Communities and the Construction of Knowledge
By Amy S. Bruckman
Publisher: Cambridge University Press
Print publication year: 2022
ISBN: 978-1-108-78070-4

Every Internet era has had its new-thing obsession. For a time in the mid-1990s, it was “community”. Every business, some industry thinkers insisted, would need to build a community of customers, suppliers, and partners. Many tried, and the next decade saw the proliferation of blogs, web boards, and, soon, multi-player online games. We learned that every such venture of any size attracts abuse that requires human moderators to solve. We learned that community does not scale. Then came Facebook and other modern social media, fueled by mobile phones, and the business model became data collection to support advertising.

Back at the beginning, Amy S. Bruckman, now a professor at Georgia Tech but then a student at MIT, set up the education-oriented MOOSE Crossing, in which children could collaborate on building objects as a way of learning to code. For 20 years, she has taught a course on designing communities. In Should You Believe Wikipedia?, Bruckman distills the lessons she’s learned over all that time, combining years of practical online experience with readable theoretical analysis based on sociology, psychology, and epistemology. Whether or not to trust Wikipedia is just one chapter in her study of online communities and the issues they pose.

Like pubs, cafes, and town squares, online communities are third places – that is, neutral ground where people can meet on equal terms. By that standard, many popular blogs, which tend to be personal or promotional, are clearly not neutral; nor is the X formerly known as Twitter. Third places also need to be enclosed but inviting, visible from surrounding areas, and to offer affordances for activity. In that sense, two of the most successful online communities are Wikipedia and OpenStreetMap, both of which pursue a common enterprise that contributors can feel is of global value. Facebook is home to probably hundreds of thousands of communities – families, activists, support groups, and so on – but is itself too big, too diffuse, and too lacking in shared purpose to be a community. Bruckman also cites open source software projects and citizen science as examples of productive communities.

Bruckman’s book has arrived at a moment that we may someday see as a watershed. Numerous factors – Elon Musk’s takeover and remaking of Twitter, debates about regulation and antitrust, increased privacy awareness – are making many people reevaluate what they want from online social spaces. It is a moment when new experiments might thrive.

Something like that is needed, Bruckman concludes: people are not being well served by the free market’s profit motives and current business models. She would like to see more of the Internet populated by non-profits, but elides the key hard question: what are the sustainable models for supporting such endeavors? Mozilla, one of the open source software-building communities she praises, is sustained by payments from Google, making it still vulnerable to the dictates of shareholders, albeit at one remove. It remains an open question if the Fediverse, currently chiefly represented by Mastodon, can grow and prosper in the long term under its present structure of volunteer administrators running their own servers and relying on users’ donations to pay expenses. Other established commercial community hosts, such as Reddit, where Bruckman is a moderator, have long failed to find financial sustainability.

Bruckman never quite answers the question in the title. It reflects the skepticism at Wikipedia’s founding that an encyclopedia edited by anyone who wanted to participate could be any good. As she explains, however, the fact that every page has its Talk page that details disputes and exposes prior versions provides transparency the search engines don’t offer. It may not be clear if we *should* believe Wikipedia, whose quality varies depending on the subject, but she does make clear why we *can* when we do.