Relativity

“Status: closed,” the website read. It gave the time as 10:30 p.m.

Except it wasn’t. It was 5:30 p.m., and the store was very much open. The website, instead of consulting the time zone the store – I mean, the store’s particular branch whose hours and address I had looked up – was in, was taking the time from my laptop. Which I hadn’t bothered to switch from Britain to the US east coast because I can subtract five hours in my head, and why bother?
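The fix is simple enough that it's worth spelling out. A minimal sketch, in Python, of doing it right: the "open now?" question should be answered on the store's clock, not the visitor's. The store's time zone and opening hours below are hypothetical, invented for illustration.

```python
# Sketch of the fix: evaluate "open now?" in the *store's* time zone,
# not the visitor's. Hours and zone here are made up for illustration.
from datetime import datetime, time
from zoneinfo import ZoneInfo

STORE_TZ = ZoneInfo("America/New_York")  # the branch's own time zone
OPEN, CLOSE = time(9, 0), time(21, 0)    # hypothetical hours: 9 a.m. to 9 p.m.

def store_is_open(now: datetime) -> bool:
    """Convert an aware datetime to the store's local clock before comparing."""
    local = now.astimezone(STORE_TZ)
    return OPEN <= local.time() < CLOSE

# 10:30 p.m. in London is 5:30 p.m. in New York: the store is open.
london_evening = datetime(2024, 1, 15, 22, 30, tzinfo=ZoneInfo("Europe/London"))
print(store_is_open(london_evening))  # True
```

The point of the sketch: as long as the comparison converts through the branch's own zone, the visitor's laptop settings are irrelevant, which is exactly how it should be.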

Years ago, I remember writing a rant (which I now cannot find) about the “myness” of modern computers: My Computer, My Documents. My account. And so on, like a demented two-year-old who needed to learn to share. The notion that the time on my laptop determined whether or not the store was open had something of the same feel: the computational universe I inhabit is designed to revolve around me, and any dispute with reality is someone else’s problem.

Modern social media have hardened this approach. I say “modern” because back in the days of bulletin board systems, information services, and Usenet, postings were time- and date-stamped with when they were sent, time zone specified. Now, every post is labelled “2m” or “30s” or “1d”, so the actual date and time are hidden behind their relationship to “now”. It’s like those maps that rotate along with you so that wherever you’re pointed physically is at the top. I guess it works for some people, but I find it disorienting; instead of the map orienting itself to me, I want to orient myself to the map. This seems to me my proper (infinitesimal) place in the universe.
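For the curious, the “2m”/“1d” style of labelling is a trivial bit of arithmetic, and writing it out shows what it discards. This is a generic sketch, not how any particular site implements it; note that the conversion is one-way, since the label can’t be inverted back to the absolute, zone-stamped date a Usenet header carried.

```python
# A minimal sketch of now-relative labelling ("30s", "2m", "1d").
# The absolute timestamp goes in; only its distance from "now" comes out.
from datetime import datetime, timezone

def relative_label(posted: datetime, now: datetime) -> str:
    """Collapse an absolute timestamp into a 'time since now' blob."""
    seconds = int((now - posted).total_seconds())
    if seconds < 60:
        return f"{seconds}s"
    if seconds < 3600:
        return f"{seconds // 60}m"
    if seconds < 86400:
        return f"{seconds // 3600}h"
    return f"{seconds // 86400}d"

posted = datetime(2023, 11, 2, 14, 0, tzinfo=timezone.utc)
now = datetime(2023, 11, 4, 14, 0, tzinfo=timezone.utc)
print(relative_label(posted, now))  # "2d" – the date itself is gone
```

Two different posts made days apart can end up with the same label, and the label itself goes stale the moment it is rendered: everything is relative to the reader’s “now”.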

All of this leads up to the revival of software agents. This was a Big Idea in the late 1990s/early 2000s, when it was commonplace to think that the era of having to make appointments and book train tickets was almost over. Instead, software agents configured with your preferences would do the negotiating for you. Discussions of this sort of thing died away as the technology never arrived. Generative AI has brought this idea back, at least to some extent, particularly in the financial area, where smart contracts can be used to set rules and then run automatically. I think only people who never have to worry about being able to afford anything will like this. But they may be the only ones the “market” cares about.

Somewhere during the time when software agents were originally mooted, I happened to sit at a conference dinner with the University of Maryland human-computer interaction expert Ben Shneiderman. There are, he said, two distinct schools of thought in software. In one, software is meant to adapt to the human using it – think of predictive text on smartphones as an example. In the other, software is consistent, and while using it may be repetitive, you always know that x command or action will produce y result. If I remember correctly, both Shneiderman and I were of the “want consistency” school.

Philosophically, though, these twin approaches have something in common with seeing the universe as if the sun went around the earth as against the earth going around the sun. The first of those makes our planet and, by extension, us far more important in the universe than we really are. The second cuts us down to size. No surprise, then, if the techbros who build these things, like the Catholic church in Galileo’s day, prefer the former.

***

Politico has started the year by warning that the UK is seeking to expand its surveillance regime even further by amending the 2016 Investigatory Powers Act. Unnoticed in the run-up to Christmas, the industry body techUK sent a letter to “express our concerns”. The short version: the bill expands the definition of “telecommunications operator” to include non-UK providers when operating outside the UK; allows the Home Office to require companies to seek permission before making changes to a privately and uniquely specified list of services; and the government wants to whip it through Parliament as fast as possible.

No, no, Politico reports the Home Office told the House of Lords, it supports innovation and isn’t threatening encryption. These are minor technical changes. But: “public safety”. With the ink barely dry on the Online Safety Act, here we go again.

***

As data breaches go, the one recently reported by 23andMe is alarming. By using passwords exposed in previous breaches (“credential stuffing”) to break into 14,000 accounts, attackers gained access to 6.9 million account profiles. The reason is reminiscent of the Cambridge Analytica scandal, where access to a few hundred thousand Facebook accounts was leveraged to obtain the data of millions: people turned on “DNA Relatives” to allow themselves to be found by those searching for genetic relatives. The company, which afterwards turned on a requirement for two-factor authentication, is fending off dozens of lawsuits by blaming the users for reusing passwords. According to Gizmodo, the legal messiness is considerable, as the company recently changed its terms and conditions to make arbitration more difficult and litigation almost impossible.

There’s nothing good to say about a data breach like this or a company that handles such sensitive data with such disdain. But it’s yet one more reason why putting yourself at the center of the universe is bad hoodoo.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The good fight

This week saw a small gathering to celebrate the 25th anniversary (more or less) of the Foundation for Information Policy Research, a think tank led by Cambridge and Edinburgh University professor Ross Anderson. FIPR’s main purpose is to produce tools and information that campaigners for digital rights can use. Obdisclosure: I am a member of its advisory council.

What, Anderson asked those assembled, should FIPR be thinking about for the next five years?

When my turn came, I said something about the burnout that comes to many campaigners after years of fighting the same fights. Digital rights organizations – Open Rights Group, EFF, Privacy International, to name three – find themselves trying to explain the same realities of math and technology decade after decade. Small wonder so many burn out eventually. The technology around the debates about copyright, encryption, and data protection has changed over the years, but in general the fundamental issues have not.

In part, this is because what people want from technology doesn’t change much. A tangential example of this presented itself this week, when I read the following in the New York Times, written by Peter C Baker about the Beatles’ new mash-up recording:

“So while the current legacy-I.P. production boom is focused on fictional characters, there’s no reason to think it won’t, in the future, take the form of beloved real-life entertainers being endlessly re-presented to us with help from new tools. There has always been money in taking known cash cows — the Beatles prominent among them — and sprucing them up for new media or new sensibilities: new mixes, remasters, deluxe editions. But the story embedded in “Now and Then” isn’t “here’s a new way of hearing an existing Beatles recording” or “here’s something the Beatles made together that we’ve never heard before.” It is Lennon’s ideas from 45 years ago and Harrison’s from 30 and McCartney and Starr’s from the present, all welded together into an officially certified New Track from the Fab Four.”

I vividly remembered this particular vision of the future because just a few days earlier I’d had occasion to look it up – a March 1992 interview for Personal Computer World with the ILM animator Steve Williams, who the year before had led the team that produced the liquid metal man for the movie Terminator 2. Williams imagined CGI would become pervasive (as it has):

“…computer animation blends invisibly with live action to create an effect that has no counterpart in the real world. Williams sees a future in which directors can mix and match actors’ body parts at will. We could, he predicts, see footage of dead presidents giving speeches, films starring dead or retired actors, even wholly digital actors. The arguments recently seen over musicians who lip-synch to recordings during supposedly ‘live’ concerts are likely to be repeated over such movie effects.”

Williams’ latest work at the time was on Death Becomes Her. Among his calmer predictions was that as CGI became increasingly sophisticated the boundary between computer-generated characters and enhancements would become invisible. Thirty years on, the big recent excitement has been Harrison Ford’s de-aging for Indiana Jones and the Dial of Destiny, which used CGI, AI, and other tools to digitally swap in his face from 1980s footage.

Side note: in talking about the Ford work to Wired, ILM supervisor Andrew Whitehurst, exactly like Williams in 1992, called the new technology “another pencil”.

Williams also predicted endless legal fights over copyright and other rights. That at least was spot-on; AI and the perpetual reuse of retained footage without further payment is part of what the recent SAG-AFTRA strikes were about.

Yet, the problem here isn’t really technology; it’s the incentives. The businessfolk of Hollywood’s eternal desire is to guarantee their return on investment, and they think recycling old successes is the safest way to do that. Closer to digital rights, law enforcement always wants greater access to private communications; the frustration is that incoming generations of politicians don’t understand the laws of mathematics any better than their predecessors in the 1990s.

Many of the speakers focused on the issue of getting government to listen to and understand the limits of technology. Increasingly, though, a new problem is that, as Bruce Schneier writes in his latest book, A Hacker’s Mind, everyone has learned to think like hackers and subvert the systems they’re supposed to protect. The Silicon Valley mantra of “ask forgiveness, not permission” has become pervasive, whether it’s a technology platform deciding to collect masses of data about us or a police force deciding to stick a live facial recognition pilot next to Oxford Circus tube station. Except no one asks for forgiveness either.

Five years ago, at FIPR’s 20th anniversary, when GDPR was new, Anderson predicted (correctly) that the battles over encryption would move to device access. Today, it’s less clear what’s next. Facial recognition represents a step change; it overrides consent and embeds distrust in our public infrastructure.

If I were to predict the battles of the next five years, I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.

Illustrations: The liquid metal man in Terminator 2 reconstituting itself.


Faking it

I have finally figured out what benefit exTwitter gets from its new owner’s decision to strip out the headlines from linked third-party news articles: you cannot easily tell the difference between legitimate links and ads. Both have big unidentified pictures, and if you forget to look for the little “Ad” label at the top right or check the poster’s identity to make sure it’s someone you actually follow, it’s easy to inadvertently lessen the financial losses accruing to said owner by – oh, the shame and horror – clicking on that ad. This is especially true because the site has taken to injecting these ads with increasing frequency into the carefully curated feed that until recently didn’t have this confusion. Reader, beware.

***

In all the discussion of deepfakes and AI-generated bullshit texts, did anyone bring up the possibility of datafakes? Nature highlights a study in which researchers created a fake database to provide evidence for concluding that one of two surgical procedures is better than the other. This is nasty stuff. The rising numbers of retracted papers already showed serious problems with peer review (which are not new, but are getting worse). To name just a couple: reviewers are unpaid and often overworked, and what they look for are scientific advances, not fraud.

In the UK, Ben Goldacre has spearheaded initiatives to improve the quality of published research. A crucial part of this is ensuring people state in advance the hypothesis they’re testing, and publish the results of all trials, not just the ones that produce the researcher’s (or funder’s) preferred result.

Science is the best process we have for establishing an edifice of reliable knowledge. We desperately need it to work. As the dust settles on the week of madness at OpenAI, whose board was supposed to care more about safety than about its own existence, we need to get over being distracted by the dramas and the fears of far-off fantasy technology and focus on the fact that the people running the biggest computing projects by and large are not paying attention to the real and imminent problems their technology is bringing.

***

Callum Cant reports at the Guardian that Deliveroo has won a UK Supreme Court ruling that its drivers are self-employed and accordingly do not have the right to bargain collectively for higher pay or better working conditions. Deliveroo apparently won this ruling because of a technicality – its insertion of a clause that allows drivers to send a substitute in their place, an option that is rarely used.

Cant notes the health and safety risks to the drivers themselves, but what about the rest of us? A driver in his tenth hour of a seven-day-a-week grind doesn’t just put themselves at risk; they’re a risk to everyone they encounter on the roads. The way these things are going, if safety becomes a problem, instead of raising wages to allow drivers a more reasonable schedule and some rest, the likelihood is that these companies will turn to surveillance technology, as Amazon has.

In the US, this is what’s happened to truck drivers, and, as Karen Levy documents in her book, Data Driven, it’s counterproductive. Installing electronic logging devices into truckers’ cabs has led older, more experienced, and, above all, *safer* drivers to leave the profession, to be replaced with younger, less-experienced, and cheaper drivers with a higher appetite for risk. As Levy writes, improved safety won’t come from surveilling exhausted drivers; what’s needed is structural change to create better working conditions.

***

The UK’s covid inquiry has been livestreaming its hearings on government decision making for the last few weeks, and pretty horrifying they are, too. That’s true even if you don’t include former deputy chief medical officer Jonathan Van-Tam’s account of the threats of violence aimed at him and his family. They needed police protection for nine months and were advised to move out of their house – but didn’t want to leave their cat. Will anyone take the job of protecting public health if this is the price?

Chris Whitty, the UK’s Chief Medical Officer, said the UK was “woefully underprepared”, locked down too late, and made decisions too slowly. He was one of the polite ones.

Former special adviser Dominic Cummings (from whom no one expected politeness) said everyone called Boris Johnson a trolley, because, like a shopping trolley with the inevitable wheel pointing in the wrong direction, he was so inconsistent.

The government chief scientific adviser, Patrick Vallance, had kept a contemporaneous diary, which provided his unvarnished thoughts at the time, some of which were read out. Among them: Boris Johnson was obsessed with older people accepting their fate, unable to grasp the concept of doubling times or comprehend the graphs on the dashboard, and intermittently uncertain if “the whole thing” was a mirage.

Our leader envy in April 2020 seems correctly placed. To be fair, though: Whitty and Vallance, citing their interactions with their counterparts in other countries, both said that most countries had similar problems. And for the same reason: the leaders of democratic countries are generally not well-versed in science. As the Economist’s health policy editor, Natasha Loder, warned in early 2022: elect better leaders. Ask, she said, before you vote, “Are these serious people?” Words to keep in mind as we head toward the elections of 2024.

Illustrations: The medium Mina Crandon and the “materialized spirit hand” she produced during seances.


The one hundred

Among the highlights of this week’s hearings of the Covid Inquiry were comments made by Helen MacNamara, who was the deputy cabinet secretary during the relevant time, about the effect of the lack of diversity. The absence of women in the room, she said, led to a “lack of thought” about a range of issues, including dealing with childcare during lockdowns, the difficulties encountered by female medical staff in trying to find personal protective equipment that fit, and the danger lockdowns would inevitably pose when victims of domestic abuse were confined with their abusers. Also missing was anyone who could have identified issues for ethnic minorities, disabled people, and other communities. Even the necessity of continuing free school lunches was lost on the wealthy white men in charge, none of whom were ever poor enough to need them. Instead, MacNamara said, they spent “a disproportionate amount” of their time fretting about football, hunting, fishing, and shooting.

MacNamara’s revelations explain a lot. Of course a group with so little imagination about or insight into other people’s lives would leave huge, gaping holes. Arrogance would ensure they never saw those as failures.

I was listening to this while reading posts on Mastodon complaining that this week’s much-vaunted AI Safety Summit was filled with government representatives and techbros, but weak on human rights and civil society. I don’t see any privacy organizations on the guest list, for example, and only the largest technology platforms need apply. Granted, the limit of 100 meant there wasn’t room for everyone. But these are all choices seemingly designed to make the summit look as important as possible.

From this distance, it’s hard to get excited about a bunch of bigwigs getting together to alarm us about a technology that, as even the UK government itself admits, may never – most likely will never – happen. In the event, they focused on a glut of disinformation and disruption to democratic polls. Lots of people are thinking about the first of these, and the second needs local solutions. Many technology and policy experts are advocating openness and transparency in AI regulation.

Me, I’d rather they’d given some thought to how to make “AI” (any definition) sustainable, given the massive resources today’s math-and-statistics systems demand. And I would strongly favor a joint resolution to stop using these systems for surveillance and to eliminate predictive systems that pretend to be able to spot potential criminals in advance or decide who is deserving of benefits, admission into retail stores, or parole. But this summit wasn’t about *us*.

***

A Mastodon post reminded me that November 2 – yesterday – was the 35th anniversary of the Morris Worm and therefore the 35th anniversary of the day I first heard of the Internet. Anniversaries don’t matter much, but any history of the Internet would include this now largely forgotten (or never-known) event.

Morris’s goals were pretty anodyne by today’s standards. He wanted, per Wikipedia, to highlight flaws in some computer systems. Instead, the worm replicated out of control and paralyzed parts of this obscure network that linked university and corporate research institutions, whose people suddenly couldn’t work. It put the Internet on the front pages for the first time.

Morris became the first person to be convicted of a felony under the brand-new Computer Fraud and Abuse Act (1986); that didn’t stop him from becoming a tenured professor at MIT in 2006. The heroes of the day were the unsung people who worked hard to disable the worm and restore full functionality. But it’s the worm we remember.

It was another three years before I got online myself, in 1991, and two or three more years after that before I got direct Internet access via the now-defunct Demon Internet. Everyone has a different idea of when the Internet began, usually based on when they got online. For many of us, it was November 2, 1988, the day when the world learned how important this technology they had never heard of had already become.

***

This week also saw the first anniversary of Twitter’s takeover. Despite a variety of technical glitches and numerous user-hostile decisions, the site has not collapsed. Many people I used to follow are either gone or posting very little. Even though I’m not experiencing the increased abuse and disinformation I see widely reported, there’s diminishing reward for checking in.

There’s still little consensus on a replacement. About half of my Twitter list have settled in on Mastodon. Another third or so are populating Bluesky. I hear some are finding Threads useful, but until it has a desktop client I’m out (and maybe even then, given its ownership). A key issue, however, is that uncertainty about which site will survive (or “win”) leads many people to post the same thing on multiple services. But you don’t dare skip one just in case.

For both philosophical and practical reasons, I’m hoping more people will get comfortable on Mastodon. Any corporate-owned system will merely replicate the situation in which we become hostages to business interests who have as little interest in our welfare as Boris Johnson did according to MacNamara and other witnesses. Mastodon is not a safe harbor from horrible human behavior, but with no ads and no algorithm determining what you see, at least the system isn’t designed to profit from it.

Illustrations: Former deputy cabinet secretary Helen MacNamara testifying at the Covid Inquiry.


Review: The Other Pandemic

The Other Pandemic: How QAnon Contaminated the World
By James Ball
Bloomsbury Press
ISBN: 978-1-526-64255-4

One of the weirdest aspects of the January 6 insurrection at the US Capitol building was the mismatched variety of flags and causes represented: USA, Confederacy, Third Reich, Thin Blue Line, American Revolution, pirate, Trump. And in the midst: QAnon.

As journalist James Ball tells it in his new book, The Other Pandemic, QAnon is the perfect example of a modern, decentralized movement: it has no leader and no fixed ideology. Instead, it morphs to embrace the memes of the moment, drawing its force by renewing age-old conspiracy theories that never die. QAnon’s presence among all those flags – and popping up in demonstrations in many other countries – is a perfect example.

Charles Arthur’s 2021 book Social Warming used global warming as a metaphor for social media’s spread of anger and division. Ball prefers the metaphor of public health. The difference is subtle, but important: Arthur argued that social media became destabilizing because no one chose to stop it, whereas Ball’s characterization implies less agency. People have less choice about being infected with pathogens, no matter how careful they are.

Ball divides the book into four main sections reflecting the stages of a pandemic: emergence, infection, transmission, convalescence. He covers some of the same ground as Naomi Klein in her recent book Doppelganger. But Ball spent his adolescence goofing around on 4chan, where QAnon was later hatched, while Klein lets her personal story lead her into Internet fora. In other words, Klein writes about Internet culture from the outside in, while Ball writes from the inside out. Talia Lavin’s Culture Warlords, on the other hand, focused exclusively on investigating online hate.

“Goofing around” and “4chan” may sound incompatible, but as Ball tells it, in the early days after its founding in 2003, 4chan was anarchic and fun, with roots in gaming culture. Every online service I’ve known back to 1990 has had a corner like this, where ordinary rules of polite society were suspended and transgression was largely ironic, even if also obnoxious. The difference: 4chan’s culture spread well beyond its borders, and its dark side fuelled a global threat. The original QAnon posting arrived on 4chan in 2017, followed quickly by others. Detailed, seemingly knowledgeable, and full of questions for readers to “research”, they quickly attracted backers who propagated them onto much bigger sites like YouTube, which turned a niche audience of thousands into a mass audience of millions.

A key element of Ball’s metaphor is Richard Dawkins’ 1976 concept of memes: scraps of ideas that use us to replicate themselves, as biological viruses do. To extend the analogy, Ball argues that we shouldn’t blame – or dismiss as stupid – the people who get “infected” by QAnon.

This book represents an evolution for Ball. In 2017’s Post-Truth, he advocated fact-checking and teaching media literacy as key elements of the solution to the spread of misinformation. Here, he acknowledges that this approach is only a small part of containing a social movement that feeds on emotional engagement and doesn’t care about facts. In his conclusion, where he advocates prevention rather than cure and the adoption of multi-pronged strategies analogous to those we use to fight diseases like malaria, however, there are echoes of that trust in authority. I continue to believe the essential approach will be nearer to that of modern cybersecurity, similarly decentralized and mixing economics, the social sciences, psychology, and technology, among others. But this challenge is so big that no one metaphor is enough to contain it.

The two of us

The-other-Wendy-Grossman-who-is-a-journalist came to my attention in the 1990s by writing a story about something Internettish while a student at Duke University. Eventually, I got email for her (which I duly forwarded) and, once, a transatlantic phone call from a very excited but misinformed PR person. She got married, changed her name, and faded out of my view.

By contrast, Naomi Klein’s problem has only inflated over time. The “doppelganger” in her new book, Doppelganger: A Trip into the Mirror World, is “Other Naomi” – that is, the American author Naomi Wolf, whose career launched in 1990 with The Beauty Myth. “Other Naomi” has spiraled into conspiracy theories, anti-government paranoia, and wild unscientific theories. Klein is Canadian; her books include No Logo (1999) and The Shock Doctrine (2007). There is, as Klein acknowledges, a lot of *seeming* overlap, in that a keyword search might surface both.

I had them confused myself until Wolf’s 2019 appearance on BBC radio, when a historian dished out a live-on-air teardown of the basis of her latest book. This author’s nightmare is the inciting incident Klein believes turned Wolf from liberal feminist author into a right-wing media star. The publisher withdrew and pulped the book, and Wolf herself was globally mocked. What does a high-profile liberal who’s lost her platform do now?

When the covid pandemic came, Wolf embraced every available mad theory and her liberal past made her a darling of the extremist right wing media. Increasingly obsessed with following Wolf’s exploits, which often popped up in her online mentions, Klein discovered that social media algorithms were exacerbating the confusion. She began to silence herself, fearing that any response she made would increase the algorithms’ tendency to conflate Naomis. She also abandoned an article deploring Bill Gates’s stance protecting corporate patents instead of spreading vaccines as widely as possible (The Gates Foundation later changed its position.)

Klein tells this story honestly, admitting to becoming addictively obsessed, promising to stop, then “relapsing” the first time she was alone in her car.

The appearance of overlap through keyword similarities is not limited to the two Naomis, as Klein finds on further investigation. Right-wing media stars like Steve Bannon, who ran Breitbart and served as Donald Trump’s chief strategist during his first months in the White House, wrote this playbook: seize on under-acknowledged legitimate grievances, turn them into right wing talking points, and recruit the previously-ignored victims as allies and supporters. The lab leak hypothesis, the advice being given by scientific authorities, why shopping malls were open when schools were closed, the profiteering (she correctly calls out the UK), the behavior of corporate pharma – all of these were and are valid topics for investigation, discussion, and debate. Their twisted adoption as right-wing causes made many on the side of public health harden their stance to avoid sounding like “one of them”. The result: words lost their meaning and their power.

These are problems no amount of content moderation or online safety can solve. And even if it could, is it right to ask underpaid workers in what Klein terms the “Shadowlands” to clean up our society’s nasty side so we don’t have to see it?

Klein begins with a single doppelganger, then expands into psychology, movies, TV, and other fiction, and ends by navigating expanding circles; the extreme right-wing media’s “Mirror World” is our society’s Mr Hyde. As she warns, those who live in what a friend termed “my blue bubble” may never hear about the media and commentators she investigates. After Wolf’s disgrace on the BBC, she “disappeared”, in reality going on to develop a much bigger platform in the Mirror World. But “they” know and watch us, and use our blind spots to expand their reach and recruit new and unexpected sectors of the population. Klein writes that she encounters many people who’ve “lost” a family member to the Mirror World.

This was the ground explored in 2015 by the filmmaker Jen Senko, who found the same thing when researching her documentary The Brainwashing of My Dad. Senko’s exploration leads from the 1960s John Birch Society through to Rush Limbaugh and Roger Ailes’s intentional formation of Fox News. Klein here is telling the next stage of that same story. Mirror World is not an accident of technology; it was a plan, then technology came along and helped build it further in new directions.

As Klein searches for an explanation for what she calls “diagnonalism” – the phenomenon that sees a former Obama voter now vote for Trump, or a former liberal feminist shrug at the Dobbs decision – she finds it possible to admire the Mirror World’s inhabitants for one characteristic: “they still believe in the idea of changing reality”.

This is the heart of much of the alienation I see in some friends: those who want structural change say today’s centrist left wing favors the status quo, while those who are more profoundly disaffected dismiss the Bidens and Clintons as almost as corrupt as Trump. The pandemic increased their discontent; it did not take long for early optimistic hopes of “build back better” to fade into “I want my normal”.

Klein ends with hope. As both the US and UK wind toward the next presidential/general election, it’s in scarce supply.

Illustrations: Charlie Chaplin as one of his doppelgangers in The Great Dictator (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Surveillance machines on wheels

After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid, the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: Sorry, Sorry, Sorry

Sorry, Sorry, Sorry: The Case for Good Apologies
By Marjorie Ingall and Susan McCarthy
Gallery Books
ISBN: 978-1-9821-6349-5

Years ago, a friend of mine deplored apologies: “People just apologize because they want you to like them,” he observed.

That’s certainly true at least some of the time, but as Marjorie Ingall and Susan McCarthy argue at length in their book Sorry, Sorry, Sorry, well-constructed and presented apologies can make the world a better place. For the recipient, they can remove the sting of old wrongs; for the giver, they can ease the burden of old shames.

What you shouldn’t do, when apologizing, is what self-help groups sometimes describe as “plan the outcome”. That is, you present your apology and you take your chances. Follow Ingall and McCarthy’s six steps to construct your apology, then hope for, but do not demand, forgiveness, and don’t mess the whole thing up by concluding with, “So, we’re good?”

Their six steps to a good apology:
1. Say you’re sorry.
2. For what you did.
3. Show you understand why it was bad.
4. Only explain if you need to; don’t make excuses.
5. Say why it won’t happen again.
6. Offer to make up for it.
Six and a half. Listen.

It’s certainly true that many apologies don’t have the desired effect. Often, it’s because the apology itself is terrible. Through their Sorry Watch blog, Ingall and McCarthy have been collecting and analyzing bad public apologies for years (obDisclosure: I send in tips on apologies in tennis and British politics). Many of these appear in the book, organized into chapters on apologies from doctors and medical establishments, large corporations, and governments and nation-states. Alongside these are chapters on the psychology of apologies, teaching children to apologize, and the practical realities relating to gender, race, and other disparities. Women, for example, are more likely to apologize well, but take a greater risk when they do – and are less likely to be forgiven.

Some templates for *bad* apologies when you’ve done something hurtful (do not try this at home!): “I’m sorry if…”, “I’m sorry that you felt…”, “I regret…”, and, of course, the often-used classic, “This is not who we are.”

These latter are, in Ingall and McCarthy’s parlance, “apology-shaped objects”, but not actually apologies. They explain this in detail with plenty of wit – and no fewer than five Bad Apology bingo cards.

Even for readers of the blog, there’s new information. I was particularly interested to learn that malpractice lawyers are likely wrong in telling doctors not to apologize because admitting fault invites a lawsuit. A 2006 Harvard hospital system report found little evidence for this contention – as long as the apologies are good ones. It’s the failure to communicate and the refusal to take responsibility that are much more anger-provoking. In other words, the problem there, as everywhere else, is *bad* apologies.

A lot of this ought to be common sense. But as Ingalls and McCarthy make plain, it may be sense but it’s not as common as any of us would like.

Power cuts

In the latest example of corporate destruction, the Guardian reports on the disturbing trend in which streaming services like Disney and Warner Bros Discovery are deleting finished, even popular, shows for financial reasons. It’s like Douglas Adams’ rock star Hotblack Desiato spending a year dead for tax reasons.

Given that consumers’ budgets are stretched so thin that many are reevaluating the streaming services they’re paying for, you would think this would be the worst possible time to delete popular entertainments. Instead, the industry seems to be possessed by a death wish in which it’s making its offerings *less* attractive. Even worse, the promise they appeared to offer to showrunners was creative freedom and broad and permanent access to their work. The news that Disney+ is even canceling finished shows (Nautilus) shortly before their scheduled release in order to pay less *tax* should send a chill down every creator’s spine. No one wants to spend years of their life – for almost *any* amount of money – making things that wind up in the corporate equivalent of the warehouse at the end of Raiders of the Lost Ark.

It’s time, as the Masked Scheduler suggested recently on Mastodon, for the emergence of modern equivalents of creator-founded studios United Artists and Desilu.

***

Many of us were skeptical about Meta’s Oversight Board; it was easy to predict that Facebook would use it to avoid dealing with the PR fallout from controversial cases, but never relinquish control. And so it is proving.

This week, Meta overruled the Board’s recommendation of a six-month suspension of the Facebook account belonging to former Cambodian prime minister Hun Sen. At issue was a video of one of Hun Sen’s speeches, which everyone agreed incited violence against his opposition. Meta has kept the video up on the grounds of “newsworthiness”; Meta also declined to follow the Board’s recommendation to clarify its rules for public figures in “contexts in which citizens are under continuing threat of retaliatory violence from their governments”.

In the Platformer newsletter Casey Newton argues that the Board’s deliberative process is too slow to matter – it took eight months to decide this case, too late to save the election at stake or deter the political violence that has followed. Newton also concludes from the list of decisions that the Board is only “nibbling round the edges” of Meta’s policies.

A company with shareholders, a business model, and a king is never going to let an independent group make decisions that will profoundly shape its future. From Kate Klonick’s examination, we know the Board members are serious people prepared to think deeply about content moderation and its discontents. But they were always in a losing position. Now, even they must know that.

***

It should go without saying that anything that requires an Internet connection should be designed for connection failures, especially when the connected devices are required to operate the physical world. The downside was made clear by a 2017 incident in which lost signal meant a Tesla-owning venture capitalist couldn’t restart his car. Or the one in 2021, when a bunch of Tesla owners found their phone app couldn’t unlock their car doors. Tesla’s solution both times was to tell car owners to make sure they always had their physical car keys. Which, fine, but then why have an app at all?

Last week, Bambu 3D printers began printing unexpectedly when they got disconnected from the cloud. The software managing the queue of printer jobs lost the ability to monitor them, causing some to be restarted multiple times. Given the heat and extruded material 3D printers generate, this is dangerous both to the printers themselves and to their surroundings.

At TechRadar, Bambu’s PR acknowledges this: “It is difficult to have a cloud service 100% reliable all the time, but we should at least have designed the system more carefully to avoid such embarrassing consequences.” As TechRadar notes, if only embarrassment were the worst risk.

So, new rule: before installation, test every new “smart” device by blocking its Internet connection to see how it behaves. Of course, companies should do this themselves, but as we’ve seen, you can’t rely on that either.

***

Finally, in “be careful what you legislate for”, Canada is discovering the downside of C-18, which became law in June and requires the biggest platforms to pay for the Canadian news content they host. Google and Meta warned all along that they would stop hosting Canadian news rather than pay for it. Experts like law professor Michael Geist predicted that the bill would merely serve to dramatically cut traffic to news sites.

On August 1, Meta began adding blocks for news links on Facebook and Instagram. A coalition of Canadian news outlets quickly asked the Competition Bureau to mount an inquiry into Meta’s actions. At TechDirt Mike Masnick notes the irony: first legacy media said Meta’s linking to news was anticompetitive; now they say not linking is anticompetitive.

However, there are worse consequences. Prime minister Justin Trudeau complains that Meta’s news block is endangering Canadians, who can’t access or share local up-to-date information about the ongoing wildfires.

In a sensible world, people wouldn’t rely on Facebook for their news, politicians would write legislation with greater understanding, and companies like Meta would wield their power responsibly. In *this* world, we have a perfect storm.

Illustrations: XKCD’s Dependency.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Review: Should You Believe Wikipedia?

Should You Believe Wikipedia? Online Communities and the Construction of Knowledge
By Amy S. Bruckman
Publisher: Cambridge
Print publication year: 2022
ISBN: 978-1-108780-704

Every Internet era has had its new-thing obsession. For a time in the mid-1990s, it was “community”. Every business, some industry thinkers insisted, would need to build a community of customers, suppliers, and partners. Many tried, and the next decade saw the proliferation of blogs, web boards, and, soon, multi-player online games. We learned that every such venture of any size attracts abuse that requires human moderators to solve. We learned that community does not scale. Then came Facebook and other modern social media, fueled by mobile phones, and the business model became data collection to support advertising.

Back at the beginning, Amy S. Bruckman, now a professor at Georgia Tech but then a student at MIT, set up the education-oriented MOOSE Crossing, in which children could collaborate on building objects as a way of learning to code. For 20 years, she has taught a course on designing communities. In Should You Believe Wikipedia?, Bruckman distills the lessons she’s learned over all that time, combining years of practical online experience with readable theoretical analysis based on sociology, psychology, and epistemology. Whether or not to trust Wikipedia is just one chapter in her study of online communities and the issues they pose.

Like pubs, cafes, and town squares, online communities are third places – that is, neutral ground where people can meet on equal terms. Clearly not neutral, by contrast, are many popular blogs, which tend to be personal or promotional, and the X formerly known as Twitter. Third places also need to be enclosed but inviting, visible from surrounding areas, and offering affordances for activity. In that sense, two of the most successful online communities are Wikipedia and OpenStreetMap, both of which pursue a common enterprise that contributors can feel is of global value. Facebook is home to probably hundreds of thousands of communities – families, activists, support groups, and so on – but itself is too big, too diffuse, and too lacking in shared purpose to be a community. Bruckman also cites as examples of productive communities open source software projects and citizen science.

Bruckman’s book has arrived at a moment that we may someday see as a watershed. Numerous factors – Elon Musk’s takeover and remaking of Twitter, debates about regulation and antitrust, increased privacy awareness – are making many people reevaluate what they want from online social spaces. It is a moment when new experiments might thrive.

Something like that is needed, Bruckman concludes: people are not being well served by the free market’s profit motives and current business models. She would like to see more of the Internet populated by non-profits, but elides the key hard question: what are the sustainable models for supporting such endeavors? Mozilla, one of the open source software-building communities she praises, is sustained by payments from Google, making it still vulnerable to the dictates of shareholders, albeit at one remove. It remains an open question if the Fediverse, currently chiefly represented by Mastodon, can grow and prosper in the long term under its present structure of volunteer administrators running their own servers and relying on users’ donations to pay expenses. Other established commercial community hosts, such as Reddit, where Bruckman is a moderator, have long failed to find financial sustainability.

Bruckman never quite answers the question in the title. It reflects the skepticism at Wikipedia’s founding that an encyclopedia edited by anyone who wanted to participate could be any good. As she explains, however, the fact that every page has its Talk page that details disputes and exposes prior versions provides transparency the search engines don’t offer. It may not be clear if we *should* believe Wikipedia, whose quality varies depending on the subject, but she does make clear why we *can* when we do.