Review: A History of Fake Things on the Internet

A History of Fake Things on the Internet
By Walter J. Scheirer
Stanford University Press
ISBN 2023017876

One of Agatha Christie’s richest sources of plots was the uncertainty of identity in England’s post-war social disruption. Before then, she tells us, anyone arriving to take up residence in a village brought a letter of introduction; afterwards, old-time residents had to take newcomers at their own valuation. Had she lived into the 21st century, the arriving Internet would have given her whole new levels of uncertainty to play with.

In his recent book A History of Fake Things on the Internet, University of Notre Dame professor Walter J. Scheirer describes creating and detecting online fakes as an ongoing arms race. Where many people project doomishly that we will soon lose the ability to distinguish fakery from reality, Scheirer is more optimistic. “We’ve had functional policies in the past; there is no good reason we can’t have them again,” he concludes, adding that to make this happen we need a better understanding of the media that support the fakes.

I have a lot of sympathy with this view; as I wrote recently, things that fool people when a medium is new are instantly recognizable as fake once audiences become experienced with it. We adapt. No one now would be fooled by the images that looked real in the early days of photography. Our perceptions become more sophisticated, and we learn to examine context. Early fakes often work simply because we don’t know yet that such fakes are possible. Once we do know, we exercise much greater caution before believing. Teens who’ve grown up applying filters to the photos and videos they upload to Instagram and TikTok see images very differently than those of us who grew up with TV and film.

Scheirer begins his story with the hacker counterculture that saw computers as a source of subversive opportunities. His own research into media forensics began with Photoshop. At the time, many, especially in the military, worried that nation-states would fake content in order to deceive and manipulate. What they found, in much greater volume, were memes and what Scheirer calls “participatory fakery” – that is, the cultural outpouring of fakes for entertainment and self-expression, most of it harmless. Further chapters consider cheat codes in games, the slow conversion of hackers into security practitioners, adversarial algorithms and media forensics, shock-content sites, and generative AI.

Through it all, Scheirer remains optimistic that the world we’re moving into “looks pretty good”. Yes, we are discovering hundreds of scientific papers with faked data, faked results, or faked images, but we also have new analysis tools to detect them and Retraction Watch to catalogue them. The same new tools that empower malicious people enable many more positive uses for storytelling, collaboration, and communication. Perhaps forgetting that the computer industry relentlessly ignores its own history, he writes that we should learn from the past and react to the present.

The mention of scientific papers raises an issue Scheirer seems not to worry about: waste. Every retracted paper represents lost resources – public funding, scientists’ time and effort, and the same multiplied into the future for anyone who attempts to build on that paper. Figuring out how to automate reliable detection of chatbot-generated text does nothing to lessen the vast energy, water, and human resources that go into building and maintaining all those data centers and training models (see also filtering spam). Like Scheirer, I’m largely optimistic about our ability to adapt to a more slippery virtual reality. But the amount of wasted resources is depressing and, given climate change, dangerous.

Deja news

At the first event organized by the University of West London group Women Into Cybersecurity, a questioner asked how the debates around the Internet have changed since I wrote the original 1997 book net.wars.

Not much, I said. Some chapters have dated, but the main topics are constants: censorship, freedom of speech, child safety, copyright, access to information, digital divide, privacy, hacking, cybersecurity, and always, always, *always* access to encryption. Around 2010, there was a major change when the technology platforms became big enough to protect their users and business models by opposing government intrusion. That year Google launched the first version of its annual transparency report, for example. More recently, there’s been another shift: these companies have engorged to the point where they need not care much about their users or fear regulatory fines – the stage Ed Zitron calls the rot economy and Cory Doctorow dubs enshittification.

This is the landscape against which we’re gearing up for (yet) another round of recursion. April 25 saw the passage of amendments to the UK’s Investigatory Powers Act (2016). These are particularly charmless, as they expand the circumstances under which law enforcement can demand access to Internet Connection Records, allow the government to require “exceptional lawful access” (read: backdoored encryption) and require technology companies to get permission before issuing security updates. As Mark Nottingham blogs, no one should have this much power. In any event, the amendments reanimate bulk data surveillance and backdoored encryption.

Also winding through Parliament is the Data Protection and Digital Information bill. The IPA amendments threaten national security by demanding the power to weaken protective measures; the data bill threatens to undermine the adequacy decision under which the UK’s data protection law is deemed to meet the requirements of the EU’s General Data Protection Regulation. Experts have already warned that adequacy is at risk. If this government proceeds, as it gives every indication of doing, the next, presumably Labour, government may find itself awash in an economic catastrophe as British businesses become persona-non-data to their European counterparts.

The Open Rights Group warns that the data bill makes it easier for government, private companies, and political organizations to exploit our personal data while weakening subject access rights, accountability, and other safeguards. ORG is particularly concerned about the impact on elections, as the bill expands the range of actors who are allowed to process personal data revealing political opinions on a new “democratic engagement activities” basis.

If that weren’t enough, another amendment gives the Department of Work and Pensions the power to monitor all bank accounts that receive payments, including the state pension – to reduce overpayments and other types of fraud, of course – and any bank account connected to those accounts, such as those of landlords, carers, parents, and partners. At Computer Weekly, Bill Goodwin suggests that the upshot could be to deter landlords from renting to anyone receiving state benefits or entitlements. The idea is that banks will use criteria we can’t access to flag up accounts for the DWP to inspect more closely, and over the mass of 20 million accounts there will be plenty of mistakes to go around. Safe prediction: there will be horror stories of people denied benefits without warning.

And in the EU… TechCrunch reports that the European Commission (always more surveillance-happy and less human rights-friendly than the European Parliament) is still pursuing its proposal to require messaging platforms to scan private communications for child sexual abuse material. Let’s do the math of truly large numbers: billions of messages, even a teeny-tiny percentage of inaccuracy, literally millions of false positives! On Thursday, a group of scientists and researchers sent an open letter pointing out exactly this. Automated detection technologies perform poorly, innocent images may occur in clusters (as when a parent sends photos to a doctor), and such a scheme requires weakening encryption; in any case, it would be better to focus on eliminating child abuse (taking CSAM along with it).
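
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch; the message volume and the error rate are illustrative assumptions, not figures from the Commission’s proposal or from the open letter:

```python
# Back-of-the-envelope false-positive arithmetic for mass message scanning.
# Both numbers below are assumptions for illustration, not official figures.
daily_messages = 10_000_000_000   # assume ~10 billion private messages scanned per day
false_positive_rate = 0.001       # assume a "teeny-tiny" 0.1% error rate

false_positives_per_day = daily_messages * false_positive_rate
print(f"Innocent messages flagged per day:  {false_positives_per_day:,.0f}")
print(f"Innocent messages flagged per year: {false_positives_per_day * 365:,.0f}")
```

Under those (generous) assumptions, that is ten million innocent messages a day queued for someone to review.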

Finally, age verification, which has been pending in the UK since at least 2016, is becoming a worldwide obsession. At least eight US states and the EU have laws mandating age checks, and the Age Verification Providers Association is pushing to make the Internet “age-aware persistently”. Last month, the BSI convened a global summit to kick off the work of developing a worldwide standard. These moves are the latest push against online privacy; age checks will be applied to *everyone*, and while they could be designed to respect privacy and anonymity, the most likely outcome is that they won’t be. In 2022, the French data protection regulator, CNIL, found that current age verification methods are both intrusive and easily circumvented. In the US, Casey Newton is watching a Texas case about access to online pornography and age verification that threatens to challenge First Amendment precedent in the Supreme Court.

Because the debates are so familiar – the arguments rarely change – it’s easy to overlook how profoundly all this could change the Internet. An age-aware Internet where all web use is identified, encrypted messaging services have shut down rather than compromise their users, and every action is suspicious until judged harmless…those are the stakes.

Illustrations: Angel sensibly smashes the ring that makes vampires impervious (in Angel, “In the Dark” (S01e03)).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The toast bubble

From The Big Bang Theory (“The Russian Rocket Reaction”, S5e05):

Howard: Someone has to go up with the telescope as a payload specialist, and guess who that someone is!
Sheldon: Muhammed Li.
Howard: Who’s Muhammed Li?
Sheldon: Muhammed is the most common first name in the world, Li the most common surname, and as I didn’t know the answer I thought that gave me a mathematical edge.

Experts tell me that exchange doesn’t perfectly explain how generative AI works; it’s too simplistic. Generative AI – or a Sheldon made more nuanced by his writers – takes into account contextual information to calculate the probable next word. So it wouldn’t pick from all the first names and surnames in the world. It might, however, pick from the names of all the payload specialists or some other group it correlated, or confect one.
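
A toy sketch of that difference, with an invented context and invented probabilities (a caricature for illustration only, not how any real model is implemented):

```python
import random

# Sheldon's method: ignore context, combine the globally most common first name and surname.
sheldon_guess = "Muhammed" + " " + "Li"

# Caricature of next-word prediction: the candidate continuations and their
# probabilities depend on the words that came before. All values are invented.
conditional_probs = {
    "the payload specialist will be": {"Howard": 0.7, "an astronaut": 0.2, "Muhammed Li": 0.1},
    "the most common name is":        {"Muhammed Li": 0.8, "Howard": 0.1, "an astronaut": 0.1},
}

def next_words(context: str) -> str:
    candidates = conditional_probs[context]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

print(sheldon_guess)                                  # "Muhammed Li", regardless of context
print(next_words("the payload specialist will be"))   # usually "Howard"
```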

More than a year on, I still can’t find a use for something as unreliable and inscrutable as generative “AI”. At Exponential View, Azeem Azhar has written about the “answer engine” Perplexity.ai. While it’s helpful that Perplexity provides references for its answers, it was producing misinformation by the third question I asked it, and offered no improvement when challenged. Wikipedia spent many years being accused of unreliability, too, but at least there you can read the talk page and understand how the editors arrived at the text they present.

On The Daily Show this week, Jon Stewart ranted about AI and interviewed FTC chair Lina Khan. Well-chosen video clips showed AI company heads’ true colors, telling the public AI is an assistant for humans while telling money people and each other that AI will enable greater productivity with fewer workers and help eliminate “the people tax”.

More interesting, however, was Khan’s note that the FTC is investigating the investments and partnerships in AI to understand if they’re giving current technology giants undue influence in the marketplace. If, in her example, all the competitors in a market outsource their pricing decisions to the same algorithm, they may be guilty of price fixing even if they’re not actively colluding. And these markets are consolidating at an ever-earlier stage. Instagram and WhatsApp had millions of users by the time Facebook thought it prudent to buy them rather than let them become dangerous competitors. AI is pre-consolidating: the usual suspects have been buying up AI startups and models at pace.

“More profound than fire or electricity,” Google CEO Sundar Pichai tells a camera at one point, speaking about AI. The last time I heard this level of hyperbole it was about the Internet in the 1990s, shortly before the bust. A friend’s answer to this sort of thing has never varied: “I’d rather have indoor plumbing.”

***

Last week the Federal District Court in Manhattan sentenced former FTX CEO Sam Bankman-Fried to 25 years in prison for stealing $8 billion. In the end, you didn’t have to understand anything complicated about cryptocurrencies; it was just good old embezzlement.

And then the price of bitcoin went *up*. At the Guardian, Molly White explains that this is because cryptoevangelists are pushing the idea that the sector can reach its full potential, now that Bankman-Fried and other bad apples have been purged. But, as she says, nothing has really changed. No new use case has come along to make cryptocurrencies more useful, more valuable, or more trustworthy.

Both cryptocurrencies and generative AI are bubbles. The difference is that the AI bubble will likely leave behind it some technologies and knowledge that are genuinely useful; it will be like the Internet, which boomed and busted before settling in to change the world. Cryptocurrencies are more like the Dutch tulips. Unfortunately, in the meantime both these bubbles are consuming energy at an insane rate. How many wildfires is bitcoin worth?

***

I’ve seen a report suggesting that the last known professional words of the late Ross Anderson may have been, “Do they take us for fools?”

He was referring to the plans, debated in the House of Commons on March 25, to amend the Investigatory Powers Act to allow the government to pre-approve (or disapprove) new security features technology firms want to introduce. The government is of course saying it’s all perfectly innocent, intended to keep the country safe. But recent clashes in the decades-old conflict over strong encryption have seen the technology companies roll out features like end-to-end encryption (Meta) and decide not to implement others, like client-side scanning (Apple). The latest in a long line of UK governments that want access to encrypted text was hardly going to take that quietly. So here we are, debating this yet again. Yet the laws of mathematics still haven’t changed: there is no such thing as a security hole that only “good guys” can use.

***

Returning to AI, it appears that costs may lead Google to charge for access to its AI-enhanced search, as Alex Hern reports at the Guardian. Hern thinks this is good news for its AI-focused startup competitors, which already charge for top-tier tools and which are at risk of being undercut by Google. I think it’s good for users because it makes it easy to avoid the AI “enhancement”. Of course, DuckDuckGo already does this without all the tracking and monopoly mishegoss.

Illustrations: Jon Stewart uninspired by Mark Zuckerberg’s demonstration of AI making toast.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Anachronistics

“In my mind, computers and the Internet arrived at the same time,” my twenty-something companion said, delivering an entire mindset education in one sentence.

Just a minute or two earlier, she had asked in some surprise, “Did bulletin board systems predate the Internet?” Well, yes: a BBS was a software package running on a single back-room computer with a modem that users dialed into, whereas the Internet is this giant sprawling mess of millions of computers connected together…simple first, complex later.

Her confusion is understandable: from her perspective, computers and the Internet did arrive at the same time, since her first conscious encounters with them were simultaneous.

But still, speaking as someone who first programmed a (mainframe, with punch cards) computer in 1972 as a student, who got her first personal computer in 1982, and got online in 1991 by modem and 1999 by broadband and to whom the sequence of events is memorable: wow.

A 25-year-old today was born in 1999 (the year I got broadband). Her counterpart 15 years hence (born 2014, the year a smartphone replaced my personal digital assistant) may think smartphones and the Internet were simultaneous. And sometime around 2045 *her* counterpart born in 2020 (two years before ChatGPT was released) might think generative text and image systems were contemporaneous with the first computers.

I think this confusion must have something to do with the speed of change in a relatively narrow sector. I’m sure that even though they all entered my life simultaneously, by the time I was 25 I knew that radio preceded TV (because my parents grew up with radio), bicycles preceded cars, and that handwritten manuscripts predated printed books (because medieval manuscripts). But those transitions played out over multiple lifetimes, if not centuries, and all those memories were personal. Few of us reminisce about the mainframes of the 1960s because most of us didn’t have access to them.

And yet, understanding the timeline of those earlier technologies probably mattered less than understanding the sequence of events in information technology does now. Jumbling the arrival dates of the pieces of information technology means failing to understand dependencies. What currently passes for “AI” could not exist without being able to train models on giant piles of data that the Internet and the web made possible, and that took 20 years to build. Neural networks pioneer Geoff Hinton was working out key ideas behind today’s deep neural networks as long ago as the 1980s, but it took until the last decade for them to become workable. That’s because it took that long to build sufficiently powerful computers and to amass enough training data. How do you understand the ongoing battle between those who wish to protect privacy via data protection laws and those who want data to flow freely without hindrance if you do not understand what those masses of data are important for?

This isn’t the only such issue. A surprising number of people who should know better seem to believe that the solution to all our ills with social media is to destroy Section 230, apparently believing that if S230 allowed Big Tech to get big, it must be wrong. In reality, it also allows small sites to exist, and it is the legal framework that makes content moderation possible. Improve it by all means, but understand its true purpose first.

Reviewing movies and futurist projections such as Vannevar Bush’s 1945 essay As We May Think (PDF) and Alan Turing’s 1950 paper Computing Machinery and Intelligence (PDF) doesn’t really help because so many ideas arrive long before they’re feasible. The crew in the original 1966 Star Trek series (to say nothing of secret agent Maxwell Smart in 1965) were talking over wireless personal communicators. A decade earlier, Arthur C. Clarke (in The Nine Billion Names of God) and Isaac Asimov (in The Last Question) were putting computers – albeit analog ones – in their stories. Asimov in particular imagined a sequence that now looks prescient, beginning with something like a mainframe, moving on to microcomputers, and finishing up with a vast fully interconnected network that can only be held in hyperspace. (OK, it took trillions of years, starting in 2061, but still.) Those writings undoubtedly inspired the technologists of the last 50 years when they decided what to invent.

This all led us to fakes: as the technology to create fake videos, images, and texts continues to improve, she wondered if we will ever be able to keep up. Just about every journalism site is asking some version of that question; they’re all awash in stories about new levels of fakery. My 25-year-old discussant believes the fakes will always be improving faster than our methods of detection – an arms race like computer security, to which I’ve compared problems of misinformation / disinformation before.

I’m more optimistic. I bet even a few years from now today’s versions of generative “AI” will look as primitive to us as the special effects in a 1963 episode of Dr Who or the magic lantern used to create the Knock apparitions do to generations raised on movies, TV, and computer-generated imagery. Humans are adaptable; we will find ways to identify what is authentic that aren’t obvious in the shock of the new. We might even go back to arguing in pubs.

Illustrations: Secret agent Maxwell Smart (Don Adams) talking on his shoe phone (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The bridge

Seven months ago, Mastodon was fretting about Meta’s newly-launched Threads. The issue: Threads, which was built on top of Instagram’s user database, had said it would comply with the ActivityPub protocol, which allows Mastodon servers (“instances”) to federate with any other service that also uses that protocol. The threat that Threads would become interoperable and that millions of Threads users would swamp Mastodon, ignoring its existing social norms and culture, created an existential dilemma: to federate or not to federate?

Today, Threads’ integration is still just a plan.

Instead, it seems the first disruptive arrival will be Bluesky, created by a team backed by Twitter co-founder Jack Dorsey, with the connection facilitated by a third party. Bluesky wrote a new open source protocol, AT, so the proposal isn’t federation with Mastodon but a bridge, as Amanda Silberling reports at TechCrunch. According to Silberling’s numbers, year-old Bluesky stands at 4.8 million users to Mastodon’s 8.7 million. Anyone familiar with the history of AOL’s gateway to Usenet will tell you that’s big enough to disrupt existing social norms. The AOL exercise was known as Eternal September (because every September Usenet had to absorb a new generation of incoming university freshmen; once AOL arrived, it was September all the time).

There are two key differences, however. First, a third of those Bluesky users are new to that system, only joining last week, when the service opened fully to the public. They will bring challenges to the culture Bluesky has so far developed. Second, AOL’s gateway was unidirectional: AOLers could read and post to Usenet newsgroups, but Usenet posters could not read anything on AOL without paying for access. The Bluesky-Mastodon bridge is planned to be bidirectional, so anything posted publicly on one service would be accessible to both – or to outsiders using BridgyFed to connect via website feeds.

I haven’t spent a lot of time on Bluesky, but it’s clear it and Mastodon have different cultures. Friends who spend more time there say Bluesky has a “weirdness” they like and is less “scoldy” than Mastodon, where long-time users tended to school incoming ex-Twitter users in 2022 on their mistakes. That makes sense when you consider that Mastodon has had time since its 2016 founding to develop an existing culture that newcomers are joining, whereas Bluesky was a closed beta until last week, and its users to date have been the ones defining its culture for the future. The newcomers of the past week may have a very different experience.

Even if they don’t, there’s a fundamental economic difference that no technology can bridge: Mastodon is a non-profit cooperative endeavor, while Bluesky has venture capital funding, although the list of investors is not the usual suspects. Social media users have often been burned by corporate business decisions. It’s therefore easy to believe that the $8 million in seed funding will lead inevitably to user data exploitation, no matter what they say now about being determined to find a different and more sustainable business model based on selling ancillary services. Even if that strategy works, later owners or the dictates of shareholders may demand higher profits via a pivot to advertising, just as the Netflix and Amazon Prime streaming services are doing now.

Designing any software involves making rules for how it will operate and setting defaults. Here’s where the project hit trouble: should it be opt-out, so that users who don’t want their posts to be visible outside their home system have to specifically turn it off, or opt-in, so that users who want their posts published far and wide have to turn it on? BridgyFed’s creator, Ryan Barrett, chose opt-out. It was immediately divisive: privacy versus openness.

Silberling reports that Barrett has fashioned a solution, giving users warning pop-ups and a chance to decline if someone from another service tries to follow them, and is thinking more carefully about the risks to safety his bridge might bring.

That’s great, but the next guy may not be so willing to reconsider. As we’ve observed before, there is no way to restrict the use of open protocols without closing them and putting them under centralized control – which is the opposite of the federated, decentralized systems Mastodon and Bluesky were created to build.

In a federated system anything one person can open another can close. Individual admins will decide for their users how their instances will operate. Those who don’t like their choice will be told they can port their accounts to an instance whose policies they prefer. That’s true, but unsatisfying as an answer. As the “Fediverse” grows, it must accommodate millions of mainstream users for whom moving servers is too complicated.

The key point, however, is that the illusion of control Mastodon seemed to offer is being punctured. Usenet users could have warned them: from its creation in 1979, users believed their postings were readable for a few weeks before expiring and being expunged. Then, in 1995, Steve Madere created the Deja News archive from scattered collections. Overnight, those “ephemeral” postings became permanent and searchable – and even more so, after 2001, when Google bought the archive (see groups.google.com).

The upshot: privacy in public networks is only ever illusory. Assume you have no control over anything you post, no matter how cozy and personal the network seems. As we’ve said before, the privacy-in-public afforded by the physical world has no online counterpart.

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Relativity

“Status: closed,” the website read. It gave the time as 10:30 p.m.

Except it wasn’t. It was 5:30 p.m., and the store was very much open. The website, instead of consulting the time zone of the store – I mean, of the store’s particular branch whose hours and address I had looked up – was taking the time from my laptop. Which I hadn’t bothered to switch to the US east coast from Britain because I can subtract five hours in my head and why bother?
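
A minimal sketch of the bug and the fix, assuming a hypothetical New York branch open 9 a.m. to 9 p.m. (the store, hours, and time zone are invented for illustration):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

STORE_TZ = ZoneInfo("America/New_York")   # the branch's own time zone (assumed)
OPEN, CLOSE = time(9, 0), time(21, 0)     # hypothetical opening hours

def is_open_buggy() -> bool:
    # The bug: trust whatever clock the visitor's machine happens to be set to.
    now = datetime.now()                  # laptop still on UK time -> 10:30 p.m. -> "closed"
    return OPEN <= now.time() < CLOSE

def is_open_fixed() -> bool:
    # The fix: convert "now" to the store's time zone before comparing.
    now = datetime.now(STORE_TZ)          # 5:30 p.m. in New York -> "open"
    return OPEN <= now.time() < CLOSE
```

The store’s opening hours belong to the store’s clock, not to mine.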

Years ago, I remember writing a rant (which I now cannot find) about the “myness” of modern computers: My Computer, My Documents. My account. And so on, like a demented two-year-old who needed to learn to share. The notion that the time on my laptop determined whether or not the store was open had something of the same feel: the computational universe I inhabit is designed to revolve around me, and any dispute with reality is someone else’s problem.

Modern social media have hardened this approach. I say “modern” because back in the days of bulletin board systems, information services, and Usenet, postings were stamped with the date and time they were sent, time zone specified. Now, every post is labelled “2m” or “30s” or “1d”, so the actual date and time are hidden behind their relationship to “now”. It’s like those maps that rotate along with you so wherever you’re pointed physically is at the top. I guess it works for some people, but I find it disorienting; instead of the map orienting itself to me, I want to orient myself to the map. This seems to me my proper (infinitesimal) place in the universe.

All of this leads up to the revival of software agents. This was a Big Idea in the late 1990s/early 2000s, when it was commonplace to think that the era of having to make appointments and book train tickets was almost over. Instead, software agents configured with your preferences would do the negotiating for you. Discussions of this sort of thing died away as the technology never arrived. Generative AI has brought this idea back, at least to some extent, particularly in the financial area, where smart contracts can be used to set rules and then run automatically. I think only people who never have to worry about being able to afford anything will like this. But they may be the only ones the “market” cares about.

Somewhere during the time when software agents were originally mooted, I happened to sit at a conference dinner with the University of Maryland human-computer interaction expert Ben Shneiderman. There are, he said, two distinct schools of thought in software. In one, software is meant to adapt to the human using it – think of predictive text and smartphones as an example. In the other, software is consistent, and while using it may be repetitive, you always know that x command or action will produce y result. If I remember correctly, both Shneiderman and I were of the “want consistency” school.

Philosophically, though, these twin approaches have something in common with seeing the universe as if the sun went around the earth as against the earth going around the sun. The first of those makes our planet and, by extension, us far more important in the universe than we really are. The second cuts us down to size. No surprise, then, if the techbros who build these things, like the Catholic church in Galileo’s day, prefer the former.

***

Politico has started the year by warning that the UK is seeking to expand its surveillance regime even further by amending the 2016 Investigatory Powers Act. Unnoticed in the run-up to Christmas, the industry body techUK sent a letter to “express our concerns”. The short version: the bill expands the definition of “telecommunications operator” to include non-UK providers when operating outside the UK; allows the Home Office to require companies to seek permission before making changes to a privately and uniquely specified list of services; and the government wants to whip it through Parliament as fast as possible.

No, no, Politico reports the Home Office told the House of Lords, it supports innovation and isn’t threatening encryption. These are minor technical changes. But: “public safety”. With the ink barely dry on the Online Safety Act, here we go again.

***

As data breaches go, the one recently reported by 23andMe is alarming. By using passwords exposed in previous breaches (“credential stuffing”) to break into 14,000 accounts, attackers gained access to 6.9 million account profiles. The reason is reminiscent of the Cambridge Analytica scandal, where access to a few hundred thousand Facebook accounts was leveraged to obtain the data of millions: people turned on “DNA Relatives” to allow themselves to be found by those searching for genetic relatives. The company, which afterwards turned on a requirement for two-factor authentication, is fending off dozens of lawsuits by blaming the users for reusing passwords. According to Gizmodo, the legal messiness is considerable, as the company recently changed its terms and conditions to make arbitration more difficult and litigation almost impossible.

There’s nothing good to say about a data breach like this or a company that handles such sensitive data with such disdain. But it’s yet one more reason why putting yourself at the center of the universe is bad hoodoo.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The good fight

This week saw a small gathering to celebrate the 25th anniversary (more or less) of the Foundation for Information Policy Research, a think tank led by Cambridge and Edinburgh University professor Ross Anderson. FIPR’s main purpose is to produce tools and information that campaigners for digital rights can use. Obdisclosure: I am a member of its advisory council.

What, Anderson asked those assembled, should FIPR be thinking about for the next five years?

When my turn came, I said something about the burnout that comes to many campaigners after years of fighting the same fights. Digital rights organizations – Open Rights Group, EFF, Privacy International, to name three – find themselves trying to explain the same realities of math and technology decade after decade. Small wonder so many burn out eventually. The technology around the debates about copyright, encryption, and data protection has changed over the years, but in general the fundamental issues have not.

In part, this is because what people want from technology doesn’t change much. A tangential example of this presented itself this week, when I read the following in the New York Times, written by Peter C Baker about the “Beatles'” new mash-up recording:

“So while the current legacy-I.P. production boom is focused on fictional characters, there’s no reason to think it won’t, in the future, take the form of beloved real-life entertainers being endlessly re-presented to us with help from new tools. There has always been money in taking known cash cows — the Beatles prominent among them — and sprucing them up for new media or new sensibilities: new mixes, remasters, deluxe editions. But the story embedded in “Now and Then” isn’t “here’s a new way of hearing an existing Beatles recording” or “here’s something the Beatles made together that we’ve never heard before.” It is Lennon’s ideas from 45 years ago and Harrison’s from 30 and McCartney and Starr’s from the present, all welded together into an officially certified New Track from the Fab Four.”

I vividly remembered this particular vision of the future because just a few days earlier I’d had occasion to look it up – a March 1992 interview for Personal Computer World with the ILM animator Steve Williams, who the year before had led the team that produced the liquid metal man for the movie Terminator 2. Williams imagined CGI would become pervasive (as it has):

“…computer animation blends invisibly with live action to create an effect that has no counterpart in the real world. Williams sees a future in which directors can mix and match actors’ body parts at will. We could, he predicts, see footage of dead presidents giving speeches, films starring dead or retired actors, even wholly digital actors. The arguments recently seen over musicians who lip-synch to recordings during supposedly ‘live’ concerts are likely to be repeated over such movie effects.”

Williams’ latest work at the time was on Death Becomes Her. Among his calmer predictions was that as CGI became increasingly sophisticated the boundary between computer-generated characters and enhancements would become invisible. Thirty years on, the big excitement recently has been Harrison Ford’s de-aging for Indiana Jones and the Dial of Destiny. That used CGI, AI, and other tools to digitally swap in his face from 1980s footage.

Side note: in talking about the Ford work to Wired, ILM supervisor Andrew Whitehurst, exactly like Williams in 1992, called the new technology “another pencil”.

Williams also predicted endless legal fights over copyright and other rights. That at least was spot-on; AI and the perpetual reuse of retained footage without further payment is part of what the recent SAG-AFTRA strikes were about.

Yet the problem here isn’t really technology; it’s the incentives. Hollywood’s businessfolk have an eternal desire to guarantee their return on investment, and they think recycling old successes is the safest way to do that. Closer to digital rights, law enforcement always wants greater access to private communications; the frustration is that incoming generations of politicians don’t understand the laws of mathematics any better than their predecessors in the 1990s.

Many of the speakers focused on the issue of getting government to listen to and understand the limits of technology. Increasingly, though, a new problem is that, as Bruce Schneier writes in his latest book, A Hacker’s Mind, everyone has learned to think like hackers and subvert the systems they’re supposed to protect. The Silicon Valley mantra of “ask forgiveness, not permission” has become pervasive, whether it’s a technology platform deciding to collect masses of data about us or a police force deciding to stick a live facial recognition pilot next to Oxford Circus tube station. Except no one asks for forgiveness either.

Five years ago, at FIPR’s 20th anniversary, when GDPR was new, Anderson predicted (correctly) that the battles over encryption would move to device access. Today, it’s less clear what’s next. Facial recognition represents a step change; it overrides consent and embeds distrust in our public infrastructure.

If I were to predict the battles of the next five years, I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.

Illustrations: The liquid metal man in Terminator 2 reconstituting itself.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Faking it

I have finally figured out what benefit exTwitter gets from its new owner’s decision to strip out the headlines from linked third-party news articles: you cannot easily tell the difference between legitimate links and ads. Both have big unidentified pictures, and if you forget to look for the little “Ad” label at the top right or check the poster’s identity to make sure it’s someone you actually follow, it’s easy to inadvertently lessen the financial losses accruing to said owner by – oh, the shame and horror – clicking on that ad. This is especially true because the site has taken to injecting these ads with increasing frequency into the carefully curated feed that until recently didn’t have this confusion. Reader, beware.

***

In all the discussion of deepfakes and AI-generated bullshit texts, did anyone bring up the possibility of datafakes? Nature highlights a study in which researchers created a fake database to provide evidence for concluding that one of two surgical procedures is better than the other. This is nasty stuff. The rising numbers of retracted papers have already shown serious problems with peer review (which are not new, but are getting worse). To name just a couple: reviewers are unpaid and often overworked, and what they look for are scientific advances, not fraud.

In the UK, Ben Goldacre has spearheaded initiatives to improve the quality of published research. A crucial part of this is ensuring people state in advance the hypothesis they’re testing, and publish the results of all trials, not just the ones that produce the researcher’s (or funder’s) preferred result.

Science is the best process we have for establishing an edifice of reliable knowledge. We desperately need it to work. As the dust settles on the week of madness at OpenAI, whose board was supposed to care more about safety than about its own existence, we need to get over being distracted by the dramas and the fears of far-off fantasy technology and focus on the fact that the people running the biggest computing projects by and large are not paying attention to the real and imminent problems their technology is bringing.

***

Callum Cant reports at the Guardian that Deliveroo has won a UK Supreme Court ruling that its drivers are self-employed and accordingly do not have the right to bargain collectively for higher pay or better working conditions. Deliveroo apparently won this ruling because of a technicality – its insertion of a clause that allows drivers to send a substitute in their place, an option that is rarely used.

Cant notes the health and safety risks to the drivers themselves, but what about the rest of us? A driver in their tenth hour of a seven-day-a-week grind doesn’t just put themselves at risk; they’re a risk to everyone they encounter on the roads. The way these things are going, if safety becomes a problem, instead of raising wages to allow drivers a more reasonable schedule and some rest, the likelihood is that these companies will turn to surveillance technology, as Amazon has.

In the US, this is what’s happened to truck drivers, and, as Karen Levy documents in her book, Data Driven, it’s counterproductive. Installing electronic logging devices into truckers’ cabs has led older, more experienced, and, above all, *safer* drivers to leave the profession, to be replaced with younger, less-experienced, and cheaper drivers with a higher appetite for risk. As Levy writes, improved safety won’t come from surveilling exhausted drivers; what’s needed is structural change to create better working conditions.

***

The UK’s covid inquiry has been livestreaming its hearings on government decision making for the last few weeks, and pretty horrifying they are, too. That’s true even if you don’t include former deputy chief medical officer Jonathan Van-Tam’s account of the threats of violence aimed at him and his family. They needed police protection for nine months and were advised to move out of their house – but didn’t want to leave their cat. Will anyone take the job of protecting public health if this is the price?

Chris Whitty, the UK’s Chief Medical Officer, said the UK was “woefully underprepared”, locked down too late, and made decisions too slowly. He was one of the polite ones.

Former special adviser Dominic Cummings (from whom no one expected politeness) said everyone called Boris Johnson a trolley, because, like a shopping trolley with the inevitable wheel pointing in the wrong direction, he was so inconsistent.

The government chief scientific adviser, Patrick Vallance, had kept a contemporaneous diary, which provided his unvarnished thoughts at the time, some of which were read out. Among them: Boris Johnson was obsessed with older people accepting their fate, unable to grasp the concept of doubling times or comprehend the graphs on the dashboard, and intermittently uncertain if “the whole thing” was a mirage.

Our leader envy in April 2020 seems correctly placed. To be fair, though: Whitty and Vallance, citing their interactions with their counterparts in other countries, both said that most countries had similar problems. And for the same reason: the leaders of democratic countries are generally not well-versed in science. As the Economist’s health policy editor, Natasha Loder, warned in early 2022: elect better leaders. Ask, she said, before you vote, “Are these serious people?” Words to keep in mind as we head toward the elections of 2024.

Illustrations: The medium Mina Crandon and the “materialized spirit hand” she produced during seances.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The one hundred

Among the highlights of this week’s hearings of the Covid Inquiry were comments made by Helen MacNamara, who was the deputy cabinet secretary during the relevant time, about the effect of the lack of diversity. The absence of women in the room, she said, led to a “lack of thought” about a range of issues, including dealing with childcare during lockdowns, the difficulties encountered by female medical staff in trying to find personal protective equipment that fit, and the danger lockdowns would inevitably pose when victims of domestic abuse were confined with their abusers. Also missing was anyone who could have identified issues for ethnic minorities, disabled people, and other communities. Even the necessity of continuing free school lunches was lost on the wealthy white men in charge, none of whom were ever poor enough to need them. Instead, MacNamara said, they spent “a disproportionate amount” of their time fretting about football, hunting, fishing, and shooting.

MacNamara’s revelations explain a lot. Of course a group with so little imagination about or insight into other people’s lives would leave huge, gaping holes. Arrogance would ensure they never saw those as failures.

I was listening to this while reading posts on Mastodon complaining that this week’s much-vaunted AI Safety Summit was filled with government representatives and techbros, but weak on human rights and civil society. I don’t see any privacy organizations on the guest list, for example, and only the largest technology platforms needed apply. Granted, the limit of 100 meant there wasn’t room for everyone. But these are all choices seemingly designed to make the summit look as important as possible.

From this distance, it’s hard to get excited about a bunch of bigwigs getting together to alarm us about a technology that, as even the UK government itself admits, may – most likely will – never happen. In the event, they focused on a glut of disinformation and disruption to democratic polls. Lots of people are thinking about the first of these, and the second needs local solutions. Many technology and policy experts are advocating openness and transparency in AI regulation.

Me, I’d rather they’d given some thought to how to make “AI” (any definition) sustainable, given the massive resources today’s math-and-statistics systems demand. And I would strongly favor a joint resolution to stop using these systems for surveillance and eliminate predictive systems that pretend to be able to spot potential criminals in advance or decide who is deserving of benefits, admission into retail stores, or parole. But this summit wasn’t about *us*.

***

A Mastodon post reminded me that November 2 – yesterday – was the 35th anniversary of the Morris Worm and therefore the 35th anniversary of the day I first heard of the Internet. Anniversaries don’t matter much, but any history of the Internet would include this now largely forgotten (or never-known) event.

Morris’s goals were pretty anodyne by today’s standards. He wanted, per Wikipedia, to highlight flaws in some computer systems. Instead, the worm replicated out of control and paralyzed parts of this obscure network that linked university and corporate research institutions, whose users now couldn’t work. It put the Internet on the front pages for the first time.

Morris became the first person to be convicted of a felony under the brand-new Computer Fraud and Abuse Act (1986); that didn’t stop him from becoming a tenured professor at MIT in 2006. The heroes of the day were the unsung people who worked hard to disable the worm and restore full functionality. But it’s the worm we remember.

It was another three years before I got online myself, in 1991, and two or three more years after that before I got direct Internet access via the now-defunct Demon Internet. Everyone has a different idea of when the Internet began, usually based on when they got online. For many of us, it was November 2, 1988, the day when the world learned how important this technology they had never heard of had already become.

***

This week also saw the first anniversary of Twitter’s takeover. Despite a variety of technical glitches and numerous user-hostile decisions, the site has not collapsed. Many people I used to follow are either gone or posting very little. Even though I’m not experiencing the increased abuse and disinformation I see widely reported, there’s diminishing reward for checking in.

There’s still little consensus on a replacement. About half of my Twitter list have settled in on Mastodon. Another third or so are populating Bluesky. I hear some are finding Threads useful, but until it has a desktop client I’m out (and maybe even then, given its ownership). A key issue, however, is that uncertainty about which site will survive (or “win”) leads many people to post the same thing on multiple services. But you don’t dare skip one just in case.

For both philosophical and practical reasons, I’m hoping more people will get comfortable on Mastodon. Any corporate-owned system will merely replicate the situation in which we become hostages to business interests who have as little interest in our welfare as Boris Johnson did according to MacNamara and other witnesses. Mastodon is not a safe harbor from horrible human behavior, but with no ads and no algorithm determining what you see, at least the system isn’t designed to profit from it.

Illustrations: Former deputy cabinet secretary Helen MacNamara testifying at the Covid Inquiry.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Other Pandemic

The Other Pandemic: How QAnon Contaminated the World
By James Ball
Bloomsbury Press
ISBN: 978-1-526-64255-4

One of the weirdest aspects of the January 6 insurrection at the US Capitol building was the mismatched variety of flags and causes represented: USA, Confederacy, Third Reich, Thin Blue Line, American Revolution, pirate, Trump. And in the midst: QAnon.

As journalist James Ball tells it in his new book, The Other Pandemic, QAnon is the perfect example of a modern, decentralized movement: it has no leader and no fixed ideology. Instead, it morphs to embrace the memes of the moment, drawing its force by renewing age-old conspiracy theories that never die. QAnon’s presence among all those flags – and popping up in demonstrations in many other countries – is a perfect example.

Charles Arthur’s 2021 book Social Warming used global warming as a metaphor for social media’s spread of anger and division. Ball prefers the metaphor of public health. The difference is subtle, but important: Arthur argued that social media became destabilizing because no one chose to stop it, where Ball’s characterization implies less agency. People have less choice about being infected with pathogens, no matter how careful they are.

Ball divides the book into four main sections reflecting the stages of a pandemic: emergence, infection, transmission, convalescence. He covers some of the same ground as Naomi Klein in her recent book Doppelganger. But Ball spent his adolescence goofing around on 4chan, where QAnon was later hatched, while Klein lets her personal story lead her into Internet fora. In other words, Klein writes about Internet culture from the outside in, while Ball writes from the inside out. Talia Lavin’s Culture Warlords, on the other hand, focused exclusively on investigating online hate.

“Goofing around” and “4chan” may sound incompatible, but as Ball tells it, in the early days after its founding in 2003, 4chan was anarchic and fun, with roots in gaming culture. Every online service I’ve known back to 1990 has had a corner like this, where ordinary rules of polite society were suspended and transgression was largely ironic, even if also obnoxious. The difference: 4chan’s culture spread well beyond its borders, and its dark side fuelled a global threat. The original QAnon posting arrived on 4chan in 2017, followed quickly by others. Detailed, seemingly knowledgeable, and full of questions for readers to “research”, they quickly attracted backers who propagated them onto much bigger sites like YouTube, which turned a niche audience of thousands into a mass audience of millions.

A key element of Ball’s metaphor is Richard Dawkins’ 1976 concept of memes: scraps of ideas that use us to replicate themselves, as biological viruses do. To extend the analogy, Ball argues that we shouldn’t blame – or dismiss as stupid – the people who get “infected” by QAnon.

This book represents an evolution for Ball. In 2017’s Post-Truth, he advocated fact-checking and teaching media literacy as key elements of the solution to the spread of misinformation. Here, he acknowledges that this approach is only a small part of containing a social movement that feeds on emotional engagement and doesn’t care about facts. In his conclusion, where he advocates prevention rather than cure and the adoption of multi-pronged strategies analogous to those we use to fight diseases like malaria, however, there are echoes of that trust in authority. I continue to believe the essential approach will be nearer to that of modern cybersecurity, similarly decentralized and mixing economics, the social sciences, psychology, and technology, among others. But this challenge is so big that no one metaphor is enough to contain it.