Selective enforcement

This week, as a rider to the 21st Century Peace Through Strength Act, which provides funding for defense in Ukraine, Israel, and Taiwan, the US Congress passed provisions banning the distribution of TikTok if owner ByteDance has not divested it within 270 days. President Joe Biden signed it into law on Wednesday, and, as Mike Masnick says at Techdirt, ByteDance’s lawsuit is expected imminently, largely on First Amendment grounds. The ACLU agrees. Similar arguments won when ByteDance challenged a 2023 Montana law.

For context: Pew Research says TikTok is the fifth-most popular social media service in the US. An estimated 150 million Americans – and 62% of 18-29-year-olds – use it.

The ban may not be a slam-dunk to fail in court. US law, including the Constitution, imposes many restrictions on foreign influence, from requiring those acting as foreign agents to register to requiring presidents to have been born US citizens. Until 2017, foreigners were barred from owning US broadcast networks.

So it seems to this non-lawyer as though a lot hinges on how the court defines TikTok and what precedents apply. This is the kind of debate that goes back to the dawn of the Internet: is a privately-owned service built of user-generated content more like a town square, a broadcaster, a publisher, or a local pub? “Broadcast”, whether over the air or via cable, implies being assigned a channel on a limited resource; this clearly doesn’t apply to apps and services carried over the presumably-infinite Internet. Publishing implies editorial control, which social media lacks. A local pub might be closest: privately owned, it’s where people go to connect with each other. “Congress shall make no law…abridging the freedom of speech”…but does that cover denying access to one “place” where speech takes place when there are many other options?

TikTok is already banned in Pakistan, Nepal, and Afghanistan, and also India, where it is one of 500 apps that have been banned since 2020. ByteDance will argue that the ban hurts US creators who use TikTok to build businesses. But as NPR reports, in India YouTube and Instagram rolled out short video features to fill the gap for hyperlocal content that the loss of TikTok opened up, and four years on creators have adapted to other outlets.

It will be more interesting if ByteDance claims the company itself has free speech rights. In a country where commercial companies and other organizations are deemed to have “free speech” rights entitling them to donate as much money as they want to political causes (as per the Supreme Court’s ruling in Citizens United v. Federal Election Commission), that might make a reasonable argument.

On the other hand, there is no question that this legislation is full of double standards. If another country sought to ban any of the US-based social media, American outrage would be deafening. If the issue is protecting the privacy of Americans against rampant data collection, then, as Free Press argues, pass a privacy law that will protect Americans from *every* service, not just this one. The claim that the ban is to protect national security is weakened by the fact that the Chinese government, like apparently everyone else, can buy data on US citizens even if it’s blocked from collecting it directly from ByteDance.

Similarly, if the issue is the belief that social media inevitably causes harm to teenagers, as author and NYU professor Jonathan Haidt insists in his new book, then again, why only pick on TikTok? Experts who have really studied this terrain, such as Danah Boyd and others, insist that Haidt is oversimplifying and pushing parents to deny their children access to technologies whose influence is largely positive. I’m inclined to agree; between growing economic hardship, expanding wars, and increasing climate disasters young people have more important things to be anxious about than social media. In any case, where’s the evidence that TikTok is a bigger source of harm than any other social medium?

Among digital rights activists, the most purely emotional argument against the TikTok ban revolves around the original idea of the Internet as an open network. Banning access to a service in one country (especially the country that did the most to promote the Internet as a vector for free speech and democratic values) is, in this view, a dangerous step toward the government control John Perry Barlow famously rejected in 1996. And yet, to increasing indifference, no-go signs are all over the Internet. *Six* years after GDPR came into force, Europeans are still blocked from many US media sites that can’t be bothered to comply with it. Many other media links don’t work because of copyright restrictions, and on and on.

The final double standard is this: a big element in the TikTok ban is the fear that the Chinese government, via its control over companies hosted there, will have access to intimate personal information about Americans. Yet for more than 20 years this has been the reality for non-Americans using US technology services outside the US: their data is subject to NSA surveillance. This, and the lack of redress for non-Americans, is what Max Schrems’ legal cases have been about. Do as we say, not as we do?

Illustrations: TikTok CEO Shou Zi Chew, at the European Commission in 2024 (by Lukasz Kobus at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Alabama never got the bomb

There is this to be said for nuclear weapons: they haven’t scaled. Since 1965, when Tom Lehrer warned about proliferation (“We’ll try to stay serene and calm | When Alabama gets the bomb”), a world of treaties, regulation, and deterrents has helped, but even if it hadn’t, building and updating nuclear weapons remains stubbornly expensive. (That said, the current situation is scary enough.)

The same will not be true of drones, James Patton Rogers explained in a recent talk at King’s College London about his new book, Precision: A History of American Warfare. Already, he says, drones are within reach for non-governmental actors such as Mexican drug cartels. At the BBC, Jonathan Marcus estimated in February 2022 that more than 100 nations and non-state actors already have combat drones and that these systems are proliferating rapidly. The brief moment in which the US and Israel had an exclusive edge is already gone; Rogers says Iran and Turkey are “drone powers”. Back to the BBC in 2022: Marcus writes that some terrorist groups had already been able to build attack drone systems using commercial components for a few hundred dollars. Rogers put the number of countries with drone capability in 2023 at 113, plus 65 armed groups. He also called them one of the “greatest threats to state security”, noting the speed and abruptness with which they’ve flipped from protective to threatening, and their potential for “assassinations, strikes, saturation attacks”.

Rogers, who calls his book an “intellectual history”, traces the beginnings of precision to the end of the long, muddy, casualty-filled conflict of World War I. Never again: instead, remote attacks on military-industrial targets that limit troops on the ground and loss of life. The arrival of the atomic bomb and Russia’s development of same changed focus to the Dr Strangelove-style desire for the technology to mount massive retaliation. John F. Kennedy successfully campaigned on the missile gap. (In this part of Rogers’ presentation, it was impossible not to imagine how effective this amount of energy could have been if directed toward climate change…)

The 1990s and the Gulf War brought a revival of precision in the form of the first cruise missiles and the first drones. But as long ago as 1988 there were warnings that the US could not monopolize drones and they would become a threat. “We need an international accord to control drone proliferation,” Rogers said.

But the threat to state security was not Rogers’ answer when an audience member asked him, “What keeps you awake at night?”

“Drone mass killings targeting ethnic diasporas in cities.”

Authoritarian governments have long reached out to control opposition outside their borders. In 1974, I rented an apartment from the Greek owner of a local highly-regarded restaurant. A day later, a friend reacted in horror: didn’t I know the restaurateur was persona-non-patronize because he had reported Greek student protesters in Ithaca, New York to the military junta then in power, and there had been consequences for their families back home? No, I did not.

As an informant, however, my landlord’s powers were limited. He could attend and photograph protests; even if he couldn’t identify the students, he could still send their pictures. But he couldn’t amass comprehensive location data tracking their daily lives, operate a facial recognition system, or monitor them on social media and infer their social graphs. A modern authoritarian government equipped with Internet connections can do all of that and more, and the data it can’t gather itself it can obtain by purchase, contract, theft, hacking, or compulsion.

In Canada, opponents of Chinese Communist Party policies report harassment and intimidation. Freedom House reports that China’s transnational repression also includes spyware, digital threats, physical assault, and cooption of other countries, all escalating since 2014. There’s no reason for this sort of thing to be limited to the Chinese (and Russians); Citizen Lab has myriad examples of governments’ use of spyware to target journalists, political opponents, and activists, inside or outside the countries where they’re active.

Today, even in democratic countries there is an ongoing trend toward increased and more militaristic surveillance of migrants and borders. In 2021, Statewatch reported on the militarization of the EU’s borders along the Mediterranean, including a collaboration between Airbus and two Israeli companies to use drones to intercept migrant vessels. Another workshop that same year made plain the way migrants are being dataveilled by both governments and the aid agencies they rely on for help. In 2022, the courts ordered the UK government to stop seizing the smartphones belonging to migrants arriving in small boats.

Most people remain unaware of this unless some politician boasts about it as part of a tough-on-immigration platform. In general, rights for any kind of foreigners – immigrants, ethnic minorities – are a hard sell, if only because non-citizens have no vote, and an even harder one against the headwind of “they are not us” rhetoric. Threats of the kind Rogers imagined are not the sort nations are in the habit of protecting against.

It isn’t much of a stretch to imagine all those invasive technologies being harnessed to build a detailed map of particular communities. From there, given affordable drones, you just need to develop enough malevolence to want to kill them off, and be the sort of country that doesn’t care if the rest of the world despises you for it.

Illustrations: British migrants to Australia in 1949 (via Wikimedia).


The good fight

This week saw a small gathering to celebrate the 25th anniversary (more or less) of the Foundation for Information Policy Research, a think tank led by Cambridge and Edinburgh University professor Ross Anderson. FIPR’s main purpose is to produce tools and information that campaigners for digital rights can use. Obdisclosure: I am a member of its advisory council.

What, Anderson asked those assembled, should FIPR be thinking about for the next five years?

When my turn came, I said something about the burnout that comes to many campaigners after years of fighting the same fights. Digital rights organizations – Open Rights Group, EFF, Privacy International, to name three – find themselves trying to explain the same realities of math and technology decade after decade. Small wonder so many burn out eventually. The technology around the debates about copyright, encryption, and data protection has changed over the years, but in general the fundamental issues have not.

In part, this is because what people want from technology doesn’t change much. A tangential example of this presented itself this week, when I read the following in the New York Times, written by Peter C Baker about the “Beatles'” new mash-up recording:

“So while the current legacy-I.P. production boom is focused on fictional characters, there’s no reason to think it won’t, in the future, take the form of beloved real-life entertainers being endlessly re-presented to us with help from new tools. There has always been money in taking known cash cows — the Beatles prominent among them — and sprucing them up for new media or new sensibilities: new mixes, remasters, deluxe editions. But the story embedded in “Now and Then” isn’t “here’s a new way of hearing an existing Beatles recording” or “here’s something the Beatles made together that we’ve never heard before.” It is Lennon’s ideas from 45 years ago and Harrison’s from 30 and McCartney and Starr’s from the present, all welded together into an officially certified New Track from the Fab Four.”

I vividly remembered this particular vision of the future because just a few days earlier I’d had occasion to look it up – a March 1992 interview for Personal Computer World with the ILM animator Steve Williams, who the year before had led the team that produced the liquid metal man for the movie Terminator 2. Williams imagined CGI would become pervasive (as it has):

“…computer animation blends invisibly with live action to create an effect that has no counterpart in the real world. Williams sees a future in which directors can mix and match actors’ body parts at will. We could, he predicts, see footage of dead presidents giving speeches, films starring dead or retired actors, even wholly digital actors. The arguments recently seen over musicians who lip-synch to recordings during supposedly ‘live’ concerts are likely to be repeated over such movie effects.”

Williams’ latest work at the time was on Death Becomes Her. Among his calmer predictions was that as CGI became increasingly sophisticated the boundary between computer-generated characters and enhancements would become invisible. Thirty years on, the big recent excitement has been Harrison Ford’s de-aging for Indiana Jones and the Dial of Destiny, which used CGI, AI, and other tools to digitally swap in his face from 1980s footage.

Side note: in talking about the Ford work to Wired, ILM supervisor Andrew Whitehurst, exactly like Williams in 1992, called the new technology “another pencil”.

Williams also predicted endless legal fights over copyright and other rights. That at least was spot-on; AI and the perpetual reuse of retained footage without further payment is part of what the recent SAG-AFTRA strikes were about.

Yet the problem here isn’t really technology; it’s the incentives. Hollywood businessfolk’s eternal desire is to guarantee their return on investment, and they think recycling old successes is the safest way to do that. Closer to digital rights, law enforcement always wants greater access to private communications; the frustration is that incoming generations of politicians don’t understand the laws of mathematics any better than their predecessors did in the 1990s.

Many of the speakers focused on the issue of getting government to listen to and understand the limits of technology. Increasingly, though, a new problem is that, as Bruce Schneier writes in his latest book, A Hacker’s Mind, everyone has learned to think like hackers and subvert the systems they’re supposed to protect. The Silicon Valley mantra of “ask forgiveness, not permission” has become pervasive, whether it’s a technology platform deciding to collect masses of data about us or a police force deciding to stick a live facial recognition pilot next to Oxford Circus tube station. Except no one asks for forgiveness either.

Five years ago, at FIPR’s 20th anniversary, when GDPR was new, Anderson predicted (correctly) that the battles over encryption would move to device access. Today, it’s less clear what’s next. Facial recognition represents a step change; it overrides consent and embeds distrust in our public infrastructure.

If I were to predict the battles of the next five years, I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.

Illustrations: The liquid metal man in Terminator 2 reconstituting itself.


Faking it

I have finally figured out what benefit exTwitter gets from its new owner’s decision to strip out the headlines from linked third-party news articles: you cannot easily tell the difference between legitimate links and ads. Both have big unidentified pictures, and if you forget to look for the little “Ad” label at the top right or check the poster’s identity to make sure it’s someone you actually follow, it’s easy to inadvertently lessen the financial losses accruing to said owner by – oh, the shame and horror – clicking on that ad. This is especially true because the site has taken to injecting these ads with increasing frequency into the carefully curated feed that until recently didn’t have this confusion. Reader, beware.

***

In all the discussion of deepfakes and AI-generated bullshit texts, did anyone bring up the possibility of datafakes? Nature highlights a study in which researchers created a fake database to provide evidence for concluding that one of two surgical procedures is better than the other. This is nasty stuff. The rising numbers of retracted papers already showed serious problems with peer review (which are not new, but are getting worse). To name just a couple: reviewers are unpaid and often overworked, and what they look for are scientific advances, not fraud.

In the UK, Ben Goldacre has spearheaded initiatives to improve the quality of published research. A crucial part of this is ensuring that people state in advance the hypothesis they’re testing and publish the results of all trials, not just the ones that produce the researcher’s (or funder’s) preferred result.

Science is the best process we have for establishing an edifice of reliable knowledge. We desperately need it to work. As the dust settles on the week of madness at OpenAI, whose board was supposed to care more about safety than about its own existence, we need to get over being distracted by the dramas and the fears of far-off fantasy technology and focus on the fact that the people running the biggest computing projects by and large are not paying attention to the real and imminent problems their technology is bringing.

***

Callum Cant reports at the Guardian that Deliveroo has won a UK Supreme Court ruling that its drivers are self-employed and accordingly do not have the right to bargain collectively for higher pay or better working conditions. Deliveroo apparently won this ruling because of a technicality – its insertion of a clause that allows drivers to send a substitute in their place, an option that is rarely used.

Cant notes the health and safety risks to the drivers themselves, but what about the rest of us? A driver in his tenth hour of a seven-day-a-week grind doesn’t just put themselves at risk; they’re a risk to everyone they encounter on the roads. The way these things are going, if safety becomes a problem, instead of raising wages to allow drivers a more reasonable schedule and some rest, the likelihood is that these companies will turn to surveillance technology, as Amazon has.

In the US, this is what’s happened to truck drivers, and, as Karen Levy documents in her book, Data Driven, it’s counterproductive. Installing electronic logging devices into truckers’ cabs has led older, more experienced, and, above all, *safer* drivers to leave the profession, to be replaced with younger, less-experienced, and cheaper drivers with a higher appetite for risk. As Levy writes, improved safety won’t come from surveilling exhausted drivers; what’s needed is structural change to create better working conditions.

***

The UK’s covid inquiry has been livestreaming its hearings on government decision making for the last few weeks, and pretty horrifying they are, too. That’s true even if you don’t include former deputy chief medical officer Jonathan Van-Tam’s account of the threats of violence aimed at him and his family. They needed police protection for nine months and were advised to move out of their house – but didn’t want to leave their cat. Will anyone take the job of protecting public health if this is the price?

Chris Whitty, the UK’s Chief Medical Officer, said the UK was “woefully underprepared”, locked down too late, and made decisions too slowly. He was one of the polite ones.

Former special adviser Dominic Cummings (from whom no one expected politeness) said everyone called Boris Johnson a trolley, because, like a shopping trolley with the inevitable wheel pointing in the wrong direction, he was so inconsistent.

The government chief scientific adviser, Patrick Vallance, had kept a contemporaneous diary, which provided his unvarnished thoughts at the time, some of which were read out. Among them: Boris Johnson was obsessed with older people accepting their fate, unable to grasp the concept of doubling times or comprehend the graphs on the dashboard, and intermittently uncertain if “the whole thing” was a mirage.

Our leader envy in April 2020 seems correctly placed. To be fair, though: Whitty and Vallance, citing their interactions with their counterparts in other countries, both said that most countries had similar problems, and for the same reason: the leaders of democratic countries are generally not well-versed in science. As the Economist’s health policy editor, Natasha Loder, warned in early 2022: elect better leaders. Ask, she said, before you vote, “Are these serious people?” Words to keep in mind as we head toward the elections of 2024.

Illustrations: The medium Mina Crandon and the “materialized spirit hand” she produced during seances.


The two of us

The-other-Wendy-Grossman-who-is-a-journalist came to my attention in the 1990s when, as a student at Duke University, she wrote a story about something Internettish. Eventually, I got email meant for her (which I duly forwarded) and, once, a transatlantic phone call from a very excited but misinformed PR person. She got married, changed her name, and faded out of my view.

By contrast, Naomi Klein’s problem has only inflated over time. The “doppelganger” in her new book, Doppelganger: A Trip into the Mirror World, is “Other Naomi” – that is, the American author Naomi Wolf, whose career launched in 1990 with The Beauty Myth. “Other Naomi” has spiraled into conspiracy theories, anti-government paranoia, and wild unscientific theories. Klein is Canadian; her books include No Logo (1999) and The Shock Doctrine (2007). There is, as Klein acknowledges, a lot of *seeming* overlap, in that a keyword search might surface both.

I had them confused myself until Wolf’s 2019 appearance on BBC radio, when a historian dished out a live-on-air teardown of the basis of her latest book. This author’s nightmare is the inciting incident Klein believes turned Wolf from liberal feminist author into a right-wing media star. The publisher withdrew and pulped the book, and Wolf herself was globally mocked. What does a high-profile liberal who’s lost her platform do now?

When the covid pandemic came, Wolf embraced every available mad theory, and her liberal past made her a darling of the extremist right-wing media. Increasingly obsessed with following Wolf’s exploits, which often popped up in her online mentions, Klein discovered that social media algorithms were exacerbating the confusion. She began to silence herself, fearing that any response she made would increase the algorithms’ tendency to conflate Naomis. She also abandoned an article deploring Bill Gates’s stance protecting corporate patents instead of spreading vaccines as widely as possible. (The Gates Foundation later changed its position.)

Klein tells this story honestly, admitting to becoming addictively obsessed, promising to stop, then “relapsing” the first time she was alone in her car.

The appearance of overlap through keyword similarities is not limited to the two Naomis, as Klein finds on further investigation. YouTube stars like Steve Bannon, who founded Breitbart and served as Donald Trump’s chief strategist during his first months in the White House, wrote this playbook: seize on under-acknowledged legitimate grievances, turn them into right-wing talking points, and recruit the previously-ignored victims as allies and supporters. The lab leak hypothesis, the advice given by scientific authorities, why shopping malls were open when schools were closed, the profiteering (she correctly calls out the UK), the behavior of corporate pharma – all of these were and are valid topics for investigation, discussion, and debate. Their twisted adoption as right-wing causes made many on the side of public health harden their stance to avoid sounding like “one of them”. The result: words lost their meaning and their power.

These are problems no amount of content moderation or online safety can solve. And even if it could, is it right to ask underpaid workers in what Klein terms the “Shadowlands” to clean up our society’s nasty side so we don’t have to see it?

Klein begins with a single doppelganger, then expands into psychology, movies, TV, and other fiction, and ends by navigating expanding circles; the extreme right-wing media’s “Mirror World” is our society’s Mr Hyde. As she warns, those who live in what a friend termed “my blue bubble” may never hear about the media and commentators she investigates. After Wolf’s disgrace on the BBC, she “disappeared”, in reality going on to develop a much bigger platform in the Mirror World. But “they” know and watch us, and use our blind spots to expand their reach and recruit new and unexpected sectors of the population. Klein writes that she encounters many people who’ve “lost” a family member to the Mirror World.

This was the ground explored in 2015 by the filmmaker Jen Senko, who found the same thing when researching her documentary The Brainwashing of My Dad. Senko’s exploration leads from the 1960s John Birch Society through to Rush Limbaugh and Roger Ailes’s intentional formation of Fox News. Klein here is telling the next stage of that same story. Mirror World is not an accident of technology; it was a plan, and then technology came along and helped build it further in new directions.

As Klein searches for an explanation for what she calls “diagonalism” – the phenomenon that sees a former Obama voter now vote for Trump, or a former liberal feminist shrug at the Dobbs decision – she finds it possible to admire the Mirror World’s inhabitants for one characteristic: “they still believe in the idea of changing reality”.

This is the heart of much of the alienation I see in some friends: those who want structural change say today’s centrist left wing favors the status quo, while those who are more profoundly disaffected dismiss the Bidens and Clintons as almost as corrupt as Trump. The pandemic increased their discontent; it did not take long for early optimistic hopes of “build back better” to fade into “I want my normal”.

Klein ends with hope. As both the US and UK wind toward the next presidential/general election, it’s in scarce supply.

Illustrations: Charlie Chaplin as one of his doppelgangers in The Great Dictator (1940).


Own goals

There’s no point in saying I told you so when the people you’re saying it to got the result they intended.

At the Guardian, Peter Walker reports the Electoral Commission’s finding that at least 14,000 people were turned away from polling stations in May’s local elections because they didn’t have the right ID as required under the new voter ID law. The Commission thinks that’s a huge underestimate; 4% of people who didn’t vote said it was because of voter ID – which Walker suggests could mean 400,000 were deterred. Three-quarters of those lacked the right documents; the rest opposed the policy. The demographics of this will be studied more closely in a report due in September, but early indications are that the policy disproportionately deterred people with disabilities, people from certain ethnic groups, and people who are unemployed.

The fact that the Conservatives, who brought in this policy, lost big time in those elections doesn’t change its wrongness. But it did lead the MP Jacob Rees-Mogg (Con-North East Somerset) to admit that this was an attempt to gerrymander the vote that backfired because older voters, who are more likely to vote Conservative, also disproportionately don’t have the necessary ID.

***

One of the more obscure sub-industries is the business of supplying ad services to websites. One such little-known company is Criteo, which provides interactive banner ads that are generated based on the user’s browsing history and behavior using a technique known as “behavioral retargeting”. In 2018, Criteo was one of seven companies listed in a complaint Privacy International and noyb filed with three data protection authorities – the UK, Ireland, and France. In 2020, the French data protection authority, CNIL, launched an investigation.

This week, CNIL issued Criteo with a €40 million fine over failings in how it gathers user consent, a ruling noyb calls a major blow to Criteo’s business model.

It’s good to see the legal actions and fines beginning to reach down into adtech’s underbelly. It’s also worth noting that the CNIL was willing to fine a *French* company to this extent. It makes it harder for the US tech giants to claim that the fines they’re attracting are just anti-US protectionism.

***

Also this week, the US Federal Trade Commission announced it’s suing Amazon, claiming the company enrolled millions of US consumers into its Prime subscription service through deceptive design and sabotaged their efforts to cancel.

“Amazon used manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions,” the FTC writes.

I’m guessing this is one area where data protection laws have worked. In my UK-based ultra-brief Prime outings to watch the US Open tennis, canceling has taken at most two clicks. I don’t recognize the tortuous process Business Insider documented in 2022.

***

It has long been no secret that the secret behind AI is human labor. In 2019, Mary L. Gray and Siddharth Suri documented this in their book Ghost Work. Platform workers label images and other content, annotate text, and solve CAPTCHAs to help train AI models.

At MIT Technology Review, Rhiannon Williams reports that platform workers are using ChatGPT to speed up their work and earn more. A team of researchers from the Swiss Federal Institute of Technology found in a study (PDF) that between 33% and 46% of the 44 workers they tested with a request to summarize 16 extracts from medical research papers used AI models to complete the task.

It’s hard not to feel a little gleeful that today’s “AI” is already eating itself via a closed feedback loop. It’s not good news for platform workers, though, because the most likely consequence will be increased monitoring to force them to show their work.

But this is yet another case in which computer people could have learned from their own history. In 2008, researchers at Google published a paper suggesting that Google search data could be used to spot flu outbreaks. Sick people searching for information about their symptoms could provide real-time warnings ten days earlier than the Centers for Disease Control could.

This actually worked, some of the time. However, as Kaiser Fung reported at Harvard Business Review in 2014, as early as 2009 Google Flu Trends missed the swine flu pandemic; in 2012, researchers found that it had overestimated the prevalence of flu for 100 out of the previous 108 weeks. More data is not necessarily better, Fung concluded.

In 2013, as David Lazer and Ryan Kennedy reported for Wired in 2015 in discussing their investigation into the failure of this idea, GFT missed by 140% (without explaining what that means). Lazer and Kennedy found that Google’s algorithm was vulnerable to poisoning by unrelated seasonal search terms and by search terms that were correlated purely by chance, and that it failed to take into account changing user behavior, as when Google introduced autosuggest and added health-related search terms. The “availability” cognitive bias also played a role: when flu is in the news, searches go up whether or not people are sick.

While the parallels aren’t exact, large language modelers could have drawn the lesson that users can poison their models. ChatGPT’s arrival for widespread use will inevitably thin out the proportion of text that is human-written – and taint the well from which LLMs drink. Everyone imagines the next generation’s increased power. But it’s equally possible that the next generation will degrade as the percentage of AI-generated data rises.

Illustrations: Drunk parrot seen in a Putney garden (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Breaking badly

This week, the Online Safety Bill reached the House of Lords, which will consider 300 amendments. There are lots of problems with this bill, but the one that continues to have the most campaigning focus is the age-old threat to require access to end-to-end encrypted messaging services.

At his blog, security consultant Alec Muffett predicts the bill will fail in implementation if it passes. For one thing, he cites the argument made by Richard Allan, Baron Allan of Hallam, that the UK government wants the power to order decryption but will likely only ever use it as a threat to force the technology companies to provide other useful data. Meanwhile, the technology companies have pushed back with an open letter saying they will withdraw their encrypted products from the UK market rather than weaken them.

In addition, Muffett believes the legally required secrecy when a service provider is issued with a Technical Capability Notice to provide access to communications, which was devised for the legacy telecommunications world, is unworkable in today’s world of computers and smartphones. Secrecy is no longer possible, given the many researchers and hackers who make it their job to study changes to apps, and who would surely notice and publicize new decryption capabilities. The government will be left with the choice of alienating the public or failing to deliver its stated objectives.

At Computer Weekly, Bill Goodwin points out that undermining encryption will affect anyone communicating with anyone in Britain, including the Ukrainian military communicating with the UK’s Ministry of Defence.

Meanwhile, this week Ed Caesar reports at The New Yorker on law enforcement’s successful efforts to penetrate communications networks protected by Encrochat and Sky ECC. It’s a reminder that there are other choices besides opening up an entire nation’s communications to attack.

***

This week also saw the disappointing damp-squib settlement of the lawsuit brought by Dominion Voting Systems against Fox News. Disappointing, because it leaves Fox and its hosts free to go on wreaking daily havoc across America by selling their audience rage-enhanced lies without even an apology. The payment that Fox has agreed to – $787 million – sounds like a lot, but a) the company can afford it given the size of its cash pile, and b) most of it will likely be covered by insurance.

If Fox’s major source of revenues were advertising, these defamation cases – still to come is a similar case brought by Smartmatic – might make their mark by alienating advertisers, as has been happening with Twitter. But it’s not; instead, Fox is supported by the fees cable companies pay to carry the channel. Even subscribers who never watch it are paying monthly for Fox News to go on fomenting discord and spreading disinformation. And Fox is seeking a raise to $3 per subscriber, which would mean more than $1.8 billion a year just from affiliate revenue.

All of that insulates the company from boycotts, alienated advertisers, and even the next tranche of lawsuits. The only feedback loop in play is ratings – and Fox News remains the most-watched basic cable network.

This system could not be more broken.

***

Meanwhile, an era is ending: Netflix will mail out its last rental DVD in September. As Chris Stokel-Walker writes at Wired, the result will be to shrink the range of content available by tens of thousands of titles because the streaming library is a fraction of the size of the rental library.

This reality seems backwards. Surely streaming services ought to have the most complete libraries. But licensing and lockups mean that Netflix can only host for streaming what content owners decree it may, whereas with the mail rental service once Netflix had paid the commercial rental rate to buy the DVD it could stay in the catalogue until the disk wore out.

The upshot is yet another data point that makes pirate services more attractive: no ads, easy access to the widest range of content, and no licensing deals to get in the way.

***

In all the professions people have been suggesting are threatened by large language model-based text generation – journalism, in particular – no one to date has listed fraudulent spiritualist mediums. And yet…

The family of Michael Schumacher is preparing legal action against the German weekly Die Aktuelle for publishing an interview with the seven-time Formula 1 champion. Schumacher has been out of the public eye since suffering a brain injury while skiing in 2013. The “interview” is wholly fictitious, the quotes created by prompting an “AI” chat bot.

Given my history as a skeptic, my instinctive reaction was to flash on articles in which mediums produced supposed quotes from dead people, all of which tended to be anodyne representations bereft of personality. Dressing this up in the trappings of “AI” makes such fakery no less reprehensible.

An article in the Washington Post examines Google’s C4 data set scraped from 15 million websites and used to train several of the highest profile large language models. The Post has provided a search engine, which tells us that my own pelicancrossing.net, which was first set up in 1996, has contributed 160,000 words or phrases (“tokens”), or 0.0001% of the total. The obvious implication is that LLM-generated fake interviews with famous people can draw on things they’ve actually said in the past, mixing falsity and truth into a wasteland that will be difficult to parse.

Illustrations: The House of Lords in 2011 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Twitter.

Gap year

What do Internet users want?

First, they want meaningful access. They want usability. They want not to be scammed, manipulated, lied to, exploited, or cheated.

It’s unlikely that any of the ongoing debates in either the US or UK will deliver any of those.

Most prominently, this week saw the end of two frustrating years in which the US Senate failed to confirm the appointment of Public Knowledge co-founder and EFF board member Gigi Sohn to the Federal Communications Commission. In her withdrawal statement, Sohn blamed a smear campaign by “legions of cable and media industry lobbyists, their bought-and-paid-for surrogates, and dark money political groups with bottomless pockets”.

Whether you agree or not, the result remains that for the last two years and for the foreseeable future the FCC will remain deadlocked and problems such as the US’s lack of competition and patchy broadband provision will remain unsolved.

Meanwhile, US politicians continue obsessing about whether and how to abort-retry-fail Section 230, that pesky 26-word law that relieves Internet hosts of liability for third-party content. This week it was the turn of the Senate Judiciary Committee. In its hearing, the Internet Society’s Andrew Sullivan stood out for trying to get across to lawmakers that S230 wasn’t – couldn’t have been – intended as protectionism for the technology giants because they did not exist when the law was passed. It’s fair to say that S230 helped allow the growth of *some* Internet companies – those that host user-generated content. That means all the social media sites as well as web boards and blogs and Google’s search engine and Amazon’s reviews, but neither Apple nor Netflix makes its living that way. Attacking the technology giants is a popular pastime just now, but throwing out S230 without due attention to the unexpected collateral damage will just make them bigger.

Also on the US political mind is a proposed ban on TikTok. It’s hard to think of a move that would more quickly alienate young people. Plus, it fails to get at the root problem. If the fear is that TikTok gathers data on Americans and sends it home to China for use in designing manipulative programs…well, why single out TikTok when it lives in a forest of US companies doing the same kind of thing? As Karl Bode writes at TechDirt, if you really want to mitigate that threat, rein in the whole forest. Otherwise, if China really wants that data it can buy it on the open market.

Meanwhile, in the UK, as noted last week, opposition continues to increase to the clauses in the Online Safety bill proposing to undermine end-to-end encryption by requiring platforms to proactively scan private messages. This week, WhatsApp said it would withdraw its app from the UK rather than comply. However important the UK market is, it can’t possibly be big enough for Meta to risk fines of 4% of global revenues and criminal sanctions for executives. The really dumb thing is that everyone within the government uses WhatsApp because of its convenience and security, and we all know it. Or do they think they’ll have special access denied to the rest of the population?

Also in the UK this week, the Data Protection and Digital Information bill returned to Parliament for its second reading. This is the UK’s post-Brexit attempt to “take control” by revising the EU’s General Data Protection Regulation; it was delayed during Liz Truss’s brief and destructive outing as prime minister. In its statement, the government talks about reducing the burdens on businesses without any apparent recognition that divergence from GDPR is risky for anyone trading internationally and complying with two regimes must inevitably be more expensive than complying with one.

The Open Rights Group and 25 other civil society organizations have written a letter (PDF) laying out their objections, noting that the proposed bill, in line with other recent legislation that weakens civil rights, weakens oversight and corporate accountability, lessens individuals’ rights, and weakens the independence of the Information Commissioner’s Office. “Co-designed with businesses from the start” is how the government describes the bill. But data protection law was not supposed to be designed for business – or, as Peter Geoghegan says at the London Review of Books, to aid SLAPP suits; it is supposed to protect our human rights in the face of state and corporate power. As the cryptography pioneer Whit Diffie said in 2019, “The problem isn’t privacy; it’s corporate malfeasance.”

The most depressing thing about all of these discussions is that the public interest is the loser in all of them. It makes no sense to focus on TikTok when US companies are just as aggressive in exploiting users’ data. It makes no sense to focus solely on the technology giants when the point of S230 was to protect small businesses, non-profits, and hobbyists. And it makes no sense to undermine the security afforded by end-to-end encryption when it’s essential for protecting the vulnerable people the Online Safety bill is supposed to help. In a survey, EDRi finds that compromising secure messaging is highly unpopular with young people, who clearly understand the risks to political activism and gender identity exploration.

One of the most disturbing aspects of our politics in this century so far is the widening gap between what people want, need, and know and the things politicians obsess about. We’re seeing this reflected in Internet policy, and it’s not helpful.

Illustrations: Andrew Sullivan, president of the Internet Society, testifying in front of the Senate Judiciary Committee.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.

Ghostwritten

This week’s deliberate leak of 100,000 WhatsApp messages sent between the retiring MP Matt Hancock (Con-West Suffolk) and his cabinet colleagues and scientific advisers offers several lessons for the future. Hancock was the health minister during the first year of the covid-19 pandemic, but was forced to resign in June 2021, when he was caught on a security camera snogging an adviser in contravention of the social distancing rules.

The most ignored lesson relates to cybersecurity, and is simple: electronic messages are always at risk of copying and disclosure.

This leak happened to coincide with the revival of debates around the future of strong encryption in the UK. First, the pending Online Safety bill has provisions that taken together would undermine all encrypted communications. Simultaneously, a consultation on serious and organized crime proposes to criminalize “custom” encryption devices. A “dictionary attack”, Tim Cushing calls this idea at Techdirt, in that the government will get to define the crime at will.

The Online Safety Bill is the more imminent problem; it has already passed the House of Commons and is at the committee stage in the House of Lords. The bill requires service providers to protect children by proactively removing harmful content, whether public or private, and threatens criminal liability for executives of companies that fail to comply.

Signal, which is basically the same as WhatsApp without the Facebook ownership, has already said it will leave the country if the Online Safety bill passes with the provisions undermining encryption intact.

It’s hard to see what else Signal could do. It’s not a company that has to weigh its principles against the loss of revenue. Instead, as a California non-profit, its biggest asset is the trust of its user base, and staying in a country that has outlawed private communications would kill that off at speed. In threatening to leave it has company: the British secure communications company Element, which said the provisions would taint any secure communications product coming out of the UK – presumably even for its UK customers, such as the Ministry of Defence.

What the Hancock leak reminds us, however, is that encryption, even when appropriately strong and applied end-to-end, is not enough by itself to protect security. You must also be able to trust everyone in the chain to store the messages safely and respect their confidentiality. The biggest threat is careless or malicious insiders, who can undermine security in all sorts of ways. Signal (as an example) provides the ability to encrypt the message database, disappear messages on an automated schedule, set password protection, and so on. If you’re an activist in a hostile area, you may be diligent about turning all these on. But you have no way of knowing if your correspondents are just as careful.

In the case at hand, Hancock gave the messages to the ghost writer for his December 2022 book Pandemic Diaries, Isabel Oakeshott, after requiring her to sign a non-disclosure agreement that he must have thought would protect him, if not his colleagues, from unwanted disclosures. Oakeshott, who claims she acted in the public interest, decided to give the messages to the Daily Telegraph, which is now mining them for stories.

Digression: whatever Oakeshott’s personal motives, there is certainly public interest in these messages. The tone of many quoted exchanges confirms the public perception of the elitism and fecklessness of many of those in government. More interesting is the close-up look at decision making in conditions of uncertainty, which to some, with the benefit of hindsight, looks like ignorance and impatience. It’s astonishing how quickly people have forgotten how much we didn’t know. As mathematician Christina Pagel told the BBC’s Newsnight, you can’t wait for more evidence when the infection rate is doubling every four days.

What they didn’t know and when they didn’t know it will be an important part of piecing together what actually happened. The mathematician Kit Yates has dissected another exchange, in which Boris Johnson queries his scientific advisers about fatality rates. Yates argues that in assessing this exchange timing is everything. Had it been in early 2020, it would be understandable to confuse infection fatality rates and case fatality rates, though less so to confuse fractions (0.04) and percentages (4%). Yates pillories Johnson because in fact that exchange took place in August 2020, by which time greater knowledge should have conferred greater clarity. That said, security people might find Johnson’s behavior in this exchange familiar: he appears to see the Financial Times as a greater authority than the scientists. Isn’t that just like every company CEO?

Exchanges like that are no doubt why the participants wanted the messages kept private. In a crisis, you need to be able to ask stupid questions. It would be better to have a prime minister who can do math and who sweats the details, but if that’s not what we’ve got I’d rather he at least asked for clarification.

Still, as we head into yet another round of the crypto wars, the bottom line is this: neither technology nor law prevented these messages from leaking out some 30 years early. We need the technology. We need the law on our side. But even then, your confidences are only ever as private as your correspondent(s) and their trust network(s) will allow.

Illustrations: The soon-to-be-former-MP Matt Hancock, on I’m a Celebrity.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon or Twitter.