Isolate

Yesterday, the Global Encryption Coalition published a joint letter calling on the UK to rescind its demand that Apple undermine (“backdoor”) the end-to-end encryption on its services. The Internet Society is taking signatures until February 20.

The background: on February 7, Joseph Menn reported at the Washington Post (followed by Dominic Preston at The Verge) that in January the office of the Home Secretary sent Apple a technical capability notice under the Investigatory Powers Act (2016) ordering it to provide access to content that anyone anywhere in the world has uploaded to iCloud and encrypted with Apple’s Advanced Data Protection.

Technical capability notices are supposed to be secret. It’s a criminal offense to reveal that you’ve been sent one. Apple can’t even tell users that their data may be compromised. (This kind of thing is why people publish warrant canaries: regularly renewed statements that no secret order has arrived, whose quiet removal tells users that one has.) Menn notes that even if Apple withdraws ADP in the UK, British authorities will still demand access to encrypted data everywhere *else*. So it appears that if the Home Office doesn’t back down and Apple is unwilling to cripple its encryption, the company will either have to withdraw ADP across the world or exit the UK market entirely. At his Odds and Ends of History blog, James O’Malley calls the UK’s demand stupid, counter-productive, and unworkable. At TechRadar, Chiara Castro asks who’s next, and quotes Big Brother Watch director Silkie Carlo: “unprecedented for a government in any democracy”.

When the UK first began demanding extraterritorial jurisdiction for its interception rules, most people wondered how the country thought it would be able to impose it. That was 11 years ago; it was one of the new powers codified in the Data Retention and Investigatory Powers Act (2014) and kept in its replacement, the 2016 IPA.

Governments haven’t changed – they’ve been trying to undermine strong encryption in the hands of the masses since 1991, when Phil Zimmermann launched PGP – but the technology has, as Graham Smith recounted at Ars Technica in 2017. Smartphones are everywhere. People store their whole lives on them, and giant technology companies encrypt both the devices themselves and their cloud backups. Government demands have changed to reflect that, from focusing on the individual with key escrow and key lengths to focusing on the technology provider with client-side scanning, encrypted messaging (see also the EU), and now cloud storage.

At one time, a government could install a secret wiretap by making a deal with a legacy telco. The Internet’s proliferation of communications providers changed that for a while. During the resulting panic the US passed the Communications Assistance for Law Enforcement Act (1994), which requires Internet service providers and telecommunications companies to install wiretap-ready equipment – originally for telephone calls, later broadband and VOIP traffic as well.

This is where the UK government’s refusal to learn from others’ mistakes is staggering. Just four months ago, the US discovered Salt Typhoon, a giant Chinese hack into its core telecommunications networks that was specifically facilitated by…by…CALEA. To repeat: there is no such thing as a magic hole that only “good guys” can use. If you undermine everyone’s privacy and security to facilitate law enforcement, you will get an insecure world where everyone is vulnerable. The hack has led US authorities to promote encrypted messaging.

Joseph Cox’s recent book, Dark Wire, touches on this. It’s a worked example of what law enforcement internationally can do if given open access to all the messages criminals send across a network when they think they are operating in complete safety. Yes, the results were impressive: hundreds of arrests, dozens of tons of drugs seized, masses of firearms impounded. But, Cox writes, all that success was merely a rounding error in the global drug trade. Universal loss of privacy and security versus a rounding error: it’s the definition of “disproportionate”.

It remains to be seen what Apple decides to do and whether we can trust what the company tells us. At his blog, Alec Muffett is collecting ongoing coverage of events. The Future of Privacy Forum celebrated Safer Internet Day, February 11, with an infographic showing how encryption protects children and teens.

But set aside for a moment all the usual arguments about encryption, which really haven’t changed in over 30 years because mathematical reality hasn’t.

In the wider context, Britain risks making itself a technological backwater. First, there’s the backdoored encryption demand, which threatens every encrypted service. Second, there’s the impact of the onrushing Online Safety Act, which comes into force in March. Ofcom, the regulator charged with enforcing it, is issuing thousands of pages of guidance that make it plain that only large platforms will have the resources to comply. Small sites, whether businesses, volunteer-run Fediverse instances, blogs, established communities, or web boards, will struggle even if Ofcom starts to do a better job of helping them understand their legal obligations. Many will likely either shut down or exit the UK, leaving the British Internet poorer and more isolated as a result. Ofcom seems to see this as success.

It’s not hard to predict the outcome if these laws converge in the worst possible timeline: a second Brexit, this one online.

Illustrations: T-shirt (gift from Jen Persson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

What we talk about when we talk about computers

The climax of Nathan Englander’s very funny play What We Talk About When We Talk About Anne Frank sees the four main characters play a game – the “Anne Frank game” – that two of them invented as children. The play is on at the Marylebone Theatre until February 15.

The plot: two estranged former best friends from a New York yeshiva have arranged a reunion for themselves and their husbands. Debbie (Caroline Catz) has let her religious attachment lapse in the secular environs of Miami, Florida, where her husband, Phil (Joshua Malina), is an attorney. Their college-age son, Trevor (Gabriel Howell), calls the action.

They host Hasidic Shosh (Dorothea Myer-Bennett) and Yuri (Simon Yadoo), formerly Lauren and Mark, whose lives in Israel and traditional black dress and, in Shosh’s case, hair-covering wig, have left them unprepared for the bare arms and legs of Floridians. Having spent her adult life in a cramped apartment with Yuri and their eight daughters, Shosh is astonished at the size of Debbie’s house.

They talk. They share life stories. They eat. And they fight: what is the right way to be Jewish? Trevor asks: given climate change, does it matter?

So, the Anne Frank game: who among your friends would hide you when the Nazis are coming? The rule that you must tell the truth reveals the characters’ moral and emotional cores.

I couldn’t avoid up-ending this question. There are people I trust and who I *think* would hide me, but it would often be better not to ask them. Some have exceptionally vulnerable families who can’t afford additional risk. Some I’m not sure could stand up to intensive questioning. Most have no functional hiding place. My own home offers nowhere that a searcher for stray humans wouldn’t think to look, and no opportunities to create one. With the best will in the world, I couldn’t make anyone safe, though possibly I could make them temporarily safer.

But practical considerations are not the game. The game is to think about whether you would risk your life for someone else, and why or why not. It’s a thought experiment. Debbie calls it “a game of ultimate truth”.

However, the game is also a cheat, in that the characters have full information about all parts of the story. We know the Nazis coming for the Frank family are unquestionably bent on evil, because we know the Franks’ fates when they were eventually found. It may be hard to tell the truth to your fellow players, but the game is easy to think about because it’s replete with moral clarity.

Things are fuzzier in real life, even for comparatively tiny decisions. In 2012, the late film critic Roger Ebert mulled what he would do if he were a Transportation Security Administration agent suddenly required to give intimate patdowns to airline passengers unwilling to go through the scanner. Ebert considered the conflict between moral and personal distaste and TSA officers’ need to keep their reasonably well-paid jobs with health insurance benefits. He concluded that he hoped he’d quit rather than do the patdowns. Today, such qualms are ancient history; both scanners and patdowns have become normalized.

Moral and practical clarity is exactly what’s missing as the Department of Government Efficiency arrives in US government departments and agencies to demand access to their computer systems. Their motives and plans are unclear, as is their authority for the access they’re demanding. The outcome is unknown.

So, instead of a vulnerable 13-year-old girl and her family, what if the thing under threat is a computer? Not the sentient emotional robot/AI of techie fantasy but an ordinary computer system holding boring old databases. Or putting through boring old payments. Or underpinning the boring old air traffic control system. Do you see a computer or the millions of people whose lives depend on it? How much will you risk to protect it? What are you protecting it from? Hinder, help, quit?

Meanwhile, DOGE is demanding that staff allow its young coders to attach unauthorized servers and take control of websites. In addition: mass firings, and a plan to do some sort of inside-government AI startup.

DOGE itself appears to be thinking ahead; it’s told staff to avoid Slack while awaiting a technology that won’t be subject to FOIA requests.

The more you know about computers the scarier this all is. Computer systems of the complexity and accuracy of those the US government has built over decades are not easily understood by incoming non-experts who have apparently been visited by the Knowledge Fairy. After so much time and effort on security and protecting against shadowy hackers, the biggest attack – as Mike Masnick calls it – on government systems is coming from inside the house in full view.

Even if “all” DOGE has is read-only access, as Treasury claims – though Wired and Talking Points Memo have evidence otherwise – those systems hold comprehensive sensitive information on most of the US population. Being able to read – and copy? – is plenty bad enough. In both fiction (Margaret Atwood’s The Handmaid’s Tale) and fact (IBM), computers have been used to select populations to victimize. Americans are about to find out they trusted their government more than they thought.

Illustration: Changing a tube in the early computer ENIAC (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.

The Gulf of Google

In 1945, the then mayor of New York City, Fiorello La Guardia, signed a bill renaming Sixth Avenue. Eighty years later, even with street signs that include the new name, the vast majority of New Yorkers still say things like, “I’ll meet you at the southwest corner of 51st and Sixth”. You can lead a horse to Avenue of the Americas, but you can’t make him say it.

US president Donald Trump’s order renaming the Gulf of Mexico offers a rarely discussed way to splinter the Internet (at the application layer, anyway; geography matters!), and on Tuesday Google announced it would change the name for US users of its Maps app. As many have noted, this contravenes Google’s 2008 policy on naming bodies of water in Google Earth: “primary local usage”. A day later came reports that Google had placed the US on its short list of sensitive countries – that is, ones whose rulers dispute the names and ownership of various territories: China, Russia, Israel, Saudi Arabia, Iraq.

Sharpieing a new name on a map is less brutal than invading, but it’s a game anyone can play. Seen on Mastodon: the bay, now labeled “Gulf of Fragile Masculinity”.

***

Ed Zitron has been expecting the generative AI bubble to collapse disastrously. Last week provided an “Is this it?” moment when the Chinese company DeepSeek released reasoning models that outperform the best of the west at a fraction of the cost and computing power. US stock market investors: “Let’s panic!”

The code, though not the training data, is open source, as is the relevant research. In Zitron’s analysis, the biggest loser here is OpenAI, though it didn’t seem like that to investors in other companies, especially Nvidia, whose share price dropped 17% on Monday alone. In an entertaining sideshow, OpenAI complains that DeepSeek stole its code – ironic given the history.

On Monday, Jon Stewart quipped that Chinese AI had taken American AI’s job. From there the countdown started until someone invoked national security.

Nvidia’s chips have been the picks and shovels of generative AI, just as they were for cryptocurrency mining. In the latter case, Nvidia’s fortunes waned when cryptocurrency prices crashed, Ethereum, among others, switched to proof of stake, and miners shifted to more efficient, lower-cost application-specific integrated circuits. All of these lowered computational needs. So it’s easy to believe the pattern is repeating with generative AI.

There are several ironies here. The first is that the potential for small language models to outshine large ones has been known since at least 2020, when Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major published their stochastic parrots paper. Google soon fired Gebru, who told Bloomberg this week that AI development is being driven by FOMO rather than interesting questions. Second, as an AI researcher friend points out, Hugging Face, which is trying to replicate DeepSeek’s model from scratch, said the same thing two years ago. Imagine if someone had listened.

***

A work commitment forced me to slog through Ross Douthat’s lengthy interview with Marc Andreessen at the New York Times. Tl;dr: Andreessen says Silicon Valley turned right because Democrats broke The Deal under which Silicon Valley supported liberal democracy and the Democrats didn’t regulate them. In his whiny victimhood, Andreessen has no recognition that changes in Silicon Valley’s behavior – and the scale at which it operates – are *why* Democrats’ attitudes changed. If Silicon Valley wants its Deal back, it should stop doing things that are obviously exploitive. Random case in point: Hannah Ziegler reports at the Washington Post that a $1,700 bassinet called a “Snoo” suddenly started demanding $20 per month to keep rocking a baby all night. I mean, for that kind of money I pretty much expect the bassinet to make its own breast milk.

***

Almost exactly eight years ago, Donald Trump celebrated his installation in the US presidency by issuing an executive order that risked up-ending the legal basis for data flows between the EU, which has strict data protection laws, and the US, which doesn’t. This week, he did it again.

In 2017, Executive Order 13768 dominated Computers, Privacy, and Data Protection. The deal in place at the time, Privacy Shield, survived until 2020, when it was struck down in lawyer Max Schrems’s second such case. It was replaced by the Transatlantic Data Privacy Framework, under which the five-member Privacy and Civil Liberties Oversight Board oversees surveillance and, as Politico explains, handles complaints from Europeans about misuse of their data.

This week, Trump rendered the board non-operational by firing its three Democrats, leaving just one Republican member in place.*

At Techdirt, Mike Masnick warns the framework could collapse, costing Facebook, Instagram, WhatsApp, YouTube, exTwitter, and other US-based services (including Truth Social) their European customers. At his NGO, noyb, Schrems himself takes note: “This deal was always built on sand.”

Schrems adds that another Trump Executive Order gives 45 days to review and possibly scrap predecessor Joe Biden’s national security decisions, including some the framework also relies on. Few things ought to scare US – and, in a slew of new complaints, Chinese – businesses more than knowing Schrems is watching.

Illustrations: The Gulf of Mexico (NASA, via Wikimedia).

*Corrected to reflect that the three departing board members are described as Democrats, not Democrat-appointed. In fact, two of them, Ed Felten and Travis LeBlanc, were appointed by Trump in his original term.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Dark Wire

Dark Wire
by Joseph Cox
PublicAffairs (Hachette Group)
ISBNs: 9781541702691 (hardcover), 9781541702714 (ebook)

One of the basic principles that emerged as soon as encryption software became available to ordinary individuals on home computers was this: everyone should encrypt their email so the people who really need the protection don’t stick out as targets. At the same time, the authorities were constantly warning that if encryption weren’t controlled by key escrow, an implanted back door, or restrictions on its strength, it would help hide the activities of drug traffickers, organized crime, pedophiles, and terrorists. The same argument continues today.

Today, billions of people have access to encrypted messaging via WhatsApp, Signal, and other services. Governments still hate it, but they *use* it; the UK government is all over WhatsApp, as multiple public inquiries have shown.

In Dark Wire: The Incredible True Story of the Largest Sting Operation Ever, Joseph Cox, one of the four founders of 404 Media, takes us on a trip through law enforcement’s adventures in encryption, as police try to identify and track down serious criminals making and distributing illegal drugs by the ton.

The story begins with Phantom Secure, a scheme that stripped down Blackberry devices, installed PGP to encrypt emails, and ensured the devices could exchange emails only with other Phantom Secure devices. The service became popular among all sorts of celebrities, politicians, and other non-criminals who value privacy – but not *only* them. All perfectly legal.

One of my favorite moments comes early, when a criminal debating whether to trust a new contact decides he can because he has one of these secure Blackberries. The criminal trusted the supply chain; surely no one would have sold him one of these things without thoroughly checking that he wasn’t a cop. Spoiler alert: he was a cop. That sale helped said cop and his colleagues in the United States, Australia, Canada, and the Netherlands infiltrate the network, arrest a bunch of criminals, and shut it down – eventually, after setbacks, and with the non-US forces frustrated and amazed by US Constitutional law limiting what agents were allowed to look at.

Phantom Secure’s closure made a hole in the market as security-conscious criminals scrambled to find alternatives. It was rapidly filled by competitors working with modified phones: Encrochat and Sky ECC. As users migrated to these services and law enforcement worked to infiltrate and shut them down as well, former Phantom Secure salesman “Afgoo” had a bright idea, which he offered to the FBI: why not build their own encrypted app and take over the market?

The result was Anom. From the sounds of it, some of its features were quite cool. For example, the app itself hid behind an innocent-looking calculator, which acted as a password gateway. Type in the right code, and the messaging app appeared. The thing sold itself.
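From Cox’s description, the gateway concept is simple enough to sketch. Here is a minimal toy version in Python of a decoy calculator that unlocks a hidden app on a magic input; the unlock sequence and function names are invented for illustration, since Anom’s real code has never been published.

```python
# Toy sketch of a calculator acting as a password gateway. All names and
# values here are hypothetical; this is not Anom's actual implementation.

UNLOCK_SEQUENCE = "1984*42"  # invented secret key sequence

def launch_hidden_messenger() -> str:
    # Stand-in for handing the screen over to the encrypted messaging app.
    return "[hidden messenger unlocked]"

def calculator(entry: str) -> str:
    # Check the secret sequence first; to anyone else this is just
    # arithmetic that evaluates normally.
    if entry == UNLOCK_SEQUENCE:
        return launch_hidden_messenger()
    try:
        # Behave like an ordinary calculator (eval with no builtins is
        # acceptable for a toy like this; a real app would use a parser).
        return str(eval(entry, {"__builtins__": {}}, {}))
    except Exception:
        return "Error"

print(calculator("2+2"))      # "4" - looks like any calculator
print(calculator("1984*42"))  # unlocks the hidden app instead of "83328"
```

The point of the design is deniability: to casual inspection, the device runs nothing but a calculator.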

Of course, the FBI insisted on some modifications. Behind the scenes, Anom devices sent copies of every message to the FBI’s servers. Eventually, the floods of data the agencies harvested this way led to 500 arrests on one day alone, and the seizure of hundreds of firearms and dozens of tons of illegal drugs and precursor chemicals.

Some of the techniques the criminals use are fascinating in their own right. One method of in-person authentication used the unique serial number on a bank note, sent in advance; the mule delivering the money simply had to show they had the matching note – a physical one-time pad. Banks themselves were rarely used. Instead, cash was stored in safe houses in various countries, and the money never had to cross borders. So: no records, no transfers to monitor. All of this spilled open for law enforcement because of Anom.
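Read as a protocol, the bank-note trick is a pre-shared bearer token: the serial number travels ahead over the encrypted channel, and the physical note proves the courier is the right one. A toy sketch, with all values and names invented rather than drawn from the book:

```python
# Toy model of the bank-note handshake as a one-time pre-shared token.
import secrets

def pick_note_serial() -> str:
    # In real life this is the serial printed on a physical bank note;
    # here we generate a stand-in string.
    return "AB" + secrets.token_hex(4).upper()

def verify_courier(expected: str, presented: str) -> bool:
    # compare_digest is the idiomatic way to compare secret tokens.
    return secrets.compare_digest(expected, presented)

serial = pick_note_serial()                # sent ahead via encrypted message
print(verify_courier(serial, serial))      # True: release the cash
print(verify_courier(serial, "XX000000"))  # False: walk away
```

Used once and then retired, the note shares the one-time pad’s key property: intercepting one exchange tells an eavesdropper nothing about the next.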

And yet. Cox waits until the end to voice reservations. All those seizures and arrests barely made a dent in the world’s drug trade – a “rounding error”, Cox calls it.

The AI moment

“Why are we still talking about digital transformation?” The speaker was convening a session at last weekend’s UK Govcamp, an event organized by and for civil servants with an interest in digital stuff.

“Because we’ve failed?” someone suggested. These folks are usually *optimists*.

Govcamp is a long-running tradition that began as a guerrilla effort in 2008. At the time, civil servants wanting to harness new technology in the service of government were so thin on the ground they never met until one of them, Jeremy Gould, convened the first Govcamp. These are people who are willing to give up a Saturday in order to do better at their jobs working for us. All hail.

It’s hard to remember now, nearly 15 years on, the excitement in 2010 when David Cameron’s incoming government created the Government Digital Service and embedded it into the Cabinet Office. William Heath immediately ended the Ideal Government blog he’d begun writing in 2004 to press insistently for better use of digital technologies in government. The government had now hired all the people he could have wanted it to, he said, and therefore, “its job is done”.

Some good things followed: tilting government procurement to open the way for smaller British companies, consolidating government publishing, other things less visible but still important. Some data became open. This all has improved processes like applying for concessionary travel passes and other government documents, and made government publishing vastly more usable. The improvement isn’t universal: my application last year to renew my UK driver’s license was sent back because my signature strayed outside the box provided for it.

That’s just one way the business of government doesn’t feel that different. The whole process of developing legislation – green and white papers, public consultations, debates, and amendments – marches on much as it ever has, though with somewhat wider access because the documents are online. Thoughts about how to make it more participatory were the subject of a teacamp in 2013. Eleven years on, civil society is still reading and responding to government consultations in the time-honored way, and policy is still made by the few for the many.

At Govcamp, the conversation ranged between the realities of their working lives and the difficulties systems posed for users – that is, the rest of us. “We haven’t removed those little frictions,” one said, evoking the old speed comparisons between Amazon (delivers tomorrow or even today) and the UK government (delivers in weeks, if not months).

“People know what good looks like,” someone else said, echoing that frustration. That’s 2010-style optimism, from when Amazon product search yielded useful results, search engines weren’t spattered with AI slime and blanketed with ads, today’s algorithms were not yet born, and customer service still had a heartbeat. Here in 2025, we’re all coming up against rampant enshittification, with the result that the next cohort of incoming young civil servants *won’t* know any more what “good” looks like. There will be a whole new layer of necessary education.

Other comments: it’s evolution, not transformation; resistance to change and the requirement to ask permission are embedded throughout the culture; usability is still a problem; trying to change top-down only works in a large organization if it sets up an internal start-up and allows it to cannibalize the existing business; not enough technologists in most departments; the public sector doesn’t have the private sector option of deciding what to ignore; every new government has a new set of priorities. And: the public sector has no competition to push change.

One suggestion was that technological change happens in bursts – punctuated equilibrium. That sort of fits with the history of changing technological trends: computing, the Internet, the web, smartphones, the cloud. Today, that’s “AI”, which prime minister Keir Starmer announced this week he will mainline into the UK’s veins “for everything from spotting potholes to freeing up teachers to teach”.

The person who suggested “punctuated equilibrium” added: “Now is a new moment of change because of AI. It’s a new ‘GDS moment’.” This is plausible in the sense that new paradigms sometimes do bring profound change. Smartphones changed life for homeless people. On the other hand, many don’t do much. Think audio: that was going to be a game-changer, and yet after years of loss-making audio assistants, most of us are still typing.

So is AI one of those opportunities? Many brought up generative AI’s vast consumption of energy and water and rampant inaccuracy. Starmer, like Rishi Sunak before him, seems to think AI can make Britain the envy of other major governments.

Complex systems – such as digital governance – don’t easily change the flow of information or, therefore, the flow of power. It can take longer than most civil servants’ careers. Organizations like Mydex, which seeks to up-end today’s systems to put users in control, have been at work for years now. The upcoming digital identity framework has Mydex chair Alan Mitchell optimistic that it represents a breakthrough. We’ll see.

One attendee captured this: “It doesn’t feel like the question has changed from more efficient bureaucracy to things that change lives.” Said another in response, “The technology is the easy bit.”

Illustrations: Sir Humphrey Appleby (Nigel Hawthorne), Bernard Woolley (Derek Fowlds), and Jim Hacker (Paul Eddington) arguing over cultural change in Yes, Minister.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Banning TikTok

Two days from now, TikTok may go dark in the US. Nine months ago, in April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act, which bans TikTok if its Chinese owner, ByteDance, has not divested it by January 19, 2025.

Last Friday, January 10, the US Supreme Court heard three hours of arguments in consolidated challenges filed by TikTok and a group of TikTok users: TikTok, Inc. v. Garland and Firebaugh v. Garland. Too late?

As a precaution, Kinling Lo and Viola Zhou report at Rest of World, at least some of TikTok’s 170 million American users are building community arks elsewhere – the platform Xiaohongshu (“RedNote”), for one. This is not the smartest choice; it, too, is Chinese and could become subject to the same national security concerns, like the other Chinese apps Makena Kelly reports at Wired are scooping up new US users. Ashley Belanger reports at Ars Technica that rumors say the Chinese are thinking of segregating these American newcomers.

“The Internet interprets censorship as damage, and routes around it,” EFF founder and activist John Gilmore told Time Magazine in 1993. He meant Usenet, which could get messages past individual server bans, but it’s really more a statement about Internet *users*, who will rebel against bans. That for sure has not changed despite the more concentrated control of the app ecosystem. People will access each other by any means necessary. Even *voice* calls.

PAFACA bans apps from four “foreign adversaries to the United States” – China, Russia, North Korea, and Iran. That being the case, Xiaohongshu/RedNote is not a safe haven. The law just hasn’t noticed this hitherto unknown platform yet.

The law’s passage in April 2024 was followed in early May by TikTok’s legal challenge. Because of the imminent sell-by deadline, the case was fast-tracked, and landed in the US Court of Appeals for the District of Columbia Circuit in early December. That court upheld the law and rejected both TikTok’s constitutional challenge and its request for an injunction staying enforcement until the constitutional claims could be fully reviewed by the Supreme Court. TikTok appealed that decision, and so last week here we were. This case is separate from Free Speech Coalition v. Paxton, which SCOTUS heard *this* week and which challenges Texas’s 2023 age verification law (H.B. 1181), which could have even further-reaching Internet effects.

Here it gets silly. Incoming president Donald Trump, who originally initiated the ban but was blocked by the courts on constitutional grounds, filed an amicus brief arguing that any ban should be delayed until after he’s taken office on Monday because he can negotiate a deal. NBC News reports that the outgoing Biden administration is *also* trying to stop the ban and, per Sky News, won’t enforce it if it takes effect.

Previously, both guys wanted a ban, but I guess now they’ve noticed that, as Mike Masnick says at Techdirt, it makes them look out of touch to nearly half the US population. In other words, they moved from “Oh my God! The kids are using *TikTok*!” to “Oh, my God! The kids are *using* TikTok!”

The court transcript shows that TikTok’s lawyers made three main arguments. One: TikTok is now incorporated in the US, and the law is “a burden on TikTok’s speech”. Two: PAFACA is content-based, in that it selects types of content to which it applies (user-generated) and ignores others (reviews). Three: the US government has “no valid interest in preventing foreign propaganda”. Therefore, the government could find less restrictive alternatives, such as banning the company from sharing sensitive data. In answer to questions, TikTok’s lawyers claimed that the US’s history of banning foreign ownership of broadcast media is not relevant because it was due to bandwidth scarcity. The government’s lawyers countered with national security: the Chinese government could manipulate TikTok’s content and use the data it collects for espionage.

Again: the Chinese can *buy* piles of US data just like anyone else. TikTok does what Silicon Valley does. Pass data privacy laws!

Experts try to read the court. Amy Howe at SCOTUSblog says the justices seemed divided, but overall likely to issue a swift decision. At This Week in Google and Techdirt, Cathy Gellis says the proceedings have left her “cautiously optimistic” that the court will not undermine the First Amendment, a feeling seemingly echoed by some of the panel of experts who liveblogged the proceedings.

The US government appears to have tied itself up in knots: SCOTUS may uphold a Congressionally-legislated ban neither old nor new administration now wants, that half the population resents, and that won’t solve the US’s pervasive privacy problems. Lost on most Americans is the irony that the rest of the world has complained for years that under the PATRIOT Act foreign subsidiaries of US companies are required to send data to US intelligence. This is why Max Schrems keeps winning cases under GDPR.

So, to wrap up: the ban doesn’t solve the problem it purports to solve, and it’s not the least restrictive possibility. On the other hand, national security? The only winner may be, as Jason Koebler writes at 404Media, Mark Zuckerberg.

Illustrations: Logo of Douyin, ByteDance’s Chinese version of TikTok.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Disharmony

When an individual user does it, it’s piracy. When a major company does it…it may just get away with it.

At TechCrunch, Kyle Wiggers reports that buried in newly unredacted documents in the copyright case Kadrey v. Meta is testimony that Meta trained its Llama language model on a dataset of ebooks it torrented from LibGen. So, two issues. First, LibGen has been sued numerous times, fined, and ordered to shut down. Second, torrent downloads simultaneously upload to others. So, allegedly, Meta not only knowingly pirated copyrighted books to train its language model, but redistributed them in the process.

Kadrey v. Meta was brought by novelist Richard Kadrey, writer Christopher Golden, and comedian Sarah Silverman, and is one of a number of cases accusing technology companies of training language models on copyrighted works without permission. Meta claims fair use. Still, not a good look.

***

Coincidentally, this week CEO Mark Zuckerberg announced changes to the company’s content moderation policies in the US (for now), a move widely seen as pandering to the incoming administration. The main changes announced in Zuckerberg’s video clip: Meta will replace fact-checkers (“too politically biased”) with a system of user-provided “community notes” as on exTwitter, remove content restrictions that “shut out people with different ideas”, dial back its automated filters to focus solely on illegal content, rely on user reports to identify material that should be taken down, bring back political content, and move its trust and safety and content moderation teams from California to Texas (“where there is less concern about the bias of our teams”). He also pledges to work with the incoming president to “push back on governments around the world that are going after American companies and pushing to censor more”.

Journalists and fact-checkers are warning that misinformation and disinformation will be rampant, and many are alarmed by the specifics of the kind of thing people are now allowed to say. Zuckerberg frames all this as a “return” to free expression while acknowledging that, “We’re going to catch less bad stuff.”

At Techdirt, Mike Masnick begins as an outlier, arguing that many of these changes are actually sensible, though he calls the reasoning behind the Texas move “stupid”, and deplores Zuckerberg’s claim that this is about “free speech” and removing “censorship”. A day later, after seeing the company’s internal guidelines unearthed by Kate Knibbs at Wired, he concludes that under the new moderation policy “hateful people are now welcome”.

More interesting for net.wars purposes is the international aspect. As the Guardian says, Zuckerberg can’t bring these changes across to the EU or UK without colliding headlong with the UK’s Online Safety Act and the EU’s Digital Services Act. Both lay down requirements for content moderation on the largest platforms.

And yet, it’s possible that Zuckerberg may also think these changes help lay the groundwork to meet the EU/UK requirements. Meta will still remove illegal content, which it’s required to do anyway. But he may think there’s a benefit in dialing back users’ expectations about what else Meta will remove, in that platforms must conform to the rules they set in their own terms and conditions. Notice-and-takedown is an easier standard to meet than performance indicators for automated filters. It’s also likely cheaper. This approach is, however, the opposite of what critics like Open Rights Group have predicted the law will bring; ORG believes that platforms will instead over-moderate in order to stay out of trouble, chilling free speech.

Related is an interesting piece by Henry Farrell at his Programmable Mutter newsletter; he argues that the more important social media speech issue is that what we read there determines how we imagine others think, rather than how we ourselves think. In other words, misinformation, disinformation, and hate speech change what we think is normal, expanding the window of what we think other people find acceptable. That has resonance for me: the worst thing about prominent trolls is that they give everyone else permission to behave as badly as they do.

***

It’s now 25 years since I heard a privacy advocate predict that the EU’s then-new data protection rights could become the basis of a trade war with the US. Instead, the EU and US have kept trying to find a bypass that will withstand a legal challenge from Max Schrems. Meanwhile, the two approaches seem to be continuing to diverge, and in more ways than one.

For example, last week in the long-running battle over network neutrality, judges on the US Sixth Circuit Court of Appeals ruled that the Federal Communications Commission was out of line when it announced rules in 2023 that classified broadband suppliers as common carriers under Title II of the Communications Act (1934). This judgment is a result of the Supreme Court’s 2024 decision to overturn Chevron deference, setting courts free to overrule government agencies’ expertise. And that means the end in the US (until or unless Congress legislates) of network neutrality, the principle that all data flowing across the Internet is created equal and should be transmitted without fear or favor. Network neutrality persists in California, Washington, and Colorado, whose legislatures have passed laws to protect it.

China has taught us that the Internet is more divisible by national law than many thought in the 1990s. Copyright law may be the only thing everyone agrees on.

Illustrations: Drunk parrot in a South London garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The lost Internet

As we open 2025 it would be traditional for an Old Internet Curmudgeon to rhapsodize about the good, old days of the 1990s, when the web was open, snark flourished at sites like suck.com, no one owned social media (that is, Usenet and Internet Relay Chat), and even the spam was relatively harmless.

But that’s not the period I miss right now. By “lost” I mean the late 2000s, when we shifted from an Internet of largely unreliable opinions to an Internet full of fact-based sites you could trust. This was the period during which Wikipedia (created 2001) grew up, and Open Street Map (founded 2004) was born, joining earlier sites like the Internet Archive (founded 1996) and Snopes (1994). In that time, Google produced useful results, blogs flourished (before they were killed off), and if you asked on Twitter for advice on where to find a post box near a point in Liverpool you’d get correct answers straight to your mobile phone.

Today, so far: I can’t get a weather app to stop showing the location I was at last week and show the location I’m at this week. Basically, the app is punishing me for not turning on location tracking. The TV remote at my friend’s house doesn’t fully work and she doesn’t know why or how to fix it; she works around it with a second remote whose failings are complementary. No calendar app works as well as the software I had 1995-2001 (it synced! without using a cloud server and third-party account!). At the supermarket, the computer checkout system locked up. It all adds up to a constant white noise of frustration.

We still have Wikipedia, Open Street Map, Snopes, and the Internet Archive. But this morning a Mastodon user posted that their ten-year-old says you can’t trust Google any more: “It just returns ‘a bunch of made-up stuff’.” When ten-year-olds know your knowledge product sucks…

If generative AI were a psychic we’d call what it does cold reading.

At his blog, Ed Zitron has published a magnificent, if lengthy, rant on the state of technology. “The rot economy”, he calls it, and says we’re all victims of constant low-level trauma. Most of his complaints will be familiar: the technologies we use are constantly shifting and mostly for the worse. My favorite line: “We’re not expected to work out ‘the new way to use a toilet’ every few months because somebody decided we were finishing too quickly.”

Pause to remember nostalgically 2018, when a friend observed that technology wasn’t exciting any more, and 2019, when many more people thought the Internet was no longer “fun”. Those were happy days. Now we are being overwhelmed with stuff we actively don’t want in our lives. Even hacked Christmas lights sound miserable for the neighbors.

***

I have spent some of these holidays editing a critique of Ofcom’s regulatory plans under the Online Safety Act (we all have our own ideas about holidays), and one thing seems clear: the splintering Internet is only going to get worse.

Yesterday, firing up Chrome because something didn’t work in Firefox, I saw a fleeting popup to the effect that because I may not be over 18 there are search results Google won’t show me. I don’t think age verification is in force in the Commonwealth of Pennsylvania – US states keep passing bills, but hit legal challenges.

Age verification has been “imminent” in the UK for so long – it was originally included in the Digital Economy Act 2017 – that it seems hard to believe it may actually become a reality. But: sites within the Act’s scope will have to complete an “illegal content risk assessment” by March 16. So the fleeting popup felt like a visitation from the Ghost of Christmas Future.

One reason age verification was dropped back then – aside from the distractions of Brexit – was that the mechanisms for implementing it were all badly flawed – privacy-invasive, ineffective, or both. I’m not sure they’ve improved much. In 2022, France’s data protection watchdog checked them out: “CNIL finds that such current systems are circumventable and intrusive, and calls for the implementation of more privacy-friendly models.”

I doubt Ofcom can square this circle, but the costs of trying will include security, privacy, freedom of expression, and constant technological friction. Bah, humbug.

***

Still, one thing is promising: the rise of small, independent media outlets who are doing high-quality work. Joining established efforts like nine-year-old The Ferret, ten-year-old Bristol Cable, and five-year-old Rest of World are year-and-a-half-old 404 Media and newcomer London Centric. 404 Media, formed by four journalists formerly at Vice’s Motherboard, has been consistently making a splash since its founding; this week Jason Koebler reminds us that Elon Musk’s proactive willingness to unlock the blown-up Cybertruck in Las Vegas and provide comprehensive data on where it had been, including video from charging stations, without warrant or court order, could apply to any Tesla customer at any time. Meanwhile, in its first three months London Centric’s founding journalist, Jim Waterson, has published pieces on the ongoing internal mess at Transport for London resulting from the August cyberattack and on bicycle theft in the capital. Finally, if you’re looking for high-quality American political news, veteran journalist Dan Gillmor curates it for you every day in his Cornerstone of Democracy newsletter.

The corporate business model of journalism is inarguably in trouble, but journalism continues.

Happy new year.

Illustrations: The Marx Brothers in their 1929 film, The Cocoanuts, newly released into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Non-playing characters

It’s the most repetitive musical time of the year. Stores have been torturing their staff with an endlessly looping soundtrack of the same songs – in some cases since August. Even friends are playing golden Christmas oldies from the 1930s to 1950s.

Once upon a time – within my lifetime, in fact – stores and restaurants were silent. Into that silence came Muzak. I may be exaggerating: Wikipedia tells me the company dates to 1934. But it feels true.

The trend through all those years has been toward turning music into a commodity and pushing musicians into the poorly paid background by rerecording “for hire” to avoid paying royalties, among other tactics.

That process has now reached its nadir with the revelation by Liz Pelly at Harper’s Magazine that Spotify has taken to filling its playlists with “fake” music – that is, music created at scale by production companies and assigned to “ghost artists” who don’t really exist. For users looking for playlists of background music, it’s good enough; for Spotify it’s far more lucrative than streaming well-known artists who must be paid royalties (even at greatly reduced rates from the old days of radio).

Pelly describes the reasoning behind the company’s “Perfect Fit Content” program this way: “Why pay full-price royalties if users were only half listening?” This is music as lava lamp.

And you thought AI was going to be the problem. But no: the problem is not the technology, it’s the business model. At The New Yorker, Hua Hsu ruminates on Pelly’s forthcoming book, Mood Machine, in terms of opportunity costs: what is the music we’re not hearing as artists desperate to make a living contort themselves to conform to today’s data-driven landscape? I was particularly struck by Hsu’s data point that Spotify has stopped paying royalties on tracks that are streamed fewer than 1,000 times in a year. From those who have little, everything is taken.

The kind of music I play – traditional and traditional-influenced contemporary – is the opposite of all this. Except for a brief period in the 1960s (“the folk scare”), folk musicians made our own way. We put out our own albums long before it became fashionable, and sold from the stage because we had to. If the trend continues, most other musicians will either become like us or be non-playing characters in an industry that couldn’t exist without them.

***

The current Labour government is legislating the next stage of reforming the House of Lords: the remaining 92 hereditary peers are to be ousted. This plan is a mere twig compared to Keir Starmer’s stated intention in 2020 and 2022 to abolish it entirely. At the Guardian, Simon Jenkins is dissatisfied: remove the hereditaries, sure, but, “There is no mention of bishops and donors, let alone Downing Street’s clothing suppliers and former secretaries. For its hordes of retired politicians, the place will remain a luxurious club that makes the Garrick [club] look like a greasy spoon.”

Jenkins’ main question is the right one: what do you replace the Lords with? It is widely known among the sort of activists who testify in Parliament that you get deeper and more thoughtful questions in the Lords than you ever do in the Commons. Even if you disagree with members like Big Issue founder John Bird and children’s rights campaigner and filmmaker Beeban Kidron, or even the hereditary Earl of Erroll, who worked in the IT industry and has been a supporter of digital rights for years, it’s clear they’re offering value. Yet I’d be surprised to see them stand for election, and as a result it’s not clear that a second wholly elected chamber would be an upgrade.

With change afoot, it’s worth calling out the December 18 Lords Grand Committee debate on the data bill. I tuned in late, just in time to hear Kidron and Timothy Clement-Jones dig into AI and UK copyright law. This is the Labour plan to create an exception to copyright law so AI companies can scrape data at will to train their models. As Robert Booth writes at the Guardian, there has been, unsurprisingly, widespread opposition from the creative sector. Among other naysayers, Kidron compared the government’s suggested system to asking shopkeepers to “opt out of shoplifters”.

So they’re in this ancient setting, wearing modern clothes, using the – let’s call it – *vintage* elocutionary styling of the House of Lords…and talking intelligently and calmly about the iniquity of vendors locking schools into expensive contracts for software they don’t need, and AI companies’ growing disregard for robots.txt. Awesome. Let’s keep that, somehow.

***

In our 20 years of friendship I never knew that John “JI” Ioannidis, who died last month, had invented technology billions of people use every day. As a graduate student at Columbia, where he received his PhD in 1993, Ioannidis solved, in work technical experts have called “transformative”, the difficult problem of forwarding Internet data to devices that move from network to network: Mobile IP, in other words. He also worked on IPsec, trust management, and prevention of denial of service attacks.

“He was a genius,” says one of his colleagues, and “severely undercredited”. He is survived by his brother and sister, and an infinite number of friends who went for dim sum with him. RIP.

Illustrations: Cartoon by veteran computer programmer Jef Poskanzer. Used by permission.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Government identification as a service

This week, the clock started ticking on the UK’s Online Safety Act. Ofcom, the regulator charged with enforcing it, published its codes of practice and guidance, which come into force on March 17, 2025. At that point, websites that fall into scope – in Ofcom’s 2023 estimate 150,000 of them – must comply with requirements to conduct risk assessments, preemptively block child sexual abuse material, register a responsible person (who faces legal and financial liability), and much more.

Almost immediately, the first casualty made itself known: Dee Kitchen announced the closure of her site, which supports hundreds of interest-based forums. Under Ofcom’s risk assessment guidance (PDF), the personal liability would be overwhelming even if the forums produced enough in donations to cover the costs of compliance.

Russ Garrett has a summary for small sites. UK-linked blogs – even those with barely any readers – could certainly fit the definition per Ofcom’s checker tool, if users can comment on each other’s posts. Common sense says that’s ridiculous in many cases…but as Kitchen says, all it takes to ruin a blogger’s life is a malicious complainant wielding the OSA as their weapon.

Kitchen will certainly not be alone in concluding the requirements are prohibitively risky for web forums and bulletin boards that are run by volunteers and have minimal funding. Yet they are the Internet’s healthy social ecology, without the algorithms and business models that do most to create the harms the Act is meant to address. Promising Trouble and Power to Change are collaborating on a community of practice, and have asked Ofcom for a briefing on compliance for volunteers and small sites.

Garrett’s summary also points out that Ofcom’s rules leave it wide open for sites to censor *more* than is required, and many will do exactly that to minimize their risk. A side effect, as Garrett writes, will be to further centralize the Net, as moving communities to larger providers such as Discord will shift the liability onto *them*. This is what happens when rules controlling speech are written from the single lens of preventing harm rather than starting from a base of human rights.

More guidance to come from Ofcom next month. We haven’t even started on implementing age verification yet.

***

On Monday, I learned a new term I wish I hadn’t: “government identity as a service”. GIAAS?

The speaker was human rights campaigner Edward Hasbrouck, in a talk on identification on Dave Farber’s and Dan Gillmor’s weekly CCRC/IP-Asia Zoom call.

Most people trace the accelerating rise of demands for identification in countries like the US and UK to 9/11. Based on that, there are now people old enough to drink in a US state who are not aware that it was ever possible to simply walk up and board a flight, get a hotel room, or enter an office without showing ID. As Hasbrouck writes in a US election day posting, the rise in government demands for ID has been powered by the simultaneous rise of corporate tracking for commercial purposes. He calls it a “malign convergence of interest”.

It has long been obvious that anything companies collect can be subpoenaed by governments. Hasbrouck’s point, however, is that identification enables control as well as surveillance; it brings watchlists, blocklists, and automated bars to freedom of action – it makes us decision subjects, as Gavin Freeguard said at the recent Foundation for Information Policy Research event.

Hasbrouck pinpoints three components that each present a vulnerability to control: identification, logging, and decision making. As an example, consider the UK’s in-progress eVisa system, in which the government confirms an individual’s visa status online in real time with no option for physical documentation. This gives the government enormous power to stop individuals from doing vital but mundane things like renting a home, boarding an aircraft, or getting a job. Its heart is identification – and a law that delegates border enforcement to myriad civil intermediaries and normalizes these checks.
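To make those three components concrete, here is a toy sketch of such a gate in code. Every name, rule, and data structure in it is invented for illustration; it reflects no actual government system.

```python
# Hypothetical identification -> logging -> decision pipeline, loosely
# modeled on a real-time status check. Entirely invented for illustration.

from datetime import datetime, timezone

BLOCKLIST = {"ID-123456"}   # hypothetical watchlist entry
AUDIT_LOG = []              # every check leaves a permanent trace

def status_check(person_id: str, context: str) -> bool:
    # 1. Identification: the action requires presenting a digital identity.
    # 2. Logging: who was checked, for what, and when is recorded forever.
    AUDIT_LOG.append({
        "id": person_id,
        "context": context,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    # 3. Decision: an automated rule allows or bars the action, with no
    #    physical document to fall back on if the answer is "no".
    return person_id not in BLOCKLIST

for action in ("rent a home", "board an aircraft", "start a job"):
    ok = status_check("ID-123456", action)
    print(f"{action}: {'allowed' if ok else 'refused'}")

print(len(AUDIT_LOG), "checks logged")  # the surveillance residue
```

Each stage is a separate vulnerability: the identity can be demanded ever more widely, the log can be mined later, and the decision rule can be changed silently.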

Many in the UK were outraged by proposals to give the Department of Work and Pensions the power to examine people’s bank accounts. In the US, Hasbrouck points to a recent report from the House Judiciary Committee on the Weaponization of the Federal Government that documents the Treasury Department’s Financial Crimes Enforcement Network’s collaboration with the FBI to push banks to submit reports of suspicious activity while it trawled for possible suspects after the January 6 insurrection. Yes, the destructors should be caught and punished; but also any weapon turned against people we don’t like can also be turned against us. Did anyone vote to let the FBI conduct financial surveillance by the million?

Now imagine that companies outsource ID checks to the government and offload the risk of running their own. That is how the no-fly list works. That’s how airlines operate *now*. GIAAS.

Then add the passive identification that systems like facial recognition are spreading. You can no longer reliably know whether you have been identified and logged, who gets that information, or what hidden decision they may make based on it. Few of us are sure of our rights in any situation, and few of us even ask why. In his slides (PDF), Hasbrouck offers a list of ways to fight back. He has hope.

Illustrations: Edward Hasbrouck at CPDP in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.