Cautionary tales

I’ve been online for nearly 34 years, and I’m thinking of becoming a child. Or at least, a child to big user-to-user social media services, which next week will start asking for proof of adulthood. On July 25, the new age verification requirements under the Online Safety Act come into effect in the UK. The regulator, Ofcom, has published a guide.

Plenty of companies aim to join this new market. Some are familiar: credit scorers Experian and TransUnion. Others are new: Yoti, which we saw demonstrated back in 2016, and World, the six-year-old startup backed by Sam Altman and Andreessen Horowitz, which recently did a promotional tour for the UK launch of its Orb identification system. Summary: many happy privacy words, but still dubious.

Reddit picked Persona; Dearbail Jordan at the BBC says Redditors will need to upload either a selfie for age estimation or a government-issued ID. Reddit says it will not see this data, only storing each user’s verification status along with the birth date they’ve (optionally) provided.

Bluesky has chosen Kids Web Services from Epic Games. The announcement says KWS accepts multiple options: payment cards, ID scans, and face scans. Users who decline to supply this information will be denied access to adult content and direct messaging. How much do I care about either? Would I rather just be a child to two-year-old Bluesky?

On older sites my adulthood ought to be obvious: I joined Twitter/X in 2008 and Reddit in 2015. Do the math, guys! I suppose there is a chance I could have created the account, forgotten it, and then revived it for a child (the “older brother problem”), but I’m not sure these third-party verifiers solve that either.

Everyone wants to protect children. But it doesn’t make sense to do it by creating a system that exposes everyone, including children, to new privacy risks. In its report on how to fix the OSA, the Open Rights Group argues that interoperability and portability should be first principles, and that users should be able to choose providers and methods. Today, the social media companies don’t see age verification data; in five years will they be buying up those providers? These first steps matter, as they are setting the template for what is to come.

This is the opening of a floodgate. On June 27 the US Supreme Court ruled in Free Speech Coalition v. Paxton to uphold a law requiring pornographic websites to verify users’ ages through government-issued ID. At TechDirt, Mike Masnick called the ruling “taking a chainsaw to the First Amendment”.

It’s easy to predict that there will be scandals surrounding the data age verifiers collect, and others where technological failures let children access the wrong sort of content. We’ll hear less about the frustrations of people who are blocked by age verification from essential information. Meanwhile, child safety folks will continue pushing for new levels of control.

The big question is this: how will we know if it’s working? What does “success” look like?

***

At Platformer, Casey Newton covers Substack’s announcement that it has closed a series C funding round of $100 million, valuing the company at $1.1 billion. The eight-year-old company gets to say it’s a unicorn.

Newton tries to understand how Substack is worth that. He predicts – logically – that its only choice to justify its venture capitalists’ investment will be rampant enshittification. These guys don’t put in that kind of money without expecting a full-bore return, which is why Newton is dubious about the founders’ promise to invest most of that newly-raised capital in creators. Recall the stages Cory Doctorow laid out: first they amass as many users as possible; then they abuse those users to amass as many business customers (advertisers) as possible; then they squeeze everyone.

Substack, which announced four months ago that it – or, more correctly, its creators – has more than 5 million paid subscriptions, is different in that its multi-sided market structure is more like Uber or Amazon Marketplace than like a social media site or traditional publisher. It has users (readers and listeners), creators (like Uber’s drivers or Amazon’s sellers), and customers (advertisers). Viewed that way, it’s easy to see Substack’s most likely path: raise prices (users and advertisers), raise thresholds and commissions (creators), and, like Amazon, force sellers (creators) into using fee-based additional services in order to stay afloat. Plus, it must crush the competition. See similar math from Anil Dash.

Less ponderable is the headwind of Substack’s controversial hospitality to extremists, noted in 2023 by Jonathan Katz at The Atlantic. Some creators – like Newton – have opted to leave for competitor Ghost, which is both open source and cheaper. Many friends refuse to pay Substack even when they want to support creators whose work they admire. At the time, Stephen Bush responded at the Financial Times that Substack should admit that it’s not a publisher but a “handy bit of infrastructure for sending newsletters”. Is that worth $1.1 billion?

Like earlier Silicon Valley companies, Substack is planning to reverse its previous disdain for advertising, as Benjamin Mullin and Jessica Testa report at the New York Times. The company is apparently also looking forward to embracing social networking.

So, no really new ideas, then?

Illustrations: Unicorn (by Pearson Scott Foresman via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Revival

There appears to be media consensus: “Bluesky is dead.”

At Commentary, James Meigs calls Bluesky “an expression of the left’s growing hypersensitivity to ideas leftists find offensive”, and says he accepts exTwitter’s “somewhat uglier vibe” in return for “knowing that right-wing views aren’t being deliberately buried”. Then he calls Bluesky “toxic” and a “hermetically sealed social-media bubble”.

At New Media and Marketing, Rich Meyer says Bluesky is in decline and engagement is dropping, and exTwitter is making a comeback.

At Slate, Alex Kirshner and Nitish Pahwa complain that Bluesky feels “empty”, say that its too-serious users are abandoning it because it isn’t fun, and compare it to a “small liberal arts college” and exTwitter to a “large state university”.

At The Spectator, Sean Thomas regrets that “Bluesky is dying” – and claims to have known it would fail from his first visit to the site, “a bad vegan cafe, full of humorless puritans”.

Many of these pieces – Mark Cuban at Fortune, for example, and Megan McArdle at the Washington Post – blame a “lack of diversity of thought”.

As Mike Masnick writes on TechDirt in its defense (Masnick is a Bluesky board member), “It seems a bit odd: when something is supposedly dying or irrelevant, journalists can’t stop writing about it.”

Have they so soon forgotten 2014, when everyone was writing that Twitter was dead?

Commentators may be missing that success for Bluesky looks different: it’s trying to build a protocol-driven ecosystem, not a site. Twitter had one, but destroyed it as its ad-based business model took over. Both Bluesky and Mastodon, which media largely ignores, aim to let users create their own experience and are building tools that give users as much control as possible. It seems to offend some commentators that one of them lets you block people you don’t want to deal with, but that’s weird, since it’s the one feature every social site has.

All social media have ups and downs, especially when they’re new (I really wonder how many of these commentators experienced exTwitter in its early days or have looked at Truth Social’s user numbers). Settling into a new environment and rebuilding take time – it may look like the old place, but its affordances are different, and old friends are missing. Meanwhile, anecdotally, some seem to be leaving social media entirely, driven away by privacy issues, toxic behavior, distaste for platform power and its owners, or simply distracted by life. Few of us *have* to use social media.

***

In 2002, the UK’s Financial Services Authority was the first to implement an EU directive allowing private organizations to issue their own electronic money without a banking license if they could meet the capital requirements. At the time, the idea seemed kind of cute, especially since there was a plan to waive some of the requirements for smaller businesses. Everyone wanted micropayments; here was a framework of possibility.

And then nothing much happened. The Register’s report (the first link above) said that organizations such as the Post Office, credit card companies, and mobile operators were considering launching emoney offerings. If they did, the results sank without trace. Instead, we’re all using credit/debit cards to pay for stuff online, just as we were 23 years ago. People are reluctant to trust weird, new-fangled forms of money.

Then, in 2008, came cryptocurrencies – money as lottery ticket.

Last week, the Wall Street Journal reported that Amazon, Wal-Mart, and other multinationals are exploring stablecoins as a customer payment option – in other words, issuing their own cryptocurrencies, pegged to the US dollar. As Andrew Kassel explains at Investopedia, the result could be to bypass credit cards and banks, saving billions in fees.

It’s not clear how this would work, but I’m suspicious of the benefits to consumers. Would I have to buy a company’s stablecoin before doing business with it? And maintain a floating balance? At Axios, Brady Dale explores other possibilities. Ultimately, it sounds like a return to the 1970s, before multipurpose credit cards, when people had store cards from the retailers they used frequently, and paid a load of bills every month. Dale seems optimistic that this could be a win for consumers as well as retailers, but I can’t really see it.

In other words, the idea seems less cute now, less fun technological experiment, more rapacious. There’s another, more disturbing, possibility: the return of the old company town. Say you work for Amazon or Wal-Mart, and they offer you a 10% bonus for taking your pay in their stablecoin. You can’t spend it anywhere but their store, but that’s OK, right, because they stock everything you could possibly want? A modern company town doesn’t necessarily have to be geographical.

I’ve long thought that company towns, which allowed companies to effectively own employees, are the desired endgame for the titans. Elon Musk is heading that way with Starbase, Texas, now inhabited primarily by SpaceX employees, as Elizabeth Crisp reports at The Hill.

I don’t know if the employees who last month voted enthusiastically for the final incorporation of Starbase realize how abusive those old company towns were.

Illustrations: The Starbase sign adjoining Texas Highway 4, in 2023 (via Jenny Hautmann at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A thousand small safety acts

“The safest place in the world to be online.”

I think I remember that slogan from Tony Blair’s 1990s government, when it primarily related to ecommerce. It morphed into child safety – for example, in 2010, when the first Digital Economy Act was passed, or 2017, when the Online Safety Act, passed in 2023 and entering into force in March 2025, was but a green paper. Now, Ofcom is charged with making it reality.

As prior net.wars posts attest, the 2017 green paper began with the idea that social media companies could be forced to pay, via a levy, for the harm they cause. The key remaining element of that is a focus on the large, dominant companies. The green paper nodded toward designing proportionately for small businesses and startups. But the large platforms pull the attention: rich, powerful, and huge. The law that’s emerged from these years of debate takes in hundreds of thousands of divergent services.

On Mastodon, I’ve been watching lawyer Neil Brown scrutinize the OSA with a particular eye on its impact on the wide ecosystem of what we might call “the community Internet” – the thousands of web boards, blogs, chat channels, and who-knows-what-else with no business model because they’re not businesses. As Brown keeps finding in his attempts to help provide these folks with tools they can use, they are struggling to understand and comply with the act.

First things first: everyone agrees that online harm is bad. “Of course I want people to be safe online,” Brown says. “I’m lucky, in that I’m a white, middle-aged geek. I would love everyone to have the same enriching online experience that I have. I don’t think the act is all bad.” Nonetheless, he sees many problems with both the act itself and how it’s being implemented. In contacts with organizations critiquing the act, he’s been surprised to find how many unexpectedly agree with him about the problems for small services. However, “Very few agreed on which was the worst bit.”

Brown outlines two classes of problem: the act is “too uncertain” for practical application, and the burden of compliance is “too high for insufficient benefit”.

Regarding the uncertainty, his first question is, “What is a user?” Is someone who reads net.wars a user, or just a reader? Do they become a user if they post a comment? Do they start interacting with the site when they read a comment, make a comment, or only when they comment to another user’s comment? In the fediverse, is someone who reads postings he makes via his private Mastodon instance its user? Is someone who replies from a different instance to that posting a user of his instance?

His instance has two UK users – surely insignificant. Parliament didn’t set a threshold for the “significant number of UK users” that brings a service into scope, so Ofcom says it has no answer to that question. But if you go by percentage, 100% of his user base is in Britain. Does that make Britain his “target market”? Does having a domain name in the UK namespace? What is a target market for the many community groups running infrastructure for free software projects? They just want help with planning, or translation; they’re not trying to sign up users.

Regarding the burden, the act requires service providers to perform a risk assessment for every service they run. A free software project will probably have a dozen or so – a wiki, messaging, a documentation server, and so on. Brown, admittedly not your average online participant, estimates that he himself runs 20 services from his home. Among them is a photo-sharing server, for which the law would have him write contractual terms of service for the only other user – his wife.

“It’s irritating,” he says. “No one is any safer for anything that I’ve done.”

So this is the mismatch. The law and Ofcom imagine a business with paid staff signing up users to profit from them. What Brown encounters is more like a stressed-out woman managing a small community for fun after she puts the kids to bed.

Brown thinks a lot could be done to make the act less onerous for the many sites that are clearly not the problem Parliament was trying to solve. Among them, carve out low-risk services. This isn’t just a question of size, since a tiny terrorist cell or a small ring sharing child sexual abuse material can pose acres of risk. But Brown thinks it shouldn’t be too hard to come up with criteria to rule services out of scope such as a limited user base coupled with a service “any reasonable person” would consider low risk.

Meanwhile, he keeps an In Memoriam list of the law’s casualties to date. Some have managed to move or find new owners; others are simply gone. Not on the list are non-UK sites that now simply block UK users. Others, as Brown says, just won’t start up. The result is an impoverished web for all of us.

“If you don’t want a web dominated by large, well-lawyered technology companies,” Brown sums up, “don’t create a web that squeezes out small low-risk services.”

Illustrations: Early 1970s cartoon illustrating IT project management.

Wendy M. Grossman is an award-winning journalist. Her Web site has extensive links to her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Careless People

Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism
By Sarah Wynn-Williams
Macmillan
ISBN: 978-1035065929

In his 2021 book Social Warming, Charles Arthur concludes his study of social media with the observation that the many harms he documented happened because no one cared to stop them. “Nobody meant for this to happen,” he writes to open his final chapter.

In her new book, Careless People, about her time at Facebook, former New Zealand diplomat Sarah Wynn-Williams shows the truth of Arthur’s take. A sad tale of girl-meets-company, girl-loses-company, girl-tells-her-story, it starts with Wynn-Williams stalking Facebook to identify the right person to pitch hiring her to build its international diplomatic relationships. I kept hoping increasing dissent and disillusion would lead her to quit. Instead, she stays until she’s fired after HR dismisses her complaint of sexual harassment.

In 2011, when Wynn-Williams landed her dream job, Facebook’s wild expansion was at an early stage. CEO Mark Zuckerberg is awkward, sweaty, and uncomfortable around world leaders, who are dismissive. By her departure in 2017, presidents of major countries want selfies with him and he’s much more comfortable – but no longer cares. Meanwhile, then-Chief Operating Officer Sheryl Sandberg, wealthy from her time at Google, becomes a celebrity via her book, Lean In, written with the former TV comedy writer Nell Scovell. Sandberg’s public feminism clashes with her employee’s experience. When Wynn-Williams’s first child is a year old, a fellow female employee congratulates her on keeping the child so well-hidden she didn’t know it existed.

The book provides hysterically surreal examples of American corporatism. She is in the delivery room, feet in stirrups, ordered to push, when a text arrives: can she draft talking points for Davos? (She tries!) For an Asian trip, Zuckerberg wants her to arrange a riot or peace rally so he can appear to be “gently mobbed”. When the company fears “Mark” or “Sheryl” might be arrested if they travel to Korea, managers try to identify a “body” who can be sent in as a canary. Wynn-Williams’s husband has to stop her from going. Elsewhere, she uses her diplomatic training to land Zuckerberg a “longer-than-normal handshake” with Xi Jinping.

So when you get to her failure to get her bosses to beef up the two-person content moderation team for Myanmar’s 60 million people, rewrite the software so Burmese characters render correctly, and post country-specific policies, it’s obvious what her bosses will decide. The same is true of internal meetings discussing the tools later revealed to let advertisers target depressed teens. Wynn-Williams hopes for a safe way forward, but warns that company executives’ “lethal carelessness” hasn’t changed.

Cultural clash permeates this book. As a New Zealander, she’s acutely conscious of the attitudes she encounters, and especially of the wealth and class disparity that divide the early employees from later hires. As pregnancies bring serious medical problems and a second child, the very American problem of affording health insurance makes offending her bosses ever riskier.

The most important chapters, whose in-the-room tales fill in gaps in books by Frances Haugen, Sheera Frenkel and Cecilia Kang, and Steven Levy, are those in which Wynn-Williams recounts the company’s decision to embrace politics and build its business in China. If, her bosses reason, politicians become dependent on Facebook for electoral success, they will balk at regulating it. Donald Trump’s 2016 election, which Zuckerberg initially denied had been significantly aided by Facebook, awakened these political aspirations. Meanwhile, Zuckerberg leads the company to build a censorship machine to please China. Wynn-Williams abhors all this – and refuses to work on China. Nonetheless, she holds onto the hope that she can change the company from inside.

Apparently having learned little from Internet history, Meta has turned this book into a bestseller by trying to suppress it. Wynn-Williams managed one interview, with Business Insider, before an arbitrator’s injunction stopped her from promoting the book or making any “disparaging, critical or otherwise detrimental comments” related to Meta. This fits the man Wynn-Williams depicts who hates to lose so much that his employees let him win at board games.

Banning TikTok

Two days from now, TikTok may go dark in the US. Nine months ago, in April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act, banning TikTok if its Chinese owner, ByteDance, has not divested it by January 19, 2025.

Last Friday, January 10, the US Supreme Court heard three hours of arguments in consolidated challenges filed by TikTok and a group of TikTok users: TikTok, Inc. v. Garland and Furbaugh v. Garland. Too late?

As a precaution, Kinling Lo and Viola Zhou report at Rest of World, at least some of TikTok’s 170 million American users are building community arks elsewhere – the platform Xiaohongshu (“RedNote”), for one. This is not the smartest choice; it, too, is Chinese and could become subject to the same national security concerns, as could the other Chinese apps Makena Kelly reports at Wired are scooping up new US users. Ashley Belanger reports at Ars Technica that rumors say the Chinese are thinking of segregating these American newcomers.

“The Internet interprets censorship as damage, and routes around it,” EFF founder and activist John Gilmore told Time Magazine in 1993. He meant Usenet, which could get messages past individual server bans, but it’s really more a statement about Internet *users*, who will rebel against bans. That for sure has not changed despite the more concentrated control of the app ecosystem. People will access each other by any means necessary. Even *voice* calls.

PAFACA bans apps from four “foreign adversaries to the United States” – China, Russia, North Korea, and Iran. That being the case, Xiaohongshu/RedNote is not a safe haven. The law just hasn’t noticed this hitherto unknown platform yet.

The law’s passage in April 2024 was followed in early May by TikTok’s legal challenge. Because of the imminent sell-by deadline, the case was fast-tracked, and landed in the US District of Columbia Circuit Court of Appeals in early December. The district court upheld the law and rejected both TikTok’s constitutional challenge and its request for an injunction staying enforcement until the constitutional claims could be fully reviewed by the Supreme Court. TikTok appealed that decision, and so last week here we were. This case is separate from Free Speech Coalition v. Paxton, which SCOTUS heard *this* week and which challenges Texas’s 2023 age verification law (H.B. 1181), which could have even further-reaching Internet effects.

Here it gets silly. Incoming president Donald Trump, who originally initiated the ban but was blocked by the courts on constitutional grounds, filed an amicus brief arguing that any ban should be delayed until after he’s taken office on Monday because he can negotiate a deal. NBC News reports that the outgoing Biden administration is *also* trying to stop the ban and, per Sky News, won’t enforce it if it takes effect.

Previously, both guys wanted a ban, but I guess now they’ve noticed that, as Mike Masnick says at Techdirt, it makes them look out of touch to nearly half the US population. In other words, they moved from “Oh my God! The kids are using *TikTok*!” to “Oh, my God! The kids are *using* TikTok!”

The court transcript shows that TikTok’s lawyers made three main arguments. One: TikTok is now incorporated in the US, and the law is “a burden on TikTok’s speech”. Two: PAFACA is content-based, in that it selects types of content to which it applies (user-generated) and ignores others (reviews). Three: the US government has “no valid interest in preventing foreign propaganda”. Therefore, the government could find less restrictive alternatives, such as banning the company from sharing sensitive data. In answer to questions, TikTok’s lawyers claimed that the US’s history of banning foreign ownership of broadcast media is not relevant because it was due to bandwidth scarcity. The government’s lawyers countered with national security: the Chinese government could manipulate TikTok’s content and use the data it collects for espionage.

Again: the Chinese can *buy* piles of US data just like anyone else. TikTok does what Silicon Valley does. Pass data privacy laws!

Experts try to read the court. Amy Howe at SCOTUSblog says the justices seemed divided, but overall likely to issue a swift decision. At This Week in Google and Techdirt, Cathy Gellis says the proceedings have left her “cautiously optimistic” that the court will not undermine the First Amendment, a feeling seemingly echoed by some of the panel of experts who liveblogged the proceedings.

The US government appears to have tied itself up in knots: SCOTUS may uphold a Congressionally legislated ban that neither the old nor the new administration now wants, that half the population resents, and that won’t solve the US’s pervasive privacy problems. Lost on most Americans is the irony that the rest of the world has complained for years that under the PATRIOT Act foreign subsidiaries of US companies are required to send data to US intelligence. This is why Max Schrems keeps winning cases under GDPR.

So, to wrap up: the ban doesn’t solve the problem it purports to solve, and it’s not the least restrictive possibility. On the other hand, national security? The only winner may be, as Jason Koebler writes at 404Media, Mark Zuckerberg.

Illustrations: Logo of Douyin, ByteDance’s Chinese version of TikTok.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Disharmony

When an individual user does it, it’s piracy. When a major company does it…it may just get away with it.

At TechCrunch, Kyle Wiggers reports that buried in newly unredacted documents in the copyright case Kadrey v. Meta is testimony that Meta trained its Llama language model on a dataset of ebooks it torrented from LibGen. So, two issues. First, LibGen has been sued numerous times, fined, and ordered to shut down. Second: torrent downloads simultaneously upload to others. So, allegedly, Meta knowingly pirated copyrighted books to train its language model.

Kadrey v. Meta was brought by novelist Richard Kadrey, writer Christopher Golden, and comedian Sarah Silverman, and is one of a number of cases accusing technology companies of training language models on copyrighted works without permission. Meta claims fair use. Still, not a good look.

***

Coincidentally, this week CEO Mark Zuckerberg announced changes to the company’s content moderation policies in the US (for now), a move widely seen as pandering to the incoming administration. The main changes announced in Zuckerberg’s video clip: Meta will replace fact-checkers (“too politically biased”) with a system of user-provided “community notes” as on exTwitter, remove content restrictions that “shut out people with different ideas”, dial back its automated filters to focus solely on illegal content, rely on user reports to identify material that should be taken down, bring back political content, and move its trust and safety and content moderation teams from California to Texas (“where there is less concern about the bias of our teams”). He also pledges to work with the incoming president to “push back on governments around the world that are going after American companies and pushing to censor more”.

Journalists and fact-checkers are warning that misinformation and disinformation will be rampant, and many are alarmed by the specifics of the kind of thing people are now allowed to say. Zuckerberg frames all this as a “return” to free expression while acknowledging that “we’re going to catch less bad stuff”.

At Techdirt, Mike Masnick begins as an outlier, arguing that many of these changes are actually sensible, though he calls the reasoning behind the Texas move “stupid”, and deplores Zuckerberg’s claim that this is about “free speech” and removing “censorship”. A day later, after seeing the company’s internal guidelines unearthed by Kate Knibbs at Wired, he deplores the new moderation policy: “hateful people are now welcome”.

More interesting for net.wars purposes is the international aspect. As the Guardian says, Zuckerberg can’t bring these changes across to the EU or UK without colliding headlong with the UK’s Online Safety Act and the EU’s Digital Markets Act. Both lay down requirements for content moderation on the largest platforms.

And yet, it’s possible that Zuckerberg may also think these changes help lay the groundwork to meet the EU/UK requirements. Meta will still remove illegal content, which it’s required to do anyway. But he may think there’s a benefit in dialing back users’ expectations about what else Meta will remove, in that platforms must conform to the rules they set in their terms and conditions. Notice-and-takedown is an easier standard to meet than performance indicators for automated filters. It’s also likely cheaper. This approach is, however, the opposite of what critics like Open Rights Group have predicted the law will bring; ORG believes that platforms will instead over-moderate in order to stay out of trouble, chilling free speech.

Related is an interesting piece by Henry Farrell at his Programmable Matter newsletter, who argues that the more important social media speech issue is that what we read there determines how we imagine others think rather than how we ourselves think. In other words, misinformation, disinformation, and hate speech change what we think is normal, expanding the window of what we think other people find acceptable. That has resonance for me: the worst thing about prominent trolls is they give everyone else permission to behave as badly as they do.

***

It’s now 25 years since I heard a privacy advocate predict that the EU’s then-new data protection rights could become the basis of a trade war with the US. While instead the EU and US have kept trying to find a bypass that will withstand a legal challenge from Max Schrems, the approaches seem to be continuing to diverge, and in more ways.

For example, last week in the longrunning battle over network neutrality, judges on the US Sixth Circuit Court of Appeals ruled that the Federal Communications Commission was out of line when it announced rules in 2023 that classified broadband suppliers as common carriers under Title II of the Communications Act (1934). This judgment is the result of the Supreme Court’s 2024 decision to overturn the Chevron deference, setting courts free to overrule government agencies’ expertise. And that means the end in the US (until or unless Congress legislates) of network neutrality, the principle that all data flowing across the Internet is created equal and should be transmitted without fear or favor. Network neutrality persists in California, Washington, and Colorado, whose legislatures have passed laws to protect it.

China has taught us that the Internet is more divisible by national law than many thought in the 1990s. Copyright law may be the only thing everyone agrees on.

Illustrations: Drunk parrot in a South London garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The lost Internet

As we open 2025 it would be traditional for an Old Internet Curmudgeon to rhapsodize about the good, old days of the 1990s, when the web was open, snark flourished at sites like suck.com, no one owned social media (that is, Usenet and Internet Relay Chat), and even the spam was relatively harmless.

But that’s not the period I miss right now. By “lost” I mean the late 2000s, when we shifted from an Internet of largely unreliable opinions to an Internet full of fact-based sites you could trust. This was the period during which Wikipedia (created 2001) grew up and Open Street Map (founded 2004) was born, joining earlier sites like the Internet Archive (founded 1996) and Snopes (1994). In that time, Google produced useful results, blogs flourished, and, before it killed them, if you asked on Twitter for advice on where to find a post box near a point in Liverpool, you’d get correct answers straight to your mobile phone.

Today, so far: I can’t get a weather app to stop showing the location I was at last week and show the location I’m at this week. Basically, the app is punishing me for not turning on location tracking. The TV remote at my friend’s house doesn’t fully work, and she doesn’t know why or how to fix it; she works around it with a second remote whose failings are complementary. No calendar app works as well as the software I had from 1995 to 2001 (it synced! without using a cloud server and third-party account!). At the supermarket, the computer checkout system locked up. It all adds up to a constant white noise of frustration.

We still have Wikipedia, Open Street Map, Snopes, and the Internet Archive. But this morning a Mastodon user posted that their ten-year-old says you can’t trust Google any more: “It just returns ‘a bunch of made-up stuff’.” When ten-year-olds know your knowledge product sucks…

If generative AI were a psychic we’d call what it does cold reading.

At his blog, Ed Zitron has published a magnificent, if lengthy, rant on the state of technology. “The rot economy”, he calls it, and says we’re all victims of constant low-level trauma. Most of his complaints will be familiar: the technologies we use are constantly shifting and mostly for the worse. My favorite line: “We’re not expected to work out ‘the new way to use a toilet’ every few months because somebody decided we were finishing too quickly.”

Pause to remember, nostalgically, 2018, when a friend observed that technology wasn’t exciting any more, and 2019, when many more people thought the Internet was no longer “fun”. Those were happy days. Now we are being overwhelmed with stuff we actively don’t want in our lives. Even hacked Christmas lights sound miserable for the neighbors.

***

I have spent some of these holidays editing a critique of Ofcom’s regulatory plans under the Online Safety Act (we all have our own ideas about holidays), and one thing seems clear: the splintering Internet is only going to get worse.

Yesterday, firing up Chrome because something didn’t work in Firefox, I saw a fleeting popup to the effect that because I may not be over 18 there are search results Google won’t show me. I don’t think age verification is in force in the Commonwealth of Pennsylvania – US states keep passing bills, but they hit legal challenges.

Age verification has been “imminent” in the UK for so long – it was originally included in the Digital Economy Act 2017 – that it seems hard to believe it may actually become a reality. But: sites within the Act’s scope will have to complete an “illegal content risk assessment” by March 16. So the fleeting popup felt like a visitation from the Ghost of Christmas Future.

One reason age verification was dropped back then – aside from the distractions of Brexit – was that the mechanisms for implementing it were all badly flawed – privacy-invasive, ineffective, or both. I’m not sure they’ve improved much. In 2022, France’s data protection watchdog checked them out: “CNIL finds that such current systems are circumventable and intrusive, and calls for the implementation of more privacy-friendly models.”

I doubt Ofcom can square this circle, but the costs of trying will include security, privacy, freedom of expression, and constant technological friction. Bah, humbug.

***

Still, one thing is promising: the rise of small, independent media outlets who are doing high-quality work. Joining established efforts like nine-year-old The Ferret, ten-year-old Bristol Cable, and five-year-old Rest of World are year-and-a-half-old 404 Media and newcomer London Centric. 404 Media, formed by four journalists formerly at Vice’s Motherboard, has been consistently making a splash since its founding; this week Jason Koebler reminds us that Elon Musk’s proactive willingness to unlock the blown-up Cybertruck in Las Vegas and provide comprehensive data on where it had been, including video from charging stations, without a warrant or court order, could apply to any Tesla customer at any time. Meanwhile, in its first three months London Centric’s founding journalist, Jim Waterson, has published pieces on the ongoing internal mess at Transport for London resulting from the August cyberattack and on bicycle theft in the capital. Finally, if you’re looking for high-quality American political news, veteran journalist Dan Gillmor curates it for you every day in his Cornerstone of Democracy newsletter.

The corporate business model of journalism is inarguably in trouble, but journalism continues.

Happy new year.

Illustrations: The Marx Brothers in their 1929 film, The Cocoanuts, newly released into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Non-playing characters

It’s the most repetitive musical time of the year. Stores have been torturing their staff with an endlessly looping soundtrack of the same songs – in some cases since August. Even friends are playing golden Christmas oldies from the 1930s to 1950s.

Once upon a time – within my lifetime, in fact – stores and restaurants were silent. Into that silence came Muzak. I may be exaggerating: Wikipedia tells me the company dates to 1934. But it feels true.

The trend through all those years has been toward turning music into a commodity and pushing musicians into the poorly paid background by rerecording “for hire” to avoid paying royalties, among other tactics.

That process has now reached its nadir with the revelation by Liz Pelly at Harper’s Magazine that Spotify has taken to filling its playlists with “fake” music – that is, music created at scale by production companies and assigned to “ghost artists” who don’t really exist. For users looking for playlists of background music, it’s good enough; for Spotify it’s far more lucrative than streaming well-known artists who must be paid royalties (even at greatly reduced rates from the old days of radio).

Pelly describes the reasoning behind the company’s “Perfect Fit Content” program this way: “Why pay full-price royalties if users were only half listening?” This is music as lava lamp.

And you thought AI was going to be the problem. But no, the problem is not the technology, it’s the business model. At The New Yorker, Hua Hsu ruminates on Pelly’s forthcoming book, Mood Machine, in terms of opportunity costs: what is the music we’re not hearing as artists desperate to make a living divert their efforts to conform to today’s data-driven landscape? I was particularly struck by Hsu’s data point that Spotify has stopped paying royalties on tracks that are streamed fewer than 1,000 times in a year. From those who have little, everything is taken.

The kind of music I play – traditional and traditional-influenced contemporary – is the opposite of all this. Except for a brief period in the 1960s (“the folk scare”), folk musicians made our own way. We put out our own albums long before it became fashionable, and sold from the stage because we had to. If the trend continues, most other musicians will either become like us or be non-playing characters in an industry that couldn’t exist without them.

***

The current Labour government is legislating the next stage of reforming the House of Lords: the remaining 92 hereditary peers are to be ousted. This plan is a mere twig compared to Keir Starmer’s stated intention in 2020 and 2022 to abolish it entirely. At the Guardian, Simon Jenkins is dissatisfied: remove the hereditaries, sure, but, “There is no mention of bishops and donors, let alone Downing Street’s clothing suppliers and former secretaries. For its hordes of retired politicians, the place will remain a luxurious club that makes the Garrick [club] look like a greasy spoon.”

Jenkins’ main question is the right one: what do you replace the Lords with? It is widely known among the sort of activists who testify in Parliament that you get deeper and more thoughtful questions in the Lords than you ever do in the Commons. Even if you disagree with members like Big Issue founder John Bird and children’s rights campaigner and filmmaker Beeban Kidron, or even the hereditary Earl of Erroll, who worked in the IT industry and has been a supporter of digital rights for years, it’s clear they’re offering value. Yet I’d be surprised to see them stand for election, and as a result it’s not clear that a second wholly elected chamber would be an upgrade.

With change afoot, it’s worth calling out the December 18 Lords Grand Committee debate on the data bill. I tuned in late, just in time to hear Kidron and Timothy Clement-Jones dig into AI and UK copyright law. This is the Labour plan to create an exception to copyright law so AI companies can scrape data at will to train their models. As Robert Booth writes at the Guardian, there has been, unsurprisingly, widespread opposition from the creative sector. Among other naysayers, Kidron compared the government’s suggested system to asking shopkeepers to “opt out of shoplifters”.

So they’re in this ancient setting, wearing modern clothes, using the – let’s call it – *vintage* elocutionary styling of the House of Lords…and talking intelligently and calmly about the iniquity of vendors locking schools into expensive contracts for software they don’t need, and AI companies’ growing disregard for robots.txt. Awesome. Let’s keep that, somehow.

***

In our 20 years of friendship I never knew that John “JI” Ioannidis, who died last month, had invented technology billions of people use every day. As a graduate student at Columbia, where he received his PhD in 1993, in work technical experts have called “transformative”, Ioannidis solved the difficult problem of forwarding Internet data to devices moving around from network to network: Mobile IP, in other words. He also worked on IPSec, trust management, and prevention of denial of service attacks.

“He was a genius,” says one of his colleagues, and “severely undercredited”. He is survived by his brother and sister, and an infinite number of friends who went for dim sum with him. RIP.

Illustrations: Cartoon by veteran computer programmer Jef Poskanzer. Used by permission.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Loose ends

Privacy technologies typically fail for one of two reasons: 1) they’re too complicated and/or expensive to find widespread adoption among users; 2) sites and services ignore, undermine, or bypass them in order to preserve their business models. In the first category are numerous encryption-related attempts to secure communications, which failed repeatedly in the marketplace, usually because the resulting products were too technically difficult for most users. In the end, encrypted messaging didn’t really take off until WhatsApp built it into its service.

This week saw a category two failure: Mozilla announced it is removing the Do Not Track option from Firefox’s privacy settings. DNT is simple enough to implement if you can stand to check and change settings, but it falls on the wrong side of modern business models and, other than in California, the US has no supporting legislation to make it enforceable. Granted, Firefox is a minority browser now, but the moment feels significant for this 13-year-old technology.

As Kevin Purdy explains at Ars Technica, DNT began as an FTC proposal, based on work by Christopher Soghoian and Sid Stamm, that aimed to create a mechanism for the web similar to the “Do Not Call” list for telephone networks.

The world in which DNT seemed a hopeful possibility seems almost quaint now: then, one could still imagine that websites might voluntarily respect the signal web browsers sent indicating users’ preferences. Do Not Call, by contrast, was established by US federal legislation. Despite various efforts, the US failed to pass legislation underpinning DNT, and it never became a web standard. The closest it has come to the latter is Section 2.12 of the W3C’s Ethical Web Principles, which says, “People must be able to change web pages according to their needs.” Can I say I *need* to not be tracked?
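For context, the mechanism itself could hardly be simpler: DNT was a single HTTP request header – a browser with the setting enabled sent “DNT: 1” with every request – and honoring it was always voluntary on the server’s side. A minimal sketch of what a compliant server would have needed to do (the function name here is hypothetical, not any real API):

```python
# Sketch of voluntary server-side Do Not Track handling. "DNT: 1" is
# the real header a browser sent; tracking_allowed() is a hypothetical
# helper, not part of any actual framework.

def tracking_allowed(headers: dict) -> bool:
    """Return False if the client has asked not to be tracked."""
    # HTTP header names are case-insensitive, so normalize the keys.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

# A compliant server would branch on the result:
if tracking_allowed({"DNT": "1", "User-Agent": "ExampleBrowser/1.0"}):
    pass  # set analytics cookies, log for ad targeting, etc.
else:
    pass  # serve the page without tracking
```

The entire scheme rested on that one conditional being written and respected by each website – which is exactly why, absent legislation, it never amounted to more than a polite request.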

Even at the time it seemed doubtful that web companies would comply. But DNT also suffered from unfortunate timing: it arrived just as the twin onslaught of smartphones and social media was changing the ethos that built the open web. Since then, as Cory Doctorow wrote earlier this year, the incentives have aligned to push web browsers to become faithless user agents, and conventions mean less and less.

Ultimately, DNT only ever worked insofar as users could trust websites to honor their preference. As it’s become clear they can’t, ad blockers have proliferated, depriving sites of ad revenue they need to survive. Had DNT been successful, perhaps we’d have all been better off.

***

Also on the way out this week are Cruise’s San Francisco robotaxis. My last visit to San Francisco, about a year ago, was the first time I saw these in person. Most of the ones I saw were empty Waymos, perhaps in transit to a passenger, perhaps just pointlessly clogging the streets. Around then, a Cruise robotaxi ran over a pedestrian who’d been hit by another car and then dragged her 20 feet. California promptly suspended Cruise’s license. Technology critic Paris Marx thought the incident would likely be Cruise’s “death knell”. And so it’s proving. The announcement from GM, which acquired Cruise in 2016 for $1 billion, leaves just Waymo standing in the US self-driving taxi business, with Tesla saying it will enter the market late next year.

I always associate robotaxis with Vernor Vinge‘s 2006 novel Rainbows End. In it, Vinge imagined a future in which robotaxis arrived within minutes of being hailed and replaced both public transport and private car ownership. By 2012 or so, his fictional imagining had become real-life projection, and many were predicting that our streets would imminently be filled with self-driving cars, taxis or not. In 2017, the conversation was all about what ethics to program into them and reclaiming urban space. Now, that imagined future seems to be receding, as skeptics predicted it would.

***

American journalism has long operated under the presumption that the stories it produces should be “neutral”. Now, at the LA Times, CEO Patrick Soon-Shiong thinks he can enforce this neutrality by running an AI-based “bias meter” over the paper’s stories. If you remember, in the late stages of the US presidential election, Soon-Shiong blocked the paper from endorsing Kamala Harris. Reports say that the bias meter, due out next month, is meant to identify any bias the story’s source has and then deliver “both sides” of that story.

This is absurd. Few news stories have just two competing sides. A biased source can’t be countered by rewriting the story unless you include more sources and points of view, which means additional research. Most important, AI can’t think.

But readers can. And so what this story says is that Soon-Shiong doesn’t trust either the journalists who work for him or the paper’s readers to draw the conclusions he wants. If he knew more about journalism, he’d know that readers generally don’t adopt opinions just because someone tells them to. The far greater power, I recall reading years ago, lies in determining what readers *think about* by deciding what topics are important enough to cover. There’s bias there, too, but Soon-Shiong’s meter won’t show it.

Illustrations: Dominic Wilcox’s concept driverless sleeper car, 2014.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Blue

The inxodus onto Bluesky noted here last week continues apace: the site’s added a million users a day for more than a week, gradually slowing down from 12 new users a second, per the live counter.

These are not lurkers. Suddenly, the site feels like Twitter circa 2009/2010, when your algorithm-free feed was filled with interesting people sharing ideas, there were no ads, and abuse was in its infancy. People missing in action for the last year or two are popping up; others I’ve wished would move off exTwitter so I could stop following them there have suddenly joined. Mastodon is also seeing an uptick, and (I hear) Threads continues to add users without, for me, adding interest to match…. I doubt this diaspora is all “liberals”, as some media have it – or if they are, it won’t be long before politicians and celebrities note the action is elsewhere and rush to stay relevant.

It takes a long time for a social medium to die if it isn’t killed by a corporation. Even after this week’s bonanza, Bluesky’s entire user base fits inside 5% of exTwitter, which still has around 500 million users as of September, about half of them active daily. What matters most are *posters*, who are about 10% or less of any social site’s user base. When they leave, engagement plummets, as shown in a 2017 paper in Nature.

An example in action: at Statnews, Katie Palmer reports that the science and medical community is adopting Bluesky.

I have to admit to some frustration over this: why not Mastodon? As retro-fun as this week on Bluesky has been, the problem noted here a few weeks ago of Bluesky’s venture capital funding remains. Yes, the company is incorporated as a public benefit company – but venture capitalists want exit strategies and return on investment. That tension looms.

Mastodon is a loose collection of servers that all run the same software, which in turn is written to the open protocol Activity Pub. Gergely Orosz has deep-dive looks at Bluesky’s development and culture; the goal was to write a new open protocol, AT, that would allow Bluesky, similarly, to federate with others. There is already a third-party bit of software, Bridgy, that provides interoperability among Bluesky, any system based on Activity Pub (“the Fediverse”, of which Mastodon is a subset), and the open web (such as blogs). For the moment, though, Bluesky remains the only site running its AT protocol, so the more users Bluesky adds, the more it feels like a platform rather than a protocol. And platforms can change according to the whims of their owners – which is exactly what those leaving exTwitter are escaping. So: why not Mastodon, which doesn’t have that problem?

In an exchange on Bluesky, Palmer said that those who mentioned it said they found Mastodon “too difficult to figure out”.

It can’t be the thing itself; typing and sending vary little from site to site. The problem has to be the initial uncertainty about choosing a server. What you really want is for institutions to set up their own servers and have their people sign up there; this is what the BBC and the German government have done, and it has the significant advantage that posting from an address on that server automatically verifies the poster as an authentic staffer. For most organizations, though, that’s far too much heavy lifting. NPR simply found a server and opened an account, like I did when I joined Mastodon in 2019.

All that said, how Mastodon administrators will cope with increasing usage and resulting costs also remains an open question as discussed here last year.

So: some advice as you settle into your new online home:

– Plan for the site’s eventual demise. “On the Internet your home will always leave you” (I have lost the source of this quote). Every site, no matter how big and fast-growing it is now, or how much you all love it, will at some point in the future either die of an outmoded business model (AOL forums); get bought and closed down (Television Without Pity, CompuServe, Geocities); become intolerable because of cultural change (exTwitter); or be abandoned because the owner loses interest (countless blogs and comment boards). Plan for that day. Collect alternative means of contacting the people you come to know and value. Build multiple connections.

– Watch the data you’re giving the site. No one in 2007, when I joined Twitter, imagined their thousands of tweets would become fodder for a large language model to benefit one of the world’s richest multi-billionaires.

– If you are (re)building an online community for an organization, own that community. Use social media, by all means, but use it to encourage people to visit the organization’s website, or join its fully-controlled mailing list or web board. Otherwise, one day, when things change, you will have to start over from scratch, and may not even know who your members are or how to reach them.

– Don’t worry too much about the “filter bubble”, as John Elledge writes. Studies generally agree that social media users encounter more, and more varied, sources of news than non-users do. As he says, only journalists have to read widely among people whose views they find intolerable (see also the late, great Molly Ivins).

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.