Universal service

Last week a couple of friends and I got around to trying out Techdirt's 2025 game, One Billion Users. This card-based game has each player trying to build a social network while keeping toxicity under control.

First impression: the instructions are bananas complicated. There are users, influencers, events, hotfixes, safeguards…and a Troll, which everyone who understood the instructions tried to push off on someone else ASAP. One of our number became the gamemaster, reading out the instructions we struggled to remember. You win by adding (and subtracting) points based on the cards you’re holding when the GAME OVER card turns up.

Even in a single game where we were feeling our way through, different strategies emerged. One player did her best to build a smaller, friendlier network. She succeeded – but it wasn’t a winning strategy. Without any planning on my part, my network ended up medium-sized. I was constrained by an event card stopping me from adding new users, and then, catastrophically, “gifted” the Troll. I came in second. The winner had built a huge number of users, successfully dumped the Troll (thank you *so* much), and acquired several influencers who brought their own communities. We eventually identified the networks we’d built, in order: Tumblr, Twitter (not, I think, X), Facebook.

In a more detailed review, Adi Robertson at The Verge traces the roots of the game’s design to a game we played a lot in my childhood but that I no longer remember very well: Mille Bornes (“A Thousand Milestones”). A change of theme, some added twists, I see it now.

We will try this game again. I didn’t *want* to build the Torment Nexus!

***

It appears the BBC wants to switch off Freeview in 2034. For non-UK readers: Freeview is digital terrestrial television – that is, broadcast. It’s operated by a joint venture among the UK’s public service broadcasters (PSBs) – the BBC, ITV, Channel 4, and Channel 5. With any television made since 2008, or another receiver device, you can access 85 channels without paying anything beyond the BBC’s license fee. That, too, will soon be under review; the BBC’s charter is due for renewal in 2027. Freeview is one piece of a larger puzzle.

As Mark Sweney explains at the Guardian, the Department of Culture, Media, and Sport is reviewing options for Freeview’s future, and is considering three alternatives presented by Ofcom (PDF), the broadcast regulator. One: upgrade the present infrastructure. Two: maintain it as a cut-down service offering only the PSBs’ core channels. Three: move entirely to streaming.

The broadcasters, Sweney writes, favor the third option, choosing 2034 as a logical time to shut down Freeview because that’s when their contract with their network operator expires. By then, projections say that about 1.8 million people will still be dependent on Freeview, a long way down from today’s estimated 12 million. Many more homes, like mine, use both. The Ofcom report says that in 2023 39% of TV viewing was via broadcast.

Most of the discussion focuses on costs: updating the Freeview infrastructure is expensive for broadcasters, switching to streaming is an ongoing expense for individuals. Households would need a broadband subscription, new equipment, and the streaming app Freely, which was launched in 2024. There is a petition opposing the change.

This discussion is happening shortly after the Broadcasters’ Audience Research Board (BARB) announced that the number of YouTube viewers passed the number of BBC viewers for the first time. However, as Dekan Apajee writes at The Conversation, even on YouTube people are still watching the BBC’s output, even if they’re not aware of it. Apajee is more concerned about context and finding ways to distinguish public service broadcasting and its values from the jumble of everything else on YouTube. How do the PSBs meet the requirement for universal service? Ofcom’s more recent report on the future of public service media (PDF) also warns of this loss of discoverability amid increased competition.

Adding to that, the BBC is reportedly considering a formal content agreement with YouTube that would have it publish some content aimed at younger audiences there before showing it on its own platforms. It’s odd timing, as so many are warning against depending on US technology, as the economist Paul Krugman wrote yesterday. The loss of audience data has been a theme in the rise of streamers – and YouTube has just withdrawn from BARB’s audience measurement system, saying the organization violated YouTube’s terms and conditions.

Remarkably little of this discussion considers the potential loss of privacy inherent in forcing everyone to move to “smart” data collection machines (TVs, phones, computers). Is there a future in which it’s still possible to watch video content anonymously? (Yes, but they call it “piracy”.)

The BBC seems to believe that transitioning to streaming can be smooth. Sweney cites the years to 2012, when analog TV was switched off in favor of digital broadcast, which he describes as “near seamless” despite warnings of potential exclusion. Maybe so, but a lot of televisions were wastefully dumped, and that conversion was a one-time cost, not a permanent monthly drain.

At a meeting yesterday about building better technology, one attendee passionately advocated trustworthy content, presented by trusted sources. Ah, I thought, she wants to reinvent the BBC. Doesn’t everyone?

Illustrations: Family watching television in 1958 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates how advertising will work – perhaps mid-roll? The obvious answer is to place the ads between the list of nominees and opening the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. In 1998, the peak, 55.2 million, after VCRs, but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may also not be as uniquely important as it was historically. The Academy’s move fits into many other similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Centre for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lived in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. HateAid helped a client file suit against Google in August 2025, and sued X in July for failing to remove criminal antisemitic content. The Centre for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).


Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler finds in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes it easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The incident reminded me that this type of fraud in general seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee Inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler is correct in his assessment, until or unless advertisers rebel the incentives are misplaced and change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4Chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transactions Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that it is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, on X Byrne is advocating that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one. The draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July in a report the House of Representatives Judiciary Committee called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “least common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at ACLU.


Cautionary tales

I’ve been online for nearly 34 years, and I’m thinking of becoming a child. Or at least, a child to big user-to-user social media services, which next week will start asking for proof of adulthood. On July 25, the new age verification requirements under the Online Safety Act come into effect in the UK. The regulator, Ofcom, has published a guide.

Plenty of companies aim to join this new market. Some are familiar: credit scorers Experian and Transunion. Others are new: Yoti, which we saw demonstrated back in 2016, and World, the six-year-old startup backed by Sam Altman and Andreessen Horowitz, which recently did a promotional tour for the UK launch of its Orb identification system. Summary: many happy privacy words, but still dubious.

Reddit picked Persona; Dearbail Jordan at the BBC says Redditors will need to upload either a selfie for age estimation or a government-issued ID. Reddit says it will not see this data, only storing each user’s verification status along with the birth date they’ve (optionally) provided.

Bluesky has chosen Kids’ Web Services from Epic Games. The announcement says KWS accepts multiple options: payment cards, ID scans, and face scans. Users who decline to supply this information will be denied access to adult content and direct messaging. How much do I care about either? Would I rather just be a child to two-year-old Bluesky?

On older sites my adulthood ought to be obvious: I joined Twitter/X in 2008 and Reddit in 2015. Do the math, guys! I suppose there is a chance I could have created the account, forgotten it, and then revived it for a child (the “older brother problem”), but I’m not sure these third-party verifiers solve that either.

Everyone wants to protect children. But it doesn’t make sense to do it by creating a system that exposes everyone, including children, to new privacy risks. In its report on how to fix the OSA, the Open Rights Group argues that interoperability and portability should be first principles, and that users should be able to choose providers and methods. Today, the social media companies don’t see age verification data; in five years will they be buying up those providers? These first steps matter, as they are setting the template for what is to come.

This is the opening of a floodgate. On June 27 the US Supreme Court ruled in Free Speech Coalition v. Paxton to uphold a law requiring pornographic websites to verify users’ ages through government-issued ID. At Techdirt, Mike Masnick said the ruling took a chainsaw to the First Amendment.

It’s easy to predict that there will be scandals surrounding the data age verifiers collect, and others where technological failures let children access the wrong sort of content. We’ll hear less about the frustrations of people who are blocked by age verification from essential information. Meanwhile, child safety folks will continue pushing for new levels of control.

The big question is this: how will we know if it’s working? What does “success” look like?

***

At Platformer, Casey Newton covers Substack’s announcement that it has closed a series C funding round of $100 million, valuing the company at $1.1 billion. The eight-year-old company gets to say it’s a unicorn.

Newton tries to understand how Substack is worth that. He predicts – logically – that its only choice to justify its venture capitalists’ investment will be rampant enshittification. These guys don’t put in that kind of money without expecting a full-bore return, which is why Newton is dubious about the founders’ promise to invest most of that newly-raised capital in creators. Recall the stages Cory Doctorow laid out: first they amass as many users as possible; then they abuse those users to amass as many business customers (advertisers) as possible; then they squeeze everyone.

Substack, which announced four months ago that it – or, more correctly, its creators – has more than 5 million paid subscriptions, is different in that its multi-sided market structure is more like Uber or Amazon Marketplace than like a social media site or traditional publisher. It has users (readers and listeners), creators (like Uber’s drivers or Amazon’s sellers), and customers (advertisers). Viewed that way, it’s easy to see Substack’s most likely path: raise prices (users and advertisers), raise thresholds and commissions (creators), and, like Amazon, force sellers (creators) into using fee-based additional services in order to stay afloat. Plus, it must crush the competition. See similar math from Anil Dash.

Less ponderable is the headwind of Substack’s controversial hospitality to extremists, noted in 2023 by Jonathan Katz at The Atlantic. Some creators – like Newton – have opted to leave for competitor Ghost, which is both open source and cheaper. Many friends refuse to pay Substack even when they want to support creators whose work they admire. At the time, Stephen Bush responded at the Financial Times that Substack should admit that it’s not a publisher but a “handy bit of infrastructure for sending newsletters”. Is that worth $1.1 billion?

Like earlier Silicon Valley companies, Substack is planning to reverse its previous disdain for advertising, as Benjamin Mullin and Jessica Testa report at the New York Times. The company is apparently also looking forward to embracing social networking.

So, no really new ideas, then?

Illustrations: Unicorn (by Pearson Scott Foresman via Wikimedia).


Revival

There appears to be media consensus: “Bluesky is dead.”

At Commentary, James Meigs calls Bluesky “an expression of the left’s growing hypersensitivity to ideas leftists find offensive”, and says he accepts exTwitter’s “somewhat uglier vibe” in return for “knowing that right-wing views aren’t being deliberately buried”. Then he calls Bluesky “toxic” and a “hermetically sealed social-media bubble”.

At New Media and Marketing, Rich Meyer says Bluesky is in decline and engagement is dropping, and exTwitter is making a comeback.

At Slate, Alex Kirshner and Nitish Pahwa complain that Bluesky feels “empty”, say that its too-serious users are abandoning it because it isn’t fun, and compare it to a “small liberal arts college” and exTwitter to a “large state university”.

At The Spectator, Sean Thomas regrets that “Bluesky is dying” – and claims to have known it would fail from his first visit to the site, “a bad vegan cafe, full of humorless puritans”.

Many of these pieces – Mark Cuban at Fortune, for example, and Megan McArdle at the Washington Post – blame a “lack of diversity of thought”.

As Mike Masnick writes on TechDirt in its defense (Masnick is a Bluesky board member), “It seems a bit odd: when something is supposedly dying or irrelevant, journalists can’t stop writing about it.”

Have they so soon forgotten 2014, when everyone was writing that Twitter was dead?

Commentators may be missing that success for Bluesky looks different: it’s trying to build a protocol-driven ecosystem, not a site. Twitter had one, but destroyed it as its ad-based business model took over. Both Bluesky and Mastodon, which the media largely ignore, aim to let users create their own experience and are building tools that give users as much control as possible. The ability to block people you don’t want to deal with seems to offend some commentators, which is odd, since blocking is a feature every social site has.

All social media have ups and downs, especially when they’re new (I really wonder how many of these commentators experienced exTwitter in its early days or have looked at Truth Social’s user numbers). Settling into a new environment and rebuilding take time – it may look like the old place, but its affordances are different, and old friends are missing. Meanwhile, anecdotally, some seem to be leaving social media entirely, driven away by privacy issues, toxic behavior, distaste for platform power and its owners, or simply distracted by life. Few of us *have* to use social media.

***

In 2002, the UK’s Financial Services Authority was the first to implement an EU directive allowing private organizations to issue their own electronic money without a banking license if they could meet the capital requirements. At the time, the idea seemed kind of cute, especially since there was a plan to waive some of the requirements for smaller businesses. Everyone wanted micropayments; here was a framework of possibility.

And then nothing much happened. The Register’s report (the first link above) said that organizations such as the Post Office, credit card companies, and mobile operators were considering launching emoney offerings. If they did, the results sank without trace. Instead, we’re all using credit/debit cards to pay for stuff online, just as we were 23 years ago. People are reluctant to trust weird, new-fangled forms of money.

Then, in 2008, came cryptocurrencies – money as lottery ticket.

Last week, the Wall Street Journal reported that Amazon, Wal-Mart, and other multinationals are exploring stablecoins as a customer payment option – in other words, issuing their own cryptocurrencies, pegged to the US dollar. As Andrew Kassel explains at Investopedia, the result could be to bypass credit cards and banks, saving billions in fees.

It’s not clear how this would work, but I’m suspicious of the benefits to consumers. Would I have to buy a company’s stablecoin before doing business with it? And maintain a floating balance? At Axios, Brady Dale explores other possibilities. Ultimately, it sounds like a return to the 1970s, before multipurpose credit cards, when people had store cards from the retailers they used frequently, and paid a load of bills every month. Dale seems optimistic that this could be a win for consumers as well as retailers, but I can’t really see it.

In other words, the idea seems less cute now, less fun technological experiment, more rapacious. There’s another, more disturbing, possibility: the return of the old company town. Say you work for Amazon or Wal-Mart, and they offer you a 10% bonus for taking your pay in their stablecoin. You can’t spend it anywhere but their store, but that’s OK, right, because they stock everything you could possibly want? A modern company town doesn’t necessarily have to be geographical.

I’ve long thought that company towns, which allowed companies to effectively own employees, are the desired endgame for the titans. Elon Musk is heading that way with Starbase, Texas, now inhabited primarily by SpaceX employees, as Elizabeth Crisp reports at The Hill.

I don’t know if the employees who last month voted enthusiastically for the final incorporation of Starbase realize how abusive those old company towns were.

Illustrations: The Starbase sign adjoining Texas Highway 4, in 2023 (via Jenny Hautmann at Wikimedia).


A thousand small safety acts

“The safest place in the world to be online.”

I think I remember that slogan from Tony Blair’s 1990s government, when it primarily related to ecommerce. It morphed into child safety – for example, in 2010, when the first Digital Economy Act was passed, or 2017, when the Online Safety Act, passed in 2023 and entering into force in March 2025, was but a green paper. Now, Ofcom is charged with making it reality.

As prior net.wars posts attest, the 2017 green paper began with the idea that social media companies could be forced to pay, via a levy, for the harm they cause. The key remaining element of that is a focus on the large, dominant companies. The green paper nodded toward designing proportionately for small businesses and startups. But the large platforms pull the attention: rich, powerful, and huge. The law that’s emerged from these years of debate takes in hundreds of thousands of divergent services.

On Mastodon, I’ve been watching lawyer Neil Brown scrutinize the OSA with a particular eye on its impact on the wide ecosystem of what we might call “the community Internet” – the thousands of web boards, blogs, chat channels, and who-knows-what-else with no business model because they’re not businesses. As Brown keeps finding in his attempts to provide these folks with tools they can use, they are struggling to understand and comply with the act.

First things first: everyone agrees that online harm is bad. “Of course I want people to be safe online,” Brown says. “I’m lucky, in that I’m a white, middle-aged geek. I would love everyone to have the same enriching online experience that I have. I don’t think the act is all bad.” Nonetheless, he sees many problems with both the act itself and how it’s being implemented. In contacts with organizations critiquing the act, he’s been surprised to find how many unexpectedly agree with him about the problems for small services. However, “Very few agreed on which was the worst bit.”

Brown outlines two classes of problem: the act is “too uncertain” for practical application, and the burden of compliance is “too high for insufficient benefit”.

Regarding the uncertainty, his first question is, “What is a user?” Is someone who reads net.wars a user, or just a reader? Do they become a user if they post a comment? Do they start interacting with the site when they read a comment, make a comment, or only when they reply to another user’s comment? In the fediverse, is someone who reads postings he makes via his private Mastodon instance its user? Is someone who replies from a different instance to that posting a user of his instance?

His instance has two UK users – surely insignificant. Parliament didn’t set a threshold for the “significant number of UK users” that brings a service into scope, so Ofcom says it has no answer to that question. But if you go by percentage, 100% of his user base is in Britain. Does that make Britain his “target market”? Does having a domain name in the UK namespace? What is a target market for the many community groups running infrastructure for free software projects? They just want help with planning, or translation; they’re not trying to sign up users.

Regarding the burden, the act requires service providers to perform a risk assessment for every service they run. A free software project will probably have a dozen or so – a wiki, messaging, a documentation server, and so on. Brown, admittedly not your average online participant, estimates that he himself runs 20 services from his home. Among them is a photo-sharing server, for which the law would have him write contractual terms of service for the only other user – his wife.

“It’s irritating,” he says. “No one is any safer for anything that I’ve done.”

So this is the mismatch. The law and Ofcom imagine a business with paid staff signing up users to profit from them. What Brown encounters is more like a stressed-out woman managing a small community for fun after she puts the kids to bed.

Brown thinks a lot could be done to make the act less onerous for the many sites that are clearly not the problem Parliament was trying to solve. Among them: carve out low-risk services. This isn’t just a question of size, since a tiny terrorist cell or a small ring sharing child sexual abuse material can pose acres of risk. But Brown thinks it shouldn’t be too hard to come up with criteria to rule services out of scope, such as a limited user base coupled with a service “any reasonable person” would consider low risk.

Meanwhile, he keeps an In Memoriam list of the law’s casualties to date. Some have managed to move or find new owners; others are simply gone. Not on the list are non-UK sites that now block UK users. Others, as Brown says, just won’t start up at all. The result is an impoverished web for all of us.

“If you don’t want a web dominated by large, well-lawyered technology companies,” Brown sums up, “don’t create a web that squeezes out small low-risk services.”

Illustrations: Early 1970s cartoon illustrating IT project management.

Wendy M. Grossman is an award-winning journalist. Her Web site has extensive links to her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Careless People

Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism
By Sarah Wynn-Williams
Macmillan
ISBN: 978-1035065929

In his 2021 book Social Warming, Charles Arthur concludes his study of social media with the observation that the many harms he documented happened because no one cared to stop them. “Nobody meant for this to happen,” he writes to open his final chapter.

In her new book, Careless People, about her time at Facebook, former New Zealand diplomat Sarah Wynn-Williams shows the truth of Arthur’s take. A sad tale of girl-meets-company, girl-loses-company, girl-tells-her-story, it starts with Wynn-Williams stalking Facebook to identify the right person to pitch hiring her to build its international diplomatic relationships. I kept hoping increasing dissent and disillusion would lead her to quit. Instead, she stays until she’s fired after HR dismisses her complaint of sexual harassment.

In 2011, when Wynn-Williams landed her dream job, Facebook’s wild expansion was at an early stage. CEO Mark Zuckerberg is awkward, sweaty, and uncomfortable around world leaders, who are dismissive. By her departure in 2017, presidents of major countries want selfies with him and he’s much more comfortable – but no longer cares. Meanwhile, then-Chief Operating Officer Sheryl Sandberg, wealthy from her time at Google, becomes a celebrity via her book, Lean In, written with the former TV comedy writer Nell Scovell. Sandberg’s public feminism clashes with her employee’s experience. When Wynn-Williams’s first child is a year old, a fellow female employee congratulates her on keeping the child so well-hidden she didn’t know it existed.

The book provides hysterically surreal examples of American corporatism. She is in the delivery room, feet in stirrups, ordered to push, when a text arrives: can she draft talking points for Davos? (She tries!) For an Asian trip, Zuckerberg wants her to arrange a riot or peace rally so he can appear to be “gently mobbed”. When the company fears “Mark” or “Sheryl” might be arrested if they travel to Korea, managers try to identify a “body” who can be sent in as a canary. Wynn-Williams’s husband has to stop her from going. Elsewhere, she uses her diplomatic training to land Zuckerberg a “longer-than-normal handshake” with Xi Jinping.

So when you get to her failure to get her bosses to beef up the two-person content moderation team for Myanmar’s 60 million people, fix the platform so Burmese characters render correctly, and post country-specific policies, it’s obvious what her bosses will decide. The same is true of internal meetings discussing the tools later revealed to let advertisers target depressed teens. Wynn-Williams hopes for a safe way forward, but warns that company executives’ “lethal carelessness” hasn’t changed.

Cultural clash permeates this book. As a New Zealander, she’s acutely conscious of the attitudes she encounters, and especially of the wealth and class disparity that divide the early employees from later hires. As pregnancies bring serious medical problems and a second child, the very American problem of affording health insurance makes offending her bosses ever riskier.

The most important chapters, whose in-the-room tales fill in gaps in books by Frances Haugen, Sheera Frenkel and Cecilia Kang, and Steven Levy, are those in which Wynn-Williams recounts the company’s decision to embrace politics and build its business in China. If, her bosses reason, politicians become dependent on Facebook for electoral success, they will balk at regulating it. Donald Trump’s 2016 election, which Zuckerberg initially denied had been significantly aided by Facebook, awakened these political aspirations. Meanwhile, Zuckerberg leads the company to build a censorship machine to please China. Wynn-Williams abhors all this – and refuses to work on China. Nonetheless, she holds onto the hope that she can change the company from inside.

Apparently having learned little from Internet history, Meta has turned this book into a bestseller by trying to suppress it. Wynn-Williams managed one interview, with Business Insider, before an arbitrator’s injunction stopped her from promoting the book or making any “disparaging, critical or otherwise detrimental comments” related to Meta. This fits the man Wynn-Williams depicts who hates to lose so much that his employees let him win at board games.

Banning TikTok

Two days from now, TikTok may go dark in the US. Nine months ago, in April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act, banning TikTok if its Chinese owner, ByteDance, has not divested its ownership by January 19, 2025.

Last Friday, January 10, the US Supreme Court heard three hours of arguments in consolidated challenges filed by TikTok and a group of TikTok users: TikTok, Inc. v. Garland and Furbaugh v. Garland. Too late?

As a precaution, Kinling Lo and Viola Zhou report at Rest of World, at least some of TikTok’s 170 million American users are building community arks elsewhere – the platform Xiaohongshu (“RedNote”), for one. This is not the smartest choice; it, too, is Chinese and could become subject to the same national security concerns, like the other Chinese apps Makena Kelly reports at Wired are scooping up new US users. Ashley Belanger reports at Ars Technica that rumors say the Chinese are thinking of segregating these American newcomers.

“The Internet interprets censorship as damage, and routes around it,” EFF founder and activist John Gilmore told Time Magazine in 1993. He meant Usenet, which could get messages past individual server bans, but it’s really more a statement about Internet *users*, who will rebel against bans. That for sure has not changed despite the more concentrated control of the app ecosystem. People will access each other by any means necessary. Even *voice* calls.

PAFACA bans apps from four “foreign adversaries to the United States” – China, Russia, North Korea, and Iran. That being the case, Xiaohongshu/RedNote is not a safe haven. The law just hasn’t noticed this hitherto unknown platform yet.

The law’s passage in April 2024 was followed in early May by TikTok’s legal challenge. Because of the imminent sell-by deadline, the case was fast-tracked, and landed in the US District of Columbia Circuit Court of Appeals in early December. The district court upheld the law and rejected both TikTok’s constitutional challenge and its request for an injunction staying enforcement until the constitutional claims could be fully reviewed by the Supreme Court. TikTok appealed that decision, which brings us to last week. This case is separate from Free Speech Coalition v. Paxton, the challenge to Texas’s 2023 age verification law (H.B. 1181), which SCOTUS heard *this* week and which could have even further-reaching Internet effects.

Here it gets silly. Incoming president Donald Trump, who initiated the original ban attempt but was blocked by the courts on constitutional grounds, filed an amicus brief arguing that any ban should be delayed until after he’s taken office on Monday because he can negotiate a deal. NBC News reports that the outgoing Biden administration is *also* trying to stop the ban and, per Sky News, won’t enforce it if it takes effect.

Previously, both guys wanted a ban, but I guess now they’ve noticed that, as Mike Masnick says at Techdirt, it makes them look out of touch to nearly half the US population. In other words, they moved from “Oh my God! The kids are using *TikTok*!” to “Oh, my God! The kids are *using* TikTok!”

The court transcript shows that TikTok’s lawyers made three main arguments. One: TikTok is now incorporated in the US, and the law is “a burden on TikTok’s speech”. Two: PAFACA is content-based, in that it selects types of content to which it applies (user-generated) and ignores others (reviews). Three: the US government has “no valid interest in preventing foreign propaganda”. Therefore, the government could find less restrictive alternatives, such as banning the company from sharing sensitive data. In answer to questions, TikTok’s lawyers claimed that the US’s history of banning foreign ownership of broadcast media is not relevant because it was due to bandwidth scarcity. The government’s lawyers countered with national security: the Chinese government could manipulate TikTok’s content and use the data it collects for espionage.

Again: the Chinese can *buy* piles of US data just like anyone else. TikTok does what Silicon Valley does. Pass data privacy laws!

Experts try to read the court. Amy Howe at SCOTUSblog says the justices seemed divided, but overall likely to issue a swift decision. At This Week in Google and Techdirt, Cathy Gellis says the proceedings have left her “cautiously optimistic” that the court will not undermine the First Amendment, a feeling seemingly echoed by some of the panel of experts who liveblogged the proceedings.

The US government appears to have tied itself up in knots: SCOTUS may uphold a Congressionally legislated ban that neither the old nor the new administration now wants, that half the population resents, and that won’t solve the US’s pervasive privacy problems. Lost on most Americans is the irony that the rest of the world has complained for years that under the PATRIOT Act foreign subsidiaries of US companies are required to send data to US intelligence. This is why Max Schrems keeps winning cases under GDPR.

So, to wrap up: the ban doesn’t solve the problem it purports to solve, and it’s not the least restrictive possibility. On the other hand, national security? The only winner may be, as Jason Koebler writes at 404 Media, Mark Zuckerberg.

Illustrations: Logo of Douyin, ByteDance’s Chinese version of TikTok.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Disharmony

When an individual user does it, it’s piracy. When a major company does it…it may just get away with it.

At TechCrunch, Kyle Wiggers reports that buried in newly unredacted documents in the copyright case Kadrey v. Meta is testimony that Meta trained its Llama language model on a dataset of ebooks it torrented from LibGen. So, two issues. First, LibGen has been sued numerous times, fined, and ordered to shut down. Second, torrent downloads simultaneously upload to others. So, allegedly, Meta not only knowingly pirated copyrighted books to train its language model but also redistributed them in the process.

Kadrey v. Meta was brought by novelist Richard Kadrey, writer Christopher Golden, and comedian Sarah Silverman, and is one of a number of cases accusing technology companies of training language models on copyrighted works without permission. Meta claims fair use. Still, not a good look.

***

Coincidentally, this week CEO Mark Zuckerberg announced changes to the company’s content moderation policies in the US (for now), a move widely seen as pandering to the incoming administration. The main changes announced in Zuckerberg’s video clip: Meta will replace fact-checkers (“too politically biased”) with a system of user-provided “community notes” as on exTwitter, remove content restrictions that “shut out people with different ideas”, dial back its automated filters to focus solely on illegal content, rely on user reports to identify material that should be taken down, bring back political content, and move its trust and safety and content moderation teams from California to Texas (“where there is less concern about the bias of our teams”). He also pledges to work with the incoming president to “push back on governments around the world that are going after American companies and pushing to censor more”.

Journalists and fact-checkers are warning that misinformation and disinformation will be rampant, and many are alarmed by the specifics of the kind of thing people are now allowed to say. Zuckerberg frames all this as a “return” to free expression while acknowledging that, “We’re going to catch less bad stuff.”

At Techdirt, Mike Masnick begins as an outlier, arguing that many of these changes are actually sensible, though he calls the reasoning behind the Texas move “stupid” and deplores Zuckerberg’s claim that this is about “free speech” and removing “censorship”. A day later, after seeing the company’s internal guidelines unearthed by Kate Knibbs at Wired, he condemns the new moderation policy as one in which “hateful people are now welcome”.

More interesting for net.wars purposes is the international aspect. As the Guardian says, Zuckerberg can’t bring these changes across to the EU or UK without colliding headlong with the UK’s Online Safety Act and the EU’s Digital Services Act. Both lay down requirements for content moderation on the largest platforms.

And yet, it’s possible that Zuckerberg may also think these changes help lay the groundwork to meet the EU/UK requirements. Meta will still remove illegal content, which it’s required to do anyway. But he may think there’s a benefit in dialing back users’ expectations about what else Meta will remove, in that platforms must conform to the rules they set in their terms and conditions. Notice-and-takedown is an easier standard to meet than performance indicators for automated filters. It’s also likely cheaper. This approach is, however, the opposite of what critics like Open Rights Group have predicted the law will bring; ORG believes that platforms will instead over-moderate in order to stay out of trouble, chilling free speech.

Related: in an interesting piece at his Programmable Matter newsletter, Henry Farrell argues that the more important social media speech issue is that what we read there determines how we imagine others think rather than how we ourselves think. In other words, misinformation, disinformation, and hate speech change what we think is normal, expanding the window of what we think other people find acceptable. That has resonance for me: the worst thing about prominent trolls is that they give everyone else permission to behave as badly as they do.

***

It’s now 25 years since I heard a privacy advocate predict that the EU’s then-new data protection rights could become the basis of a trade war with the US. While instead the EU and US have kept trying to find a bypass that will withstand a legal challenge from Max Schrems, the approaches seem to be continuing to diverge, and in more ways.

For example, last week in the long-running battle over network neutrality, judges on the US Sixth Circuit Court of Appeals ruled that the Federal Communications Commission was out of line when it announced rules in 2023 that classified broadband suppliers as common carriers under Title II of the Communications Act (1934). This judgment is the result of the Supreme Court’s 2024 decision to overturn Chevron deference, setting courts free to overrule government agencies’ expertise. And that means the end in the US (until or unless Congress legislates) of network neutrality, the principle that all data flowing across the Internet is created equal and should be transmitted without fear or favor. Network neutrality persists in California, Washington, and Colorado, whose legislatures have passed laws to protect it.

China has taught us that the Internet is more divisible by national law than many thought in the 1990s. Copyright law may be the only thing everyone agrees on.

Illustrations: Drunk parrot in a South London garden (by Simon Bisson; used by permission).


The lost Internet

As we open 2025 it would be traditional for an Old Internet Curmudgeon to rhapsodize about the good, old days of the 1990s, when the web was open, snark flourished at sites like suck.com, no one owned social media (that is, Usenet and Internet Relay Chat), and even the spam was relatively harmless.

But that’s not the period I miss right now. By “lost” I mean the late 2000s, when we shifted from an Internet of largely unreliable opinions to an Internet full of fact-based sites you could trust. This was the period during which Wikipedia (created 2001) grew up, and OpenStreetMap (founded 2004) was born, joining earlier sites like the Internet Archive (founded 1996) and Snopes (1994). In that time, Google produced useful results, blogs flourished, and, before Twitter’s decline, asking there for advice on where to find a post box near a point in Liverpool got you correct answers straight to your mobile phone.

Today, so far: I can’t get a weather app to stop showing the location I was at last week and show the location I’m at this week. Basically, the app is punishing me for not turning on location tracking. The TV remote at my friend’s house doesn’t fully work and she doesn’t know why or how to fix it; she works around it with a second remote whose failings are complementary. No calendar app works as well as the software I had 1995-2001 (it synced! without using a cloud server and third-party account!). At the supermarket, the computer checkout system locked up. It all adds up to a constant white noise of frustration.

We still have Wikipedia, OpenStreetMap, Snopes, and the Internet Archive. But this morning a Mastodon user posted that their ten-year-old says you can’t trust Google any more: “It just returns ‘a bunch of made-up stuff’.” When ten-year-olds know your knowledge product sucks…

If generative AI were a psychic we’d call what it does cold reading.

At his blog, Ed Zitron has published a magnificent, if lengthy, rant on the state of technology. “The rot economy”, he calls it, and says we’re all victims of constant low-level trauma. Most of his complaints will be familiar: the technologies we use are constantly shifting and mostly for the worse. My favorite line: “We’re not expected to work out ‘the new way to use a toilet’ every few months because somebody decided we were finishing too quickly.”

Pause to remember nostalgically 2018, when a friend observed that technology wasn’t exciting any more, and 2019, when many more people thought the Internet was no longer “fun”. Those were happy days. Now we are being overwhelmed with stuff we actively don’t want in our lives. Even hacked Christmas lights sound miserable for the neighbors.

***

I have spent some of these holidays editing a critique of Ofcom’s regulatory plans under the Online Safety Act (we all have our own ideas about holidays), and one thing seems clear: the splintering Internet is only going to get worse.

Yesterday, firing up Chrome because something didn’t work in Firefox, I saw a fleeting popup to the effect that because I may not be over 18 there are search results Google won’t show me. I don’t think age verification is in force in the Commonwealth of Pennsylvania – US states keep passing bills, but they keep hitting legal challenges.

Age verification has been “imminent” in the UK for so long – it was originally included in the Digital Economy Act 2017 – that it seems hard to believe it may actually become a reality. But: sites within the Act’s scope will have to complete an “illegal content risk assessment” by March 16. So the fleeting popup felt like a visitation from the Ghost of Christmas Future.

One reason age verification was dropped back then – aside from the distractions of Brexit – was that the mechanisms for implementing it were all badly flawed – privacy-invasive, ineffective, or both. I’m not sure they’ve improved much. In 2022, France’s data protection watchdog checked them out: “CNIL finds that such current systems are circumventable and intrusive, and calls for the implementation of more privacy-friendly models.”

I doubt Ofcom can square this circle, but the costs of trying will include security, privacy, freedom of expression, and constant technological friction. Bah, humbug.

***

Still, one thing is promising: the rise of small, independent media outlets who are doing high-quality work. Joining established efforts like nine-year-old The Ferret, ten-year-old Bristol Cable, and five-year-old Rest of World are year-and-a-half-old 404 Media and newcomer London Centric. 404 Media, formed by four journalists formerly at Vice’s Motherboard, has been consistently making a splash since its founding; this week Jason Koebler reminds us that Elon Musk’s proactive willingness to unlock the blown-up Cybertruck in Las Vegas and provide comprehensive data on where it had been, including video from charging stations, without warrant or court order, could apply to any Tesla customer at any time. Meanwhile, in its first three months London Centric’s founding journalist, Jim Waterson, has published pieces on the ongoing internal mess at Transport for London following the August cyberattack, and on bicycle theft in the capital. Finally, if you’re looking for high-quality American political news, veteran journalist Dan Gillmor curates it for you every day in his Cornerstone of Democracy newsletter.

The corporate business model of journalism is inarguably in trouble, but journalism continues.

Happy new year.

Illustrations: The Marx Brothers in their 1929 film, The Cocoanuts, newly released into the public domain.
