Universal service

Last week a couple of friends and I got around to trying out Techdirt’s 2025 game, One Billion Users. This card-based game has each player trying to build a social network while keeping toxicity under control.

First impression: the instructions are bananas complicated. There are users, influencers, events, hotfixes, safeguards…and a Troll, which everyone who understood the instructions tried to push off on someone else ASAP. One of our number became the gamemaster, reading out the instructions we struggled to remember. You win by adding (and subtracting) points based on the cards you’re holding when the GAME OVER card turns up.

Even in a single game where we were feeling our way through, different strategies emerged. One of our number did her best to build a smaller, friendlier network. She succeeded – but it wasn’t a winning strategy. Without any thought to planning, my network ended up medium-sized. I was constrained by an event card stopping me from adding new users, and then, catastrophically, “gifted” the Troll. I came in second. The winner had built a huge number of users, successfully dumped the troll (thank you *so* much), and acquired several influencers who brought their own communities. We eventually identified the networks we’d built, in order: Tumblr, Twitter (not, I think, X), Facebook.

In a more detailed review, Adi Robertson at The Verge traces the roots of the game’s design to a game we played a lot in my childhood but that I no longer remember very well: Mille Bornes (“A Thousand Milestones”). A change of theme, some added twists, I see it now.

We will try this game again. I didn’t *want* to build the Torment Nexus!

***

It appears the BBC wants to switch off Freeview in 2034. For non-UK readers: Freeview is digital terrestrial television – that is, broadcast. It’s operated by a joint venture among the UK’s public service broadcasters (PSBs) – the BBC, ITV, Channel 4, and Channel 5. With any television made since 2008, or another receiver device, you can access 85 channels without paying anything beyond the BBC’s license fee. That, too, will soon be under review; the BBC’s charter is due for renewal in 2027. Freeview is one piece of a larger puzzle.

As Mark Sweney explains at the Guardian, the Department of Culture, Media, and Sport is reviewing options for Freeview’s future, and is considering three alternatives presented by Ofcom (PDF), the broadcast regulator. One: upgrade the present infrastructure. Two: maintain it as a cut-down service offering only the PSBs’ core channels. Three: move entirely to streaming.

The broadcasters, Sweney writes, favor the third option, choosing 2034 as a logical time to shut down Freeview because that’s when their contract with their network operator expires. By then, projections say that about 1.8 million people will still be dependent on Freeview, a long way down from today’s estimated 12 million. Many more homes, like mine, use both. The Ofcom report says that in 2023 39% of TV viewing was via broadcast.

Most of the discussion focuses on costs: updating the Freeview infrastructure is expensive for broadcasters, switching to streaming is an ongoing expense for individuals. Households would need a broadband subscription, new equipment, and the streaming app Freely, which was launched in 2024. There is a petition opposing the change.

This discussion is happening shortly after the Broadcasters’ Audience Research Board (BARB) announced that the number of YouTube viewers passed the number of BBC viewers for the first time. However, as Dekan Apajee writes at The Conversation, even on YouTube people are still watching the BBC’s output, even if they’re not aware of it. Apajee is more concerned about context and finding ways to distinguish public service broadcasting and its values from the jumble of everything else on YouTube. How do the PSBs meet the requirement for universal service? Ofcom’s more recent report on the future of public service media (PDF) also warns of this loss of discoverability amid increased competition.

Adding to that, the BBC is reportedly considering a formal content agreement with YouTube that would have it publish some content aimed at younger audiences there before showing it on its own platforms. It’s odd timing, as so many are warning against depending on US technology, as the economist Paul Krugman wrote yesterday. The loss of audience data has been a theme in the rise of streamers – and YouTube has just withdrawn from BARB’s audience measurement system, saying the organization violated YouTube’s terms and conditions.

Remarkably little of this discussion considers the potential loss of privacy inherent in forcing everyone to move to “smart” data collection machines (TVs, phones, computers). Is there a future in which it’s still possible to watch video content anonymously? (Yes, but they call it “piracy”.)

The BBC seems to believe that transitioning to streaming can be smooth. Sweney cites the years to 2012, when analog TV was switched off in favor of digital broadcast, which he describes as “near seamless” despite warnings of potential exclusion. Maybe so, but a lot of televisions were wastefully dumped, and that conversion was a one-time cost, not a permanent monthly drain.

At a meeting yesterday about building better technology, one attendee passionately advocated trustworthy content, presented by trusted sources. Ah, I thought, she wants to reinvent the BBC. Doesn’t everyone?

Illustrations: Family watching television in 1958 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

In search of causality

The debates over children’s use of social media, screens, and phones continue, exacerbated in the UK by ongoing Parliamentary scrutiny of the Children’s Wellbeing and Schools bill and continuing disgust over Grok’s sexualized image generation. Robert Booth reports at the Guardian that the Center for Countering Digital Hate estimates that Grok AI generated 3 million sexualized images in under two weeks and that a third of them are still viewable on X. If so, X and Grok are a more general problem than children’s access.

We continue to need better evidence establishing causality or its absence. This week, researchers from the Bradford Centre for Health Data Science (led by Dan Lewer) and the University of Cambridge (led by Amy Orben) announced a six-week trial that will attempt to find the actual impact on teens of limiting – not ending – social media access. The BBC reports that the trial will split 4,000 Bradford secondary school pupils into two groups. One will download an app that turns off access to services like TikTok and Snapchat from 9pm to 7am and limits use at other times to a “daily budget”. The restrictions won’t include WhatsApp, which the researchers recognize is central to many family groups. The other half will go on using social media as before.

The researchers will compare the two groups by assessing their levels of anxiety, depression, sleep, bullying, and time spent with friends and family.

In earlier research, Orben developed a framework for data donation, which allows teens to understand their own use of social media. Another forthcoming study, Youth Perspectives on Social Media Harms: A Large-Scale Micro-Narrative Study, collects 901 first-person tales from 18- to 22-year-olds in the UK. From these Orben’s group derives four types of harm: harms from other people’s behavior, personal harmful behavior evoked by social media, harms related to the content they encounter, and harms related to platform features. In the first category they include bullying and scams; in the second, compulsive use and social comparison; in the third, graphic material; and in the fourth, algorithmic manipulation. They also note the study’s limitations. A longer-term or differently-timed study might show different effects – during the study period the 2024 US presidential election took place. The teens’ stories don’t establish causality. Finally, there may be other harms not captured in this study.

The most important element, however, is that they sought the perspective of young people themselves, who are to date rarely heard in these discussions.

As this research begins, at Techdirt Mike Masnick reports on two newly finished papers also covering teens and social media. The first, Social Media Use and Well-Being Across Adolescent Development, published in JAMA Pediatrics, is a three-year study of 100,991 Australian adolescents to find whether well-being was associated with social media use. The researchers, from the University of South Australia, found a U-shaped curve: moderate social media use was associated with the best outcomes, while both the highest users and the non-users showed less well-being. Girls benefited increasingly from moderate social media use from mid-adolescence onwards, while for boys non-use became increasingly problematic, leading to worse outcomes than high use by their late teens.

The second, a study from the University of Manchester published in the Journal of Public Health, followed a group of 25,000 11- to 14-year-olds to find out whether the use of technology such as social media and gaming accurately predicted later mental health issues. The study found no evidence that heavier use of social media or gaming led to increased symptoms of anxiety or depression in the following year.

In his discussion of these two papers, Masnick argues that this research gives weight to his contention that the widespread claim that social media is inherently harmful is wrong.

In the UK and elsewhere, however, politicians are proceeding on the basis that social media *is* inevitably harmful. This week, the government announced a consultation on children’s use of technology. The consultation seems, as Carly Page writes at The Register, geared toward increasing restrictions. Also this week, the House of Lords voted 261 to 150, defeating the government, to add an amendment to the Children’s Wellbeing and Schools bill that would require social media services to add age verification to block under-16s from accessing them within a year. MPs will now have to vote to remove the amendment or it will become law, a backdoor preemption of the House of Commons’ prerogative to legislate.

UK prime minister Keir Starmer has been edging toward a social media ban for under-16s; now he faces added pressure not only from the Lords but also from the Conservative Party leader, Kemi Badenoch, and 61 MPs, who sent an open letter supporting a ban like the one in Australia. Ofcom reports that 22% of children aged eight to 17 have a false user age of over 18 – but also that often it’s with their parents’ help. Would this be different under a national ban?

Starmer reportedly wants to delay deciding until evidence from Australia and, one presumes, from the consultation, is available. A sensible idea we hope is not doomed to failure.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, which raised early alarm about kids online. Based on a fraudulent study, it nonetheless influenced policy-making for some years.

Also this week:
At the Plutopia podcast, we interview Dave Evans on his work to combat misinformation.


Split

In an abrupt reversal, UK prime minister Keir Starmer announced this week that the digital IDs he said in September would be mandatory for proving the right to work will now be…not so much. The announcement appears to reinstate the status quo: workers can continue proving their right to work by showing a passport or e-visa.

It’s not clear what led to the change, although some – commenters on social media, former Labour home secretary David Blunkett – suggest opponents “won”. The reality is that digital IDs are probably not really going away. However, making them optional is an important step in the right direction. A lot more is needed to develop a system that works for people instead of for governments.

***

Ten days on, the discovery that xAI’s Grok chatbot was being used to “nudify” images of women and children is still generating headlines, especially since the BBC reported that the Internet Watch Foundation had found “criminal images” of girls between 11 and 13 that appeared to have been Grok-generated. Child sexual abuse material is illegal in the UK, as in many other countries, no matter how it’s created or whether it’s real or synthetic. On Wednesday, Elon Musk asked on X if anyone could break Grok’s image moderation.

Last Friday, the Independent, among others, reported that X had turned off Grok’s image generation for all but the site’s (paying) verified users. On Monday, Starmer warned that X could lose the right to self-regulate if it could not control Grok. On Tuesday, Ofcom said it was launching an investigation, and Starmer told the House of Commons that X was “acting to ensure full compliance with the law”. In fact, it later came out, he was basing this claim on media reports and had not himself been in contact with X. His government is now planning legislation to criminalize this type of software. Yesterday, Musk announced X would geoblock the AI tool in countries where it’s illegal. This morning, the Guardian reports that the feature is still not blocked in the Grok app.

As an unexpected side effect, these revelations have reignited divisions in the venerable and venerated elite scientists’ Royal Society, which elected Elon Musk an Overseas Fellow in 2018.

To recap: in August 2024, Nicola Davis reported at the Guardian that 74 Fellows had written to the Society calling for Elon Musk’s expulsion after he posted tweets promoting unrest in the UK and propagating scientific disinformation.

In late 2024, the developmental neuropsychologist Dorothy M. Bishop blogged that she had resigned from the Royal Society to protest Elon Musk’s continued membership as an Overseas Fellow.

Further resignations have followed. Next up, in February, was professor of systems biology Andrew Millar. Around the same time, more than 1,000 scientists signed an open letter to the Society’s then-president, Adrian Smith, calling for Musk’s ouster.

In March, Andrew Sella, a chemist, returned the Society’s Michael Faraday prize for science communication, citing the Society’s inaction. Also that month, on X, neural network pioneer Geoff Hinton called for Musk’s expulsion. There was another burst of calls for Musk to be expelled in September, when he addressed a far-right rally organized by Tommy Robinson.

At the end of 2025, the Royal Society changed presidents. In April, the incoming president, geneticist and Nobel laureate Paul Nurse, taking the position for a rare second time, told The Times that he had written to Musk asking if he could do something to improve the situation of American science, adding that, given the damage Musk has caused to the “scientific endeavor in the United States”, he should consider resigning from the Society.

In retrospect, more attention should have been paid to Nurse’s position that Musk should not be expelled, which he justified by saying that many Fellows were “odd”. The Guardian published more details about that correspondence in July.

A few days ago, professor of materials science Rachel Oliver published an open letter to Nurse asking him to reconsider his argument that Fellows should only be expelled if their science proved “fraudulent or highly defective”. Oliver argues that this stance grants “a licence to harass to the already powerful people on whom the Society bestows fellowship”.

She was responding to this week’s report in which Nurse doubled down on those overlooked comments, arguing that the code of conduct Fellows cited to justify expulsion might need to be revised because it resembled an employer’s code of conduct, and Fellows are not employees. He also invoked members other than Musk, pointing to a portrait of Isaac Newton and saying, “He was a very nasty piece of work, yet we revere him.” I’m not sure that “we tolerated assholes in the past so we should continue to do so” is the persuasive argument he thinks it is.

It’s also clear that the Royal Society will continue to face public and private censure, no matter what it does now. This row will resurface every time Musk is in the news. The Royal Society is damned whatever it decides; it can’t keep hoping Musk will do the gentlemanly thing and fall on his sword.

Illustrations: Sir Isaac Newton, as seen in the National Portrait Gallery, London (via Wikimedia).

Also this week:
At the Techgrumps podcast, #3.36, Men are weird: The Return of the Glasshole.


Disconnexion

One thing we left out in last week’s complaint is generative AI’s undoubted ability to magnify the worst of human online behavior. A few days ago, the world discovered that X’s chatbot, Grok, can be commanded to “nudify” images of women and children – that is, digitally remove their clothes without their consent. A number of commenters also note that some of the same British politicians who are calling out X and Grok about this and who more broadly insist on increasing restrictions in the name of online safety nonetheless continue to post there. Even Ashley St. Clair, the mother of one of Elon Musk’s sons, is unable to get these images taken down. Some ministers have called for banning this form of deepfake software.

Among those calling for Elon Musk to act “urgently” are technology secretary Liz Kendall and prime minister Keir Starmer. The BBC reported this morning (January 9) that the government is calling on Ofcom to use “all its powers”. At Variety, Naman Ramachandran reports that X has moved AI image editing behind a paywall.

On January 2, at the National Observer, Jimmy Thompson calls on the Canadian government to delete its accounts. On Wednesday, the Commons women and equalities committee announced it would stop using X. As of January 8, both Kendall and Starmer are still posting on X, along with the UK’s Supreme Court and the Regulatory Policy Committee and doubtless many others. Ofcom, the regulatory agency in charge of enforcing the Online Safety Act, posted a statement on January 5 saying it has contacted X and plans a “swift assessment to determine whether there are potential compliance issues that warrant investigation”. At the Online Safety Act Network, Lorna Woods explains the relevant law.

My guess is that few politicians manage their own social media – an extreme form of mental compartmentalization – and their aides are schooled in the belief that “we must meet the audience where they are”. In that sense, these accounts are not ordinary users, who use social media to connect to their friends and other interesting people. Politicians, like many others who are paid to show off in public, use social media to broadcast, not so much to participate. But much depends on whether you think that Grok’s behavior is one piece of a fundamental structural problem with X and its ownership or whether you believe it’s an isolated ill-thought-out feature to be solved by tweaking software, a distinction Jason Koebler explores at 404 Media.

The politicians’ accounts doubtless predate Musk’s takeover. Twitter was – and X is – small compared to other social media. But the short-burst style perfectly suited journalists, who gave it far more coverage than it probably deserved. Politicians go where they perceive the public to be, which is often signaled by media coverage.

It’s not necessarily wrong for politicians and government agencies to argue that they should be on X to serve their constituents who use it. But to legitimize that claim they should also be cross-posting on every significant platform, especially the open web. We can then argue about the threshold for “significant”. At a guess, it’s bigger than a blog but smaller than Mastodon, where politicians are notoriously absent.

***

The early 2020s’ exciting future of cryptocurrencies has gotten lost in the distraction of the last couple of years’ excitement over our new future of technologies pretending to be “smart”. In 2023’s “crypto winter”, we assumed anyone still interested was either an early booster or someone who could smell profit. As Molly White wrote this week, they’ve spent the last two years nourishing grudges and building a political machine that could sink large parts of the economy.

More quietly, as Dave Birch predicted in 2017 (and repeated in his 2020 book, The Currency Cold War) “serious people” were considering their approach. Among them, Birch numbered banks, governments, and communities.

Now, governments are hatching proposals. As 2025 ended, the European Council backed the European Central Bank’s digital euro plan; the European Parliament will vote on it this year. The Financial Times reports that this electronic alternative to cash could help European central bankers pull back some control over electronic retail payments from the US organizations that dominate the field. The ECB hopes to start issuing the currency in 2029. In the UK, the Bank of England is mulling the design of the digital pound. The International Monetary Fund sees the digital euro as contributing to financial stability.

Birch dates government interest to Facebook’s now-defunct 2019 cryptocurrency plan. Today, I imagine new motives: the US’s diminishing reliability as an ally raises the desirability of lessening reliance on its infrastructure generally. Visa, Mastercard, and other payment mechanisms largely transit US systems, a reality the FT says European banks are already working to change. In March, ECB board member Philip R. Lane argued that the digital euro will foster monetary autonomy.

We’ll see. The Economist writes that many countries are recognizing cash’s greater resilience, and are rethinking plans to go all-digital.

It remains hard to know how much central bank digital currencies will matter. As I wrote in 2023, there are few obvious benefits to individuals. For most of us the problem isn’t the mechanism for payments, it’s finding the money.

Illustrations: Bank of England facade.


The AI who was God

Three subjects dominated 2025: increasing AI infestation, expanding surveillance use of biometrics, and age verification and online safety. The last spent the year spreading across the world, including, most recently, to Louisiana. There, on December 22, a US District Court blocked the law on First Amendment grounds in a suit brought by the trade association NetChoice. Less than a week earlier, on similar grounds, NetChoice won a suit in Arkansas against a law that would have penalized platforms for “using designs or algorithms” that they “know or should have known” could harm users by, for example, leading to addiction, drug use, or self-harm. The judge in this case called the law “unconstitutionally vague”. Personally, I suspect it would be hard to prove cause and effect.

However, much of the rest of the year felt in many ways like rinse-and-repeat, only bigger and more frustrating. The immediate future – 2026 – therefore looks like more of all of those perennial topics, especially surveillance. This time next year we will still be fighting over age verification, network neutrality, national identification systems, surveillance, data protection, security issues surrounding the Internet of Things and other “smart” devices, social media bans, and access to strong encryption, along with other perennials such as copyright and digital sovereignty.

It is however possible that AI might have gone quiet by then. Three types of reasons: financial, technical, and social.

To take finances first, concerns about the AI bubble have been building all year. In the latest of his series of diatribes about this and the “rot economy”, Ed Zitron writes that AI is bringing “enshittification Stage Four”, in which companies, having already turned on their users and customers, turn on their shareholders. Zitron traces the circular deals, the massive debt, the extravagant claims, and the disproportionately small revenues, and invokes the adage that if something can’t go on forever, it will stop.

On the technical side, no matter what Elon Musk predicts, more sober commentary at MIT Technology Review is calling for a “reset”. As Adam Becker writes in More Everything Forever, one thing that can’t go on ad infinitum is exponentially increasing computing power: exponential growth always hits resource limits. It is entirely possible that come 2027 we’ll have run out of all sorts of road on the current paradigm of “AI”. If so, expect to hear a lot more about how quantum is ready to remake the world. Generative AI will still be bigger ten years from now (just like the Internet in 2000, when the dot-com boom crashed), but it won’t become sentient and fix climate change.

Brief digression. On Mastodon, Icelandic web developer Baldur Bjarnason posts that he’s hearing people claim that studies showing that large language models won’t lead to AGI are “whitewashing creationism”. Uh…huh?

On the social side, pressure is mounting to curb the industry’s growth. Politicians including US senator Bernie Sanders (I-VT) and Florida governor Ron DeSantis (R) are working to slow data center construction. Data centers guzzle power and water, as Zitron also explains, and nearby residents pay both directly and indirectly.

Other harms keep mounting up. The year’s Retraction Watch annual report includes myriad fake references. Salesforce fired 4,000 people and is only now realizing that large language models can’t do their jobs; other companies nonetheless want to copy it. Organizers canceled a concert by Canadian musician Ashley MacIsaac after a Google AI summary wrongly said he’d been convicted of sexual assault.

At Utah’s Park Record, Cannon Taylor reported recently that in late October an AI summary indicated that a West Jordan, Utah police officer had morphed into a frog. Simple explanation: a Harry Potter movie playing in the background had been recorded by the officer’s bodycam during an investigation. Per the story, the summary seemed humanly written until, “And then the officer turned into a frog, and a magic book appeared and began granting wishes.”

The story goes on to report several different AI software trials. One, used in Summit County, has a setting that inserts deliberate errors into the summaries to expose officers who don’t thoroughly check them. With that turned off, the time savings over having officers write their own summaries are considerable. Summit County turned it on. The time savings vanished. The County decided to pass.

Back when pranksters used to deface web pages for fun, a pastime more embarrassing than harmful, I thought it would be much worse when they learned to make small, hard-to-detect changes that poisoned the information supply.

AI is perfect for automating this.

In their “FakeParts” paper (PDF), researchers at the Institut Polytechnique de Paris discuss a disturbing example: subtle, localized AI-driven changes to otherwise real videos. These fakeparts blend in seamlessly; identifying them is far harder than spotting a complete fake, which on its own is hard enough. The researchers warn that subtle changes to facial expressions or gestures can change the emotional content of genuine statements, great for creating targeted attacks and sophisticated disinformation campaigns.

Cut to James Thurber’s 1939 fable, The Owl Who Was God. If AI kills us, it will be because we trust it without applying common sense.

Illustrations: Barred owl (photo by Steve Bellovin, used by permission).
