R.I.P. Dave Farber

For decades, Internet pioneer David J. Farber, who died on February 7, aged 91, maintained the Interesting People email list. When he began, it was an unusual choice, as few academics then reached out in such a public way.

Farber, often called the grandfather of the Internet because he invented technologies and taught students whose work became crucial to its development, had a long and distinguished career, which others recount in better detail. It included stints at many respected US institutions – Bell Labs, RAND, the University of Delaware, Carnegie Mellon, the University of Pennsylvania – and a presence on too many technical organizations and projects to count. He was a regular at conferences, and at one of those, in 1998, I met him in person.

Among the obits lauding his professional career: Stevens Institute of Technology, where he earned his first degree (“he created the future”); the Internet Hall of Fame (“a mentor to many generations”); the Electronic Frontier Foundation, where he was a board member; the Japan Times; Tools on Fire; and the New York Times (“gregarious”).

“Gregarious” was crucial to building the Internet, as Times writer Peter Wayner writes. Until 2020 the mailing list, whose membership Wayner estimates at 25,000, was really how I knew him. Farber liked to collect information and knowledge in all forms, including human, though I thought of the list less as interesting *people* than alerts to interesting information. However, the sources of that forwarded information were often surprising. Farber seemed to know *everyone*.

In 2020, covid’s arrival led Farber to create a new platform, a weekly Zoom call, which he of course publicized on his mailing list. Over the last nearly six years, many of these calls featured invited speakers, who might cover any topic from medical privacy to quantum computing to Hong Kong real estate to Brexit. Many regulars were his old friends; many others, like me, were newcomers. He was welcoming to all and eager to ask questions.

He was particularly interested in hearing from Asian speakers since he had moved, at 83, to Japan in 2018 to take up a post at Keio University. The move to a new country and culture seems to have given him a berth in a place where his age was respected and his accomplishments were revered. In the last week of January, he wound up what turned out to be his final semester by sending in the final grades for his class of 37 students. He was a teacher and communicator to the last. R.I.P.

Illustration: Dave Farber on a Zoom call, drawn by his friend of 70-plus years, John De Pillis (used by permission).

Whooped

In 2022, we noted a discussion by Julia Powles and Toby Walsh, summarized here, that warned about the increasing collection of data about elite athletes. The data, they said, does not flow to the sports scientists who really can help athletes perform better and minimize their risk of injury, but to data scientists and data crunchers. The money could be better spent on healthier environments and financial support. Powles, with Jacqueline Alderson, followed up in 2023 with best practice principles.

Cue tennis. At this year’s Australian Open, which concluded on February 1, eventual men’s singles champion Carlos Alcaraz, women’s singles finalist Aryna Sabalenka, and men’s singles semifinalist Jannik Sinner were all told to remove their Whoop tracker devices before playing early-round matches.

An important part of this story is the ridiculously convoluted nature of tennis politics. The International Tennis Federation runs lower-level tournaments and junior events; the Grand Slams are laws unto themselves; the men’s and women’s pro tours are run by the ATP and WTA respectively. That’s seven powers – without the national tennis federations or the national and international anti-doping edifice.

The players were caught between conflicting decisions. In mid-December, the ITF approved Whoop devices in competition as long as haptic feedback is disabled, adding it to its list of permitted “Player Analysis Technology” devices. On a Whoop, “haptic feedback” means the device vibrates to alert you to…something.

The ITF published its detailed examination (see also a wearer’s review by Emilie Lavinia at The Independent). Whoop’s array of sensors can capture heart rate, heart rate variability, sleep stages and performance, recovery, activity strain metrics, blood oxygenation (SpO2), skin temperature, respiratory rate, and, on some models, blood pressure; some models can also perform on-demand ECG and irregular heart rhythm notification in regions where it’s approved. As the ITF notes, data capture takes place automatically whenever the athlete is wearing the device. Players are allowed to charge the device on-court using its battery pack.

So far, so good. Turning off haptic feedback requires the player to disable alarms, turn off the “Strength Trainer” screens, and turn off “Strain Target” if they want to use “Live Activity” mode. The player has to show tournament officials on request that their settings are compliant (or that they’re using the Whoop 3.0, which has no haptic feedback).

The device has no screen; viewing the data requires an Android or iOS device, the app, paired via Bluetooth, and a subscription. This is the Gillette razor, or “free puppy” business model: the device is free, you pay for ongoing access to your own data.

So basically, the ITF is allowing players to collect data using the Whoop device for future inspection and discussion with their teams, as long as it doesn’t vibrate on their wrist. The key here is the potential for the device to be a conduit for an automated form of in-match coaching; the definition of PAT devices given in Appendix III of the ITF’s Rules of Tennis (PDF) specifically directs readers to Rule 30, which covers coaching.

For most of tennis’s Open Era – basically since 1968, when the sport went professional – coaching from the stands was banned. The original argument in favor of the ban was that tennis was and is a highly unequal sport. Top players can afford any assistance they want. The lowest-ranked players scrounge, as Irish player Conor Niland, who topped out at 129 in the world, recounts in a Guardian interview and in his excellent 2024 book, The Racket. Until the 1990s, even mid-range players often toured alone. Therefore, allowing coaching from the stands during matches threatened to make the playing field even more unequal.

As money flowed into tennis, the numbers who could afford traveling coaches rose and the no-coaching rule was increasingly flouted. Following a series of trials, the WTA began allowing coaching in 2022. The ATP, the ITF, and the Grand Slams finally began allowing it in 2025. Rule 30 leaves it open for events’ governing bodies to prohibit it.

The Whoop controversy digitizes all this baggage. Rule 30 differentiates between off-court coaching (coaching from the stands), which is permitted, and on-court coaching, which is only permitted during specific team events. Ordinary watches are allowed, but smart watches and mobile phones are banned because they are capable of communications, Teresa Merklin explains at Fiend at Court.

“We have coaching. Why can’t you have your own data?” former champion Todd Woodbridge asked on TV.

We are still at the beginning of these technologies and their controversies. Merklin digs up a forgotten incident: in 2013, Wimbledon shot down Bethanie Mattek-Sands’ query about wearing Google Glass. Devices will keep shrinking and becoming harder to spot.

Unlike in the past, however, PATs are cheap compared to hiring traveling personnel. While Powles and Walsh were undoubtedly right that data analysis is no substitute for physiologists’ and sports scientists’ expertise, PATs might give lower-ranked players previously unaffordable insight. Given the increasing heat stress at many tournaments, feedback that warns when your body is overstressed seems like a good idea. On the other hand, can you imagine how much bettors would love to have access to this kind of data in real time?

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

In search of the future Internet

“What kind of Internet do you want [him] to inherit?” a friend asked. “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But notably, many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered); and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven writes after attending the Open Source Policy Summit that Europe is reassessing its reliance on US IT services, which creates the potential for a US president to order disconnection. The lack of European billion-dollar technology companies leads people to forget the technology invented here that instead embraced openness: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet being built understand that underneath the social media and smartphones lies the physical layer, the infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP 2024 for ignoring physical infrastructure, is working on a EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. In a blog post this week, Anthropic, presumably responding to OpenAI’s plan to add advertising to ChatGPT, writes:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: we may need the money. Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week:
At the Plutopia podcast we talk to science fiction writer Ken MacLeod.


Universal service

Last week a couple of friends and I got around to trying out Techdirt‘s 2025 game, One Billion Users. This card-based game has each player trying to build a social network while keeping toxicity under control.

First impression: the instructions are bananas complicated. There are users, influencers, events, hotfixes, safeguards…and a Troll, which everyone who understood the instructions tried to push off on someone else ASAP. One of our number became the gamemaster, reading out the instructions we struggled to remember. You win by adding (and subtracting) points based on the cards you’re holding when the GAME OVER card turns up.

Even in a single game where we were feeling our way through, different strategies emerged. One of our number did her best to build a smaller, friendlier network. She succeeded – but it wasn’t a winning strategy. Without any thought to planning, my network ended up medium-sized. I was constrained by an event card stopping me from adding new users, and then, catastrophically, “gifted” the Troll. I came in second. The winner had built a huge number of users, successfully dumped the troll (thank you *so* much), and acquired several influencers who brought their own communities. We eventually identified the networks we’d built, in order: Tumblr, Twitter (not, I think, X), Facebook.

In a more detailed review, Adi Robertson at The Verge traces the roots of the game’s design to a game we played a lot in my childhood but that I no longer remember very well: Mille Bornes (“A Thousand Milestones”). A change of theme, some added twists, I see it now.

We will try this game again. I didn’t *want* to build the Torment Nexus!

***

It appears the BBC wants to switch off Freeview in 2034. For non-UK readers: Freeview is digital terrestrial television – that is, broadcast. It’s operated by a joint venture among the UK’s public service broadcasters (PSBs) – the BBC, ITV, Channel 4, and Channel 5. With any television made since 2008, or another receiver device, you can access 85 channels without paying anything beyond the BBC’s license fee. That, too, will soon be under review; the BBC’s charter is due for renewal in 2027. Freeview is one piece of a larger puzzle.

As Mark Sweney explains at the Guardian, the Department of Culture, Media, and Sport is reviewing options for Freeview’s future, and is considering three alternatives presented by Ofcom (PDF), the broadcast regulator. One: upgrade the present infrastructure. Two: maintain it as a cut-down service offering only the PSBs’ core channels. Three: move entirely to streaming.

The broadcasters, Sweney writes, favor the third option, choosing 2034 as a logical time to shut down Freeview because that’s when their contract with their network operator expires. By then, projections say, about 1.8 million people will still be dependent on Freeview, a long way down from today’s estimated 12 million. Many more homes, like mine, use both. The Ofcom report says that in 2023 39% of TV viewing was via broadcast.

Most of the discussion focuses on costs: updating the Freeview infrastructure is expensive for broadcasters, switching to streaming is an ongoing expense for individuals. Households would need a broadband subscription, new equipment, and the streaming app Freely, which was launched in 2024. There is a petition opposing the change.

This discussion is happening shortly after the British Audience Research Board announced that the number of YouTube viewers had passed the number of BBC viewers for the first time. However, as Dekan Apajee writes at The Conversation, even on YouTube people are still watching the BBC’s output, even if they’re not aware of it. Apajee is more concerned about context and finding ways to distinguish public service broadcasting and its values from the jumble of everything else on YouTube. How do the PSBs meet the requirement for universal service? Ofcom’s more recent report on the future of public service media (PDF) also warns of this loss of discoverability amid increased competition.

Adding to that, the BBC is reportedly considering a formal content agreement with YouTube that would have it publish some younger-oriented content there before showing it on its own platforms. It’s odd timing, as so many are warning against depending on US technology, as the economist Paul Krugman wrote yesterday. The loss of audience data has been a theme in the rise of streamers – and YouTube has just withdrawn from BARB’s audience measurement system, saying the organization violated YouTube’s terms and conditions.

Remarkably little of this discussion considers the potential loss of privacy inherent in forcing everyone to move to “smart” data collection machines (TVs, phones, computers). Is there a future in which it’s still possible to watch video content anonymously? (Yes, but they call it “piracy”.)

The BBC seems to believe that transitioning to streaming can be smooth. Sweney cites the years to 2012, when analog TV was switched off in favor of digital broadcast, which he describes as “near seamless” despite warnings of potential exclusion. Maybe so, but a lot of televisions were wastefully dumped, and that conversion was a one-time cost, not a permanent monthly drain.

At a meeting yesterday about building better technology, one attendee passionately advocated trustworthy content, presented by trusted sources. Ah, I thought, she wants to reinvent the BBC. Doesn’t everyone?

Illustrations: Family watching television in 1958 (via Wikimedia).


In search of causality

The debates over children’s use of social media, screens, and phones continue, exacerbated in the UK by ongoing Parliamentary scrutiny of the Children’s Wellbeing and Schools bill and continuing disgust over Grok‘s sexualized image generation. Robert Booth reports at the Guardian that the Center for Countering Digital Hate estimates that Grok AI generated 3 million sexualized images in under two weeks and that a third of them are still viewable on X. In that case, X and Grok appear to be a more general problem than children’s access.

We continue to need better evidence establishing causality or its absence. This week, researchers from the Bradford Centre for Health Data Science (led by Dan Lewer) and the University of Cambridge (led by Amy Orben) announced a six-week trial that will attempt to find the actual impact on teens of limiting – not ending – social media access. The BBC reports that the trial will split 4,000 Bradford secondary school pupils into two groups. One will download an app that turns off access to services like TikTok and Snapchat from 9pm to 7am and limits use at other times to a “daily budget”. The restrictions won’t include WhatsApp, which the researchers recognize is central to many family groups. The other half will go on using social media as before.

The researchers will compare the two groups by assessing their levels of anxiety, depression, sleep, bullying, and time spent with friends and family.

In earlier research, Orben developed a framework for data donation, which allows teens to understand their own use of social media. Another forthcoming study, Youth Perspectives on Social Media Harms: A Large-Scale Micro-Narrative Study, collects 901 first-person tales from 18- to 22-year-olds in the UK. From these, Orben’s group derives four types of harm: harms from other people’s behavior, personal harmful behavior evoked by social media, harms related to the content they encounter, and harms related to platform features. In the first category they include bullying and scams; in the second, compulsive use and social comparison; in the third, graphic material; and in the fourth, algorithmic manipulation. They also note the study’s limitations. A longer-term or differently-timed study might show different effects – during the study period the 2024 US presidential election took place. The teens’ stories don’t establish causality. Finally, there may be other harms not captured in this study.

The most important element, however, is that they sought the perspective of young people themselves, who are to date rarely heard in these discussions.

As this research begins, at Techdirt Mike Masnick reports on two newly finished papers also covering teens and social media. The first, Social Media Use and Well-Being Across Adolescent Development, published in JAMA Pediatrics, is a three-year study of 100,991 Australian adolescents to find whether well-being was associated with social media use. The researchers, from the University of South Australia, found a U-shaped curve: moderate social media use was associated with the best outcomes, while both the highest users and the non-users showed less well-being. Girls benefited increasingly from moderate social media use from mid-adolescence onwards, while for boys non-use became increasingly problematic, leading to worse outcomes than high use by their late teens.

The second, a study from the University of Manchester published in the Journal of Public Health, followed a group of 25,000 11- to 14-year-olds to find out whether the use of technology such as social media and gaming accurately predicted later mental health issues. The study found no evidence that heavier use of social media or gaming led to increased symptoms of anxiety or depression in the following year.

In his discussion of these two papers, Masnick argues that this research gives weight to his contention that the widespread claim that social media is inherently harmful is wrong.

In the UK and elsewhere, however, politicians are proceeding on the basis that social media *is* inevitably harmful. This week, the government announced a consultation on children’s use of technology. The consultation seems, as Carly Page writes at The Register, geared toward increasing restrictions. Also this week, the House of Lords voted 261 to 150, defeating the government, to add an amendment to the Children’s Wellbeing and Schools bill that would require social media services to add age verification to block under-16s from accessing them within a year. MPs will now have to vote to remove the amendment or it will become law, a backdoor preemption of the House of Commons’ prerogative to legislate.

UK prime minister Keir Starmer has been edging toward a social media ban for under-16s; now he faces added pressure not only from the Lords but also from the Conservative Party leader, Kemi Badenoch, and 61 MPs, who sent an open letter supporting a ban like the one in Australia. Ofcom reports that 22% of children aged eight to 17 have a false user age of over 18 – but also that often it’s with their parents’ help. Would this be different under a national ban?

Starmer reportedly wants to delay deciding until evidence from Australia and, one presumes, from the consultation, is available. A sensible idea we hope is not doomed to failure.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, which raised early alarm about kids online. Based on a fraudulent study, it nonetheless influenced policy-making for some years.

Also this week:
At the Plutopia podcast, we interview Dave Evans on his work to combat misinformation.


Split

In an abrupt reversal, UK prime minister Keir Starmer announced this week that the digital IDs he said in September would be mandatory for proving the right to work will now be…not so much. The announcement appears to reinstate the status quo: workers can continue proving their right to work by showing a passport or e-visa.

It’s not clear what led to the change, although some – commenters on social media, former Labour home secretary David Blunkett – suggest opponents “won”. The reality is that digital IDs are probably not really going away. However, making them optional is an important step in the right direction. A lot more is needed to develop a system that works for people instead of for governments.

***

Ten days on, the discovery that xAI’s Grok chatbot was being used to “nudify” images of women and children is still generating headlines, especially since the BBC reported that the Internet Watch Foundation had found “criminal images” of girls between 11 and 13 that appeared to have been Grok-generated. Child sexual abuse material is illegal in the UK, as in many other countries, no matter how it’s created or whether it’s real or synthetic. On Wednesday, Elon Musk asked on X if anyone could break Grok’s image moderation.

Last Friday, the Independent, among others, reported that X had turned off Grok’s image generation for all but the site’s (paying) verified users. On Monday, Starmer warned that X could lose the right to self-regulate if it could not control Grok. On Tuesday, Ofcom said it was launching an investigation, and Starmer told the House of Commons that X was “acting to ensure full compliance with the law”. In fact, it later came out, he was basing this information on media reports and had not himself been in contact with X. His government is now planning legislation to criminalize this type of software. Yesterday, Musk announced X would geoblock the AI tool in countries where it’s illegal. This morning, the Guardian reports that the feature is still not blocked in the Grok app.

As an unexpected side effect, these revelations have reignited divisions in the venerable and venerated elite scientists’ Royal Society, which elected Elon Musk an Overseas Fellow in 2018.

To recap: in August 2024, Nicola Davis reported at the Guardian that 74 Fellows had written to the Society calling for Elon Musk’s expulsion after tweets in which Musk promoted unrest in the UK and propagated scientific disinformation.

In late 2024, the developmental neuropsychologist Dorothy M. Bishop blogged that she had resigned from the Royal Society to protest Elon Musk’s continued membership as an Overseas Fellow.

Further resignations have followed. Next up, in February, was professor of systems biology Andrew Millar, who deplored the Society’s inaction. Around the same time, more than 1,000 scientists signed an open letter to the Society’s then-president Adrian Smith calling for Musk’s ouster.

In March, Andrea Sella, a chemist, returned the Society’s Michael Faraday prize for science communication, citing the society’s inaction. Also that month, on X, neural networking pioneer Geoff Hinton called for Musk’s expulsion. There was another burst of calls for Musk to be expelled in September, when he addressed a far-right rally organized by Tommy Robinson.

At the end of 2025, the Royal Society changed presidents. Back in April, the incoming president, geneticist and Nobel Laureate Paul Nurse, taking the position for a rare second time, told The Times that he had written to Musk asking if he could do something to improve the situation of American science, adding that given the damage he had caused to the “scientific endeavor in the United States” he should consider resigning from the Society.

In retrospect, more attention should have been paid to Nurse’s position that Musk should not be expelled, which he justified by saying that many Fellows were “odd”. The Guardian published more details about that correspondence in July.

A few days ago, professor of materials science Rachel Oliver published an open letter to Nurse asking him to reconsider his argument that Fellows should only be expelled if their science proved “fraudulent or highly defective”. Oliver argues that this stance grants “a licence to harass to the already powerful people on whom the Society bestows fellowship”.

She was responding to this week’s report in which Nurse doubled down on those overlooked comments, arguing that the code of conduct Fellows cited to justify expulsion might need to be revised because it resembled an employer’s code of conduct, and Fellows are not employees. He also reached beyond Musk for precedent, pointing to a portrait of Isaac Newton and saying, “He was a very nasty piece of work, yet we revere him.” I’m not sure that “we tolerated assholes in the past so we should continue to do so” is the persuasive argument he thinks it is.

It’s also clear that the Royal Society will continue to face public and private censure, no matter what it does now. This row will resurface every time Musk is in the news. The Royal Society is damned whatever it decides; it can’t keep hoping Musk will do the gentlemanly thing and fall on his sword.

Illustrations: Sir Isaac Newton, as seen in the National Portrait Gallery, London (via Wikimedia).

Also this week:
At the Techgrumps podcast, #3.36, Men are weird: The Return of the Glasshole.


Disconnexion

One thing we left out in last week’s complaint is generative AI’s undoubted ability to magnify the worst of human online behavior. A few days ago, the world discovered that X’s chatbot, Grok, can be commanded to “nudify” images of women and children – that is, digitally remove their clothes without their consent. A number of commenters also note that some of the same British politicians who are calling out X and Grok about this and who more broadly insist on increasing restrictions in the name of online safety nonetheless continue to post there. Even Ashley St. Clair, the mother of one of Elon Musk’s sons, is unable to get these images taken down. Some ministers have called for banning this form of deepfake software.

Among those calling for Elon Musk to act “urgently” are technology secretary Liz Kendall and prime minister Keir Starmer. The BBC reported this morning (January 9) that the government is calling on Ofcom to use “all its powers”. At Variety, Naman Ramachandran reports that X has moved AI image editing behind a paywall.

On January 2, at the National Observer, Jimmy Thompson called on the Canadian government to delete its accounts. On Wednesday, the Commons women and equalities committee announced it would stop using X. As of January 8, both Kendall and Starmer are still posting on X, along with the UK’s Supreme Court and the Regulatory Policy Committee and doubtless many others. Ofcom, the regulatory agency in charge of enforcing the Online Safety Act, posted a statement on January 5 saying it has contacted X and plans a “swift assessment to determine whether there are potential compliance issues that warrant investigation”. At the Online Safety Act Network, Lorna Woods explains the relevant law.

My guess is that few politicians manage their own social media – an extreme form of mental compartmentalization – and their aides are schooled in the belief that “we must meet the audience where they are”. In that sense, these accounts are not ordinary users, who use social media to connect to their friends and other interesting people. Politicians, like many others who are paid to show off in public, use social media to broadcast, not so much to participate. But much depends on whether you think that Grok’s behavior is one piece of a fundamental structural problem with X and its ownership or whether you believe it’s an isolated ill-thought-out feature to be solved by tweaking software, a distinction Jason Koebler explores at 404 Media.

The politicians’ accounts doubtless predate Musk’s takeover. Twitter was – and X is – small compared to other social media. But the short-burst style perfectly suited journalists, who gave it far more coverage than it probably deserved. Politicians go where they perceive the public to be, which is often signaled by media coverage.

It’s not necessarily wrong for politicians and government agencies to argue that they should be on X to serve their constituents who use it. But to legitimize that claim they should also be cross-posting on every significant platform, especially the open web. We can then argue about the threshold for “significant”. At a guess, it’s bigger than a blog but smaller than Mastodon, where politicians are notoriously absent.

***

The early 2020s’ exciting future of cryptocurrencies has gotten lost in the distraction of the last couple of years’ excitement over our new future of technologies pretending to be “smart”. In 2023’s “crypto winter”, we thought anyone still interested was either an early booster or someone who thought they could smell profit. As Molly White wrote this week, they’ve spent the last two years nourishing grudges and building a political machine that could sink large parts of the economy.

More quietly, as Dave Birch predicted in 2017 (and repeated in his 2020 book, The Currency Cold War) “serious people” were considering their approach. Among them, Birch numbered banks, governments, and communities.

Now, governments are hatching proposals. As 2025 ended, the European Council backed the European Central Bank’s digital euro plan; the European Parliament will vote on it this year. The Financial Times reports that this electronic alternative to cash could help European central bankers pull back some control over electronic retail payments from the US organizations that dominate the field. The ECB hopes to start issuing the currency in 2029. In the UK, the Bank of England is mulling the design of the digital pound. The International Monetary Fund sees the digital euro as supporting financial stability.

Birch dates government interest to Facebook’s now-defunct 2019 cryptocurrency plan. Today, I imagine new motives: the US’s diminishing reliability as an ally raises the desirability of lessening reliance on its infrastructure generally. Visa, Mastercard, and other payment mechanisms largely transit US systems, a reality the FT says European banks are already working to change. In March, ECB board member Philip R. Lane argued that the digital euro will foster monetary autonomy.

We’ll see. The Economist writes that many countries are recognizing cash’s greater resilience, and are rethinking plans to go all-digital.

It remains hard to know how much central bank digital currencies will matter. As I wrote in 2023, there are few obvious benefits to individuals. For most of us the problem isn’t the mechanism for payments, it’s finding the money.

Illustrations: Bank of England facade.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The AI who was God

Three subjects dominated 2025: increasing AI infestation, expanding surveillance use of biometrics, and age verification and online safety. The last spent the year spreading across the world, including, most recently, to Louisiana. There, on December 22, a US District Court blocked the state’s law on First Amendment grounds in a suit brought by the trade association NetChoice. Less than a week earlier, on similar grounds, NetChoice also won a suit in Arkansas against a law that would have penalized platforms for “using designs or algorithms” that they “know or should have known” could harm users by, for example, leading to addiction, drug use, or self-harm. The judge in this case called the law “unconstitutionally vague”. Personally, I think it would be hard to prove cause and effect.

However, much of the rest of the year felt in many ways like rinse-and-repeat, only bigger and more frustrating. The immediate future – 2026 – therefore looks like more of all of those perennial topics, especially surveillance. This time next year we will still be fighting over age verification, network neutrality, national identification systems, surveillance, data protection, security issues surrounding the Internet of Things and other “smart” devices, social media bans, and access to strong encryption, along with other perennials such as copyright and digital sovereignty.

It is, however, possible that AI might have gone quiet by then, for three types of reasons: financial, technical, and social.

To take finances first, concerns about the AI bubble have been building all year. In the latest of his series of diatribes about this and the “rot economy”, Ed Zitron writes that AI is bringing “enshittification Stage Four”, in which companies, having already turned on their users and customers, turn on their shareholders. Zitron traces the circular deals, the massive debt, the extravagant claims, and the disproportionately small revenues, and invokes the adage, if something can’t go on forever, it will stop.

On the technical side, no matter what Elon Musk predicts, more sober commentary at MIT Technology Review is calling for a “reset”. As Adam Becker writes in More Everything Forever, one thing that can’t go on ad infinitum is exponentially increasing computing power: exponential growth always hits resource limits. It is entirely possible that come 2027 we’ll have run out of all sorts of road on the current paradigm of “AI”. If so, expect to hear a lot more about how quantum is ready to remake the world. Generative AI will still be bigger ten years from now (just like the Internet in 2000, when the dot-com boom crashed), but it won’t become sentient and fix climate change.

Brief digression. On Mastodon, Icelandic web developer Baldur Bjarnason posts that he’s hearing people claim that studies showing that large language models won’t lead to AGI are “whitewashing creationism”. Uh…huh?

On the social side, pressure is mounting to curb the industry’s growth. Politicians including US senator Bernie Sanders (D-VT) and Florida governor Ron DeSantis (R) are both working to slow data center construction. Data centers guzzle power and water, as Zitron also explains, and nearby residents pay both directly and indirectly.

Other harms keep mounting up. The year’s Retraction Watch annual report includes myriad fake references. Salesforce fired 4,000 people, and is only now realizing that large language models can’t do their jobs; other companies nonetheless want to copy it. Organizers canceled a concert by Canadian musician Ashley MacIsaac after a Google AI summary wrongly said he’d been convicted of sexual assault.

At Utah’s Park Record, Cannon Taylor reported recently that in late October an AI summary indicated that a West Jordan, Utah police officer had morphed into a frog. Simple explanation: a Harry Potter movie playing in the background had been recorded by the officer’s bodycam during an investigation. Per the story, the summary seemed human-written until, “And then the officer turned into a frog, and a magic book appeared and began granting wishes.”

The story goes on to report several different AI software trials. One, used in Summit County, has a setting that inserts deliberate errors into the summaries to expose officers who don’t thoroughly check them. With that turned off, the time savings over having officers write their own summaries are considerable. Summit County turned it on. The time savings vanished. The county decided to pass.

Back when pranksters used to deface web pages for fun, a pastime more embarrassing than harmful, I thought it would be much worse when they learned to make small, hard-to-detect changes that poisoned the information supply.

AI is perfect for automating this.

In their “FakeParts” paper (PDF), researchers at the Institut Polytechnique de Paris discuss a disturbing example: subtle, localized AI-driven changes to otherwise real videos. These fakeparts blend in seamlessly; identifying them is far harder than spotting a complete fake, which on its own is hard enough. The researchers warn that subtle changes to facial expressions or gestures can change the emotional content of genuine statements, great for creating targeted attacks and sophisticated disinformation campaigns.

Cut to James Thurber’s 1939 fable, The Owl Who Was God. If AI kills us, it will be because we trust it without applying common sense.

Illustrations: Barred owl (photo by Steve Bellovin, used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: More Everything Forever

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity
By Adam Becker
Basic Books (Hachette)
ISBN: 9781541619593
Publication date: April 22, 2025

A friend who would be 93 now used to say that the first time he’d read about the idea of living long enough to live forever was when he was about eight. Even at that age, he was a dedicated reader of science fiction, though he also said this was a habit so weird at the time that he had to hide it from his classmates.

Cut to 1992, when I reviewed Ed Regis’s book Great Mambo Chicken and the Transhuman Condition for New Scientist. Regis traveled the American southwest, finding cryonicists, guys building rockets in the desert, wondering whether gravity was really necessary, figuring out how to make backups of our brains, spinning chickens in centrifuges to understand the impact of heavier-than-earth gravity, that sort of thing. Regis called it “fin-de-siècle hubris”.

In 1992 it was certainly tempting to believe that this sort of craziness was somehow related to the upcoming millennium. Today’s techbros have no such excuse, yet their dreams are the same. This is the collection Timnit Gebru and Émile Torres have dubbed TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, all of it, as Adam Becker explains in More Everything Forever, more of a rebranding than a new vision of the future.

You could accordingly view Becker’s book as a follow-up, 25 years on. Regis could present all this as a mostly whacked-out bunch of dreamers, but since then it’s all become much more serious. Today’s chicken-spinners are armed with massive amounts of money and power and are willing to ignore the present suffering of millions if it means enabling their image of the future. We’ve met this crowd before, in the pages of Douglas Rushkoff’s Survival of the Richest. These are the folks who treat science fiction’s cautionary tales as a manual for what to build.

Becker does a fine job of tracing the history of the various TESCREAL strands. Most are older than one might expect, some with roots in Christian beliefs thousands of years old. Isn’t fear of death, which Becker believes lies at the core of all this, as old as humanity? At last year’s CPDP, Mireille Hildebrandt called TESCREAL “paradise engineering”.

“If it violates physics, you can ignore it,” I was told at a conference on these topics after I asked how to distinguish the appealing-but-impossible from the well-maybe-someday. Becker proves the wisdom of this: his grounding in engineering and physics helps him provide essential debunking. Mars is too far away and too poisonous for humans to settle there any time soon. Meanwhile, he points out, Moore’s Law, which underpins projections by folks like Ray Kurzweil that computational power will continue to accelerate exponentially, is far more likely to end, like all other exponential trends. Physics, resource constraints, the increasing difficulty of finding new technological paradigms, and the fact that we understand so little of how the human brain or consciousness really works are all factors. The reality, Becker concludes, is that AGI is at best a long, long way off.

The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates about how advertising will work – perhaps mid-roll? The obvious answer is to place the ads between the reading of the nominees and the opening of the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. In 1998, the peak, 55.2 million, after VCRs, but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may also not be as uniquely important as it was historically. The Academy’s move fits into many other similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Centre for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lived in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. HateAid helped a client file suit against Google in August 2025, and sued X in July for failing to remove criminal antisemitic content. The Center for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.