Bedroom eyes

We’ve long known that much of today’s “AI” is humans all the way down. This week underlines the point: in an investigation, Svenska Dagbladet and Göteborgs-Posten learn that Meta’s Ray-Ban smart glasses are capturing intimate details of people’s lives and sending them to Nairobi, Kenya. There, employees at Meta subcontractor Sama label and annotate the data for use in training models. Brings a new meaning to “bedroom eyes”.

This sort of violation is easily imposed on other people without their knowledge or consent. We worry about the police using live facial recognition, but what about being captured by random people on the street? In January’s episode of the TechGrumps podcast, we called the news of Meta’s new product “Return of the Glasshole”.

Two 2018 books, Mary L. Gray and Siddharth Suri’s Ghost Work and Sarah T. Roberts’ Behind the Screen, made it clear that “machine learning” and “AI” depend on poorly-paid unseen laborers. Dataveillance is a stowaway in every “smart” device. But this is a whole new level: the Kenyans report glimpses of bank cards, bedroom intimacy, even bathroom visits. The journalists were able to establish that the glasses’ AI requires a connection to Meta’s servers to answer questions, and there’s no opt-out.

The UK’s Information Commissioner’s Office is investigating, and at Ars Technica Sarah Perez reports that a US lawsuit has been filed.

As the original Swedish report goes on to say, the EU has no adequacy agreement with Kenya. More disturbing is the fact that probably hundreds of people within Meta worked on this without seeing a problem.

In 1974, the Watergate-related revelation that US president Richard Nixon had recorded everything taking place in his office inspired folksinger Bill Steele to write the song The Walls Have Ears (MP3). What struck him particularly was that everyone saw it as unremarkable. “Unfortunately still current,” he commented in his 1977 liner notes. Nearly 50 years later, ditto.

***

A lot of (especially younger) people don’t remember that before 9/11 you could walk into most buildings without showing ID. Many authorities – the EU in particular – have long been unhappy with anonymity online, and one conspiratorial theory about age gating and the digital ID infrastructure being built in many places is that the goal is complete and pervasive identification. In the UK, requiring ID for all Internet access has occasionally popped up as a child safety idea, even though security experts recommend lying about birth dates and other personal data in the interests of self-protection against identity theft.

Now we have generative AI, and along comes a new paper that finds that large language models can be used to deanonymize people online at large scale by analyzing profiles and conversations. In one exercise, the researchers matched Hacker News posts to LinkedIn profiles. In another, they linked users across subReddit communities. In a third, they split Reddit profiles to mimic the use of pseudonymous posting. They find that pseudonymity doesn’t offer meaningful protection (though I’m not sure how much it ever did) and that preventing this type of attack is difficult. They also suggest platforms should reconsider their data access policies in line with their findings.

It’s hard to imagine most platforms will care much; users have long been expected to assess their own risk. Even smaller communities with a more concerned administration will not be in a position to know how many other services their users access, what they post there, or how it can be cross-linked. The difficulty of remaining anonymous online has been growing ever since 2000, when Latanya Sweeney showed it was possible to identify 87% of the population recorded in census data given just Zip code, date of birth, and gender. As psychics know, most people don’t really remember what they’ve said, or realize how it can be linked and exploited by someone who’s paying attention. The paper concludes: we need a new threat model for privacy online.

***

The Internet, famously, was designed to keep communications flowing even in the face of bomb outages.

Building it required physical links – undersea cables, fiber connections, data centers, routers. For younger folks who have grown up with wifi and mobile phone connections, that physical layer may be invisible. But it matters no less than it did twenty-five years ago, when experts agreed that ten backhoes (among other things) could do more effective damage than bombs.

This week’s horrible, spreading war in the Middle East has seen the closure of the Strait of Hormuz and the Red Sea to commercial traffic. Indranil Ghosh reports at Rest of World that 17 undersea cables pass through the Red Sea alone, and billions, soon trillions, of dollars in US technology investment depends on fiber optic cables running through war zones. There’s been reporting before now about the links between various Middle Eastern countries and Silicon Valley (see for example the recent book Gilded Rage, by Jacob Silverman), but until now much less about the technological interdependence put in jeopardy by the conflict. Ghosh also reports that drones have struck two Amazon Web Services data centers in the UAE and one in Bahrain.

The issue is not so much direct damage to the cables as the impossibility of repairing them as long as access is closed. The Internet, designed with war in mind, is a product of peace.

Illustrations: Monument to Anonymous, by Meredith Bergmann.

Also this week: At the Plutopia podcast, we interview Kate Devlin, who studies human-AI interaction.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Saving no one

In the early 2010s, after “nano” and before “AI”, 3D printing was the technology that was going to change everything. Then it seemed to go quiet except for guns.

“First we will gain control over the shape of physical things. Then we will gain new levels of control over their composition, the materials they’re made of. Finally, we will gain control over the behavior of physical things,” Hod Lipson and Melba Kurman wrote in their 2013 book, Fabricated. As far as I can tell, we’re still pretty much in the era of making physical things that could be made by traditional methods rather than weird new shapes that could *only* be produced by additive manufacturing. More than 15 years after a fellow technology conference attendee excitedly lectured me that 3D printing was going to change everything, its growth remains largely hidden from most of us.

Until this past week, when I attended an event awash in puzzle makers and discovered that it’s been a godsend to them for making not only prototypes but also small runs of copies or published designs, freeing them from having to find space and capital for the kind of quantities required by traditional production. It’s good to see a formerly hyped technology supporting clever and entertaining human invention.

Exploding egg, anyone?

***

In one of the biggest fines in its history, the UK Information Commissioner’s Office has announced it is fining Reddit £14.5 million for failing to put in place an effective age verification mechanism to keep under-13s – who are barred by Reddit’s own stated terms of service – off the site. The story is somewhat confused by timing: the fine is under data protection law and relates to the period before the arrival of the Online Safety Act, but the OSA’s requirement for age verification brought the changes that sparked the fine. Reddit says it will appeal.

In the UK terms and conditions Reddit announced in June 2025, the company says that “by using the services, you state that…you are at least 13 years old”. But Reddit didn’t require proof, and the ICO says that many under-13s use(d) the platform.

In July, when the Online Safety Act came into effect, Reddit added an age gate of 18 for “mature” content. Unlike many other social media sites that are just giant pools of content sorted by curation or algorithm, Reddit is a large set of distinct subReddits. Each of these communities has its own rules, social norms, and, most important, human moderators. Because of this, it’s comparatively easy to mark a particular subReddit as “for adults only”. After the July change, anyone in the UK wishing to access one of those subReddits was asked to submit a selfie or an image of a government-issued ID in order to prove their age.

The ICO’s findings state that Reddit failed to protect under-13s from accessing content that placed them at risk; that it processed under-13s’ data unlawfully (because they are too young to meaningfully consent); and that a simple statement is not a sufficient age verification mechanism (which is made clear in the OSA).

A Reddit spokesperson told the Guardian: “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”

I take their point; I’d rather skip the “mature” content than bear the privacy risk of uploading personal data to whatever third-party company Reddit is using for age verification. Last July, I decided I would just be a child. (Although: my Reddit account dates to 2015, so they could just do the math.)

Turns out, it may have been a wise decision. Reddit, saying it didn’t want to hold users’ personal data, chose the age verification provider Persona.

Persona deserves a look. Last week, Discord announced it would begin treating all users as teens until they’d been verified, also using Persona. The result, as Ashley Belanger reports at Ars Technica, was a user backlash. First, because the last time Discord tried this, its now-former age verification provider’s pile of 70,000 users’ age check information was hacked.

Second, because The Rage reports that a group of security researchers found a Persona front end exposed to the open Internet on a US government server. On examination, that code shows that Persona performs 269 different verification checks and scours the Internet and government sources using your selfie and facial recognition. Discord has now announced it will delay introducing age verification – and won’t be using Persona after an apparently unsatisfactory trial in the UK last year. In a blog posting, Discord says that, like Reddit, it does not want to know its users’ identification details. It is adding more verification options.

If the world had already had a set of established trustworthy companies that specialized in age verification when the OSA came into effect, then it would make sense to turn to them to provide that service. But we aren’t in that situation. Instead, although providers have been working for more than a decade to build such systems, their deployment at scale is new.

Part of keeping children – and the rest of us – safe is protecting security and privacy – and child safety campaigners’ refusal to accept this has been an issue for decades. Creating new privacy risks doesn’t keep anyone safer – including children.

Illustrations: Six-panel early 1970s cartoon strip, “What the User Wanted”.


Information wants to be surveiled

It has long been the case that the person sitting next to you on an airplane may have paid a very different price for their seat than you did. For two reasons. First, airlines don’t price seats – they price itineraries. A seat from London to New York may be priced very differently depending on whether it’s one-way, one-half of a round trip, or one stage in a larger, multi-stage journey. Second, however, airlines are sophisticated about maximizing the value of your seat, responding to patterns of peak travel (Thanksgiving, August), when you buy it, and other factors. Despite their complexity, those prices are supposed to be based on published tariffs that anyone can, in theory, calculate for themselves and come up with the same answer.

In 2012, travel and data privacy expert Edward Hasbrouck explained all this as part of documenting and opposing the airlines’ desire to move to personalized pricing. Instead of acting as common carriers and publishing tariffs that apply across the board, with personalized pricing the airlines would use the information they have about you to charge what you can and would be willing to pay. In 2012, the International Air Transportation Association called it “New Distribution Capability”.

This is a much scarier proposition now than it was then; companies have a lot more data they can exploit. In 2012, they might simply have known your flying habits and credit score, while balancing their desire to get the most they can for the seat against the time they have left to sell it. Uber uses similar tactics; it raises the price of a ride – “surge pricing”, or “dynamic pricing” – when demand is high.

Today, an airline might know – or be able to find out – that you are racing against time to see a beloved relative before they die. And why should it stop with airlines? Exploitative possibilities abound.

Uber has already been dubiously accused of charging more when riders’ phone batteries are low. Retailers collect shopping histories through loyalty cards and apps; customers who opt out often already pay more.

Some types of price discrimination have long been illegal under consumer protection law. On February 12, US senators Ben Ray Luján (D-NM) and Jeff Merkley (D-OR) introduced the Stop Price Gouging in Grocery Stores Act of 2026 to block the practice in grocery stores. Other efforts are also underway in the US. This week’s news is calling this “surveillance pricing” or “predatory pricing”, which accurately reflects the data collection and surveillance capitalism underpinning it.

This is not really a story about specific technologies, “AI” included; the issue is, as Hasbrouck writes, opacity and discrimination.

The technological pieces are in place to make this all much worse. Not just online, where prices are easily generated on the fly, but also in real-world retailers via wireless connections and electronic shelf tags, which already exist in some stores. A May 2025 study finds that these tags are so far not used to implement surge pricing but to update sale prices and offer discounts – but for how long?

In a May 2025 paper in the International Journal of Research in Marketing, researchers examine the rise of algorithmic pricing – Uber, landlords, retailers – generally and personalized pricing in particular. The authors note that the latter requires market power, and buyers must have limited ability to exploit price differences. And also: it’s profitable (duh). The authors go on to discuss the role of privacy, data protection, consumer laws, and backlash in curbing unfairness. At Big, which focuses on consolidation and market power, Matt Stoller warns of the potential power of Google’s plan to run pricing strategies for advertisers.

Hasbrouck returned to the subject last year while recapping the latest season of The Amazing Race to explain the airlines’ use of systems that deliver increasingly opaque and unpredictable pricing and the lack of enforcement enabling it.

In a pair of posts, the ACLU’s Jay Stanley follows Hasbrouck’s logic to new levels, laying out a possible future these developments may bring. The desire to wring every last possible bit of profit out of us – we’re talking everyone who sells goods or services now, not just airlines – in Stanley’s view will lead to collecting more and more detailed data. The digital identification infrastructure being built into airports – as planned here in 2013 and shown arriving in 2022 – will ensure there is no countermeasure we can take to escape monitoring and data collection. In Stanley’s projection, stores will be able to demand a digital identification sign-in as a condition of entry.

Again, a bit of this is here already. In the UK, Facewatch, which we first encountered in 2013, is used by some major retailers to identify and bar shoppers who have previously been caught shoplifting or being violent. Recently, the system flagged a shopper and staff ousted the wrong person, who found it difficult to prove the mistake.

In other words, all these technologies, wrapped up together, could enable a world not much different from that imagined by Ira Levin in his 1970 book This Perfect Day, where everything anyone wanted to do required permission from a centralized system. Although: the key to making that work was drugging the entire population.

Illustrations: Burmese python in Florida, 2011 (via Wikimedia).


R.I.P. Dave Farber

For decades, Internet pioneer David J. Farber, who died on February 7, aged 91, maintained the Interesting People email list. When he began, it was an unusual choice, as few academics then reached out in such a public way.

Often called the grandfather of the Internet because he invented technologies and taught students whose work became crucial to its development, Farber had a long and distinguished career, which others recount in better detail, that included stints at many respected US institutions – Bell Labs, RAND, the University of Delaware, Carnegie Mellon, the University of Pennsylvania – and a presence on too many technical organizations and projects to count. He was a regular at conferences, and one of those, in 1998, is where I met him in person.

Among the obits lauding his professional career: Stevens Institute of Technology, where he earned his first degree (“he created the future”); the Internet Hall of Fame (“a mentor to many generations”); the Electronic Frontier Foundation, where he was a board member; the Japan Times; Tools on Fire; and the New York Times (“gregarious”).

“Gregarious” was crucial to building the Internet, as Times writer Peter Wayner writes. Until 2020 the mailing list, whose membership Wayner estimates at 25,000, was really how I knew him. Farber liked to collect information and knowledge in all forms, including human, though I thought of the list less as interesting *people* than alerts to interesting information. However, the sources of that forwarded information were often surprising. Farber seemed to know *everyone*.

In 2020, covid’s arrival led Farber to create a new platform, a weekly Zoom call, which he of course publicized on his mailing list. Over the last nearly six years, many of these calls featured invited speakers, who might cover any topic from medical privacy to quantum computing to Hong Kong real estate to Brexit. Many regulars were his old friends; many others, like me, were newcomers. He was welcoming to all and eager to ask questions.

He was particularly interested in hearing from Asian speakers since he had moved, at 83, to Japan in 2018 to take up a post at Keio University. The move to a new country and culture seems to have given him a berth in a place where his age was respected and his accomplishments were revered. In the last week of January, he wound up what turned out to be his final semester by sending in the final grades for his class of 37 students. He was a teacher and communicator to the last. R.I.P.

Illustration: Dave Farber on a Zoom call, drawn by his friend of 70-plus years, John De Pillis (used by permission).

Whooped

In 2022, we noted a discussion by Julia Powles and Toby Walsh, summarized here, that warned about the increasing collection of data about elite athletes. The data, they said, does not flow to the sports scientists who really can help athletes perform better and minimize their risk of injury, but to data scientists and data crunchers. The money could be better spent on healthier environments and financial support. Powles, with Jacqueline Alderson, followed up in 2023 with best practice principles.

Cue tennis. At this year’s Australian Open, which concluded on February 1, eventual men’s singles champion Carlos Alcaraz, women’s singles finalist Aryna Sabalenka, and men’s singles semifinalist Jannik Sinner were all told to remove their Whoop tracker devices before playing early-round matches.

An important part of this story is the ridiculously convoluted nature of tennis politics. The International Tennis Federation runs lower-level tournaments and junior events; the Grand Slams are laws unto themselves; the men’s and women’s pro tours are run by the ATP and WTA respectively. That’s seven powers – without the national tennis federations or the national and international anti-doping edifice.

The players were caught between conflicting decisions. In mid-December, the ITF approved Whoop devices in competition as long as haptic feedback is disabled, adding it to its list of permitted “Player Analysis Technology” devices. On a Whoop, “haptic feedback” means the device vibrates to alert you to…something.

The ITF published its detailed examination (see also a wearer’s review by Emilie Lavinia at The Independent). Whoop’s array of sensors can capture heart rate, heart rate variability, sleep stages and performance, recovery, activity strain metrics, blood oxygenation (SpO2), skin temperature, respiratory rate, and blood pressure (some models only); it can also perform on-demand ECG and irregular heart rhythm notification (only in regions where that’s approved). As the ITF notes, data capture takes place automatically whenever the athlete is wearing the device. Players are allowed to charge the device on-court using its battery pack.

So far, so good. Turning off haptic feedback requires the player to disable alarms, turn off the “Strength Trainer” screens, and turn off “Strain Target” if they want to use “Live Activity” mode. The player has to show tournament officials on request that their settings are compliant (or that they’re using the Whoop 3.0, which has no haptic feedback).

The device has no screen; viewing the data requires an Android or iOS device, the app, paired via Bluetooth, and a subscription. This is the Gillette razor, or “free puppy”, business model: the device is free; you pay for ongoing access to your own data.

So basically, the ITF is allowing players to collect data using the Whoop device for future inspection and discussion with their teams, as long as it doesn’t vibrate on their wrist. The key here is the potential for the device to be a conduit for an automated form of in-match coaching; the definition of PAT devices given in Appendix III of the ITF’s Rules of Tennis (PDF) specifically directs readers to Rule 30, which covers coaching.

For most of tennis’s Open Era – basically since 1968, when the sport went professional – coaching from the stands was banned. The original argument in favor of the ban was that tennis was and is a highly unequal sport. Top players can afford any assistance they want. The lowest-ranked players scrounge, as Irish player Conor Niland, who topped out at 129 in the world, recounts in a Guardian interview and in his excellent 2024 book, The Racket. Until the 1990s, even mid-range players often toured alone. Therefore, allowing coaching from the stands during matches threatened to make the playing field even more unequal.

As money flowed into tennis, the numbers who could afford traveling coaches rose and the no-coaching rule was increasingly flouted. Following a series of trials, the WTA began allowing coaching in 2022. The ATP, the ITF, and the Grand Slams finally began allowing it in 2025. Rule 30 leaves it open for events’ governing bodies to prohibit it.

The Whoop controversy digitizes all this baggage. Rule 30 differentiates between off-court coaching (coaching from the stands), which is permitted, and on-court coaching, which is only permitted during specific team events. Ordinary watches are allowed, but smart watches and mobile phones are banned because they are capable of communications, as Teresa Merklin explains at Fiend at Court.

“We have coaching. Why can’t you have your own data?” former champion Todd Woodbridge asked on TV.

We are still at the beginning of these technologies and their controversies. Merklin digs up a forgotten incident: in 2013, Wimbledon shot down Bethanie Mattek-Sands’ query about wearing Google Glass. Devices will keep shrinking and becoming harder to spot.

Unlike in the past, however, PATs are cheap compared to hiring traveling personnel. While Powles and Walsh were undoubtedly right that data analysis is no substitute for physiologists’ and sports scientists’ expertise, PATs might give lower-ranked players previously unaffordable insight. Given the increasing heat stress at many tournaments, feedback that warns when your body is overstressed seems like a good idea. On the other hand, can you imagine how much bettors would love to have access to this kind of data in real time?


In search of the future Internet

“What kind of Internet do you want [him] to inherit?” one of us asked. “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But notably many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered); and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven writes after attending the Open Source Policy Summit that Europe is reassessing its technological reliance on US IT services, which creates the possibility that a US president could order a disconnection. The lack of billion-dollar European technology companies leads people to forget the technology invented here that instead embraced openness: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet being built understand that underneath the social media and smartphones lies the physical layer, the infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP 2024 for ignoring physical infrastructure, is working on a EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. Here is Anthropic, in a blog post this week, presumably responding to OpenAI’s plan to add advertising to ChatGPT:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: we may need the money. Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week: At the Plutopia podcast, we talk to science fiction writer Ken MacLeod.


Universal service

Last week a couple of friends and I got around to trying out Techdirt’s 2025 game, One Billion Users. This card-based game has each player trying to build a social network while keeping toxicity under control.

First impression: the instructions are bananas complicated. There are users, influencers, events, hotfixes, safeguards…and a Troll, which everyone who understood the instructions tried to push off on someone else ASAP. One of our number became the gamemaster, reading out the instructions we struggled to remember. You win by adding (and subtracting) points based on the cards you’re holding when the GAME OVER card turns up.

Even in a single game where we were feeling our way through, different strategies emerged. One of our number did her best to build a smaller, friendlier network. She succeeded – but it wasn’t a winning strategy. Without any thought to planning, my network ended up medium-sized. I was constrained by an event card stopping me from adding new users, and then, catastrophically, “gifted” the Troll. I came in second. The winner had built a huge number of users, successfully dumped the troll (thank you *so* much), and acquired several influencers who brought their own communities. We eventually identified the networks we’d built, in order: Tumblr, Twitter (not, I think, X), Facebook.

In a more detailed review, Adi Robertson at The Verge traces the roots of the game’s design to a game we played a lot in my childhood but that I no longer remember very well: Mille Bornes (“A Thousand Milestones”). A change of theme, some added twists, I see it now.

We will try this game again. I didn’t *want* to build the Torment Nexus!

***

It appears the BBC wants to switch off Freeview in 2034. For non-UK readers: Freeview is digital terrestrial television – that is, broadcast. It’s operated by a joint venture among the UK’s public service broadcasters (PSBs) – the BBC, ITV, Channel 4, and Channel 5. With any television made since 2008, or another receiver device, you can access 85 channels without paying anything beyond the BBC’s license fee. That, too, will soon be under review; the BBC’s charter is due for renewal in 2027. Freeview is one piece of a larger puzzle.

As Mark Sweney explains at the Guardian, the Department of Culture, Media, and Sport is reviewing options for Freeview’s future, and is considering three alternatives presented by Ofcom (PDF), the broadcast regulator. One: upgrade the present infrastructure. Two: maintain it as a cut-down service offering only the PSBs’ core channels. Three: move entirely to streaming.

The broadcasters, Sweney writes, favor the third option, choosing 2034 as a logical time to shut down Freeview because that’s when their contract with their network operator expires. By then, projections say, about 1.8 million people will still be dependent on Freeview, a long way down from today’s estimated 12 million. Many more homes, like mine, use both. The Ofcom report says that in 2023 39% of TV viewing was via broadcast.

Most of the discussion focuses on costs: updating the Freeview infrastructure is expensive for broadcasters, switching to streaming is an ongoing expense for individuals. Households would need a broadband subscription, new equipment, and the streaming app Freely, which was launched in 2024. There is a petition opposing the change.

This discussion is happening shortly after the Broadcasters’ Audience Research Board announced that the number of YouTube viewers had passed the number of BBC viewers for the first time. However, as Dekan Apajee writes at The Conversation, even on YouTube people are still watching the BBC’s output, even if they’re not aware of it. Apajee is more concerned about context and finding ways to distinguish public service broadcasting and its values from the jumble of everything else on YouTube. How do the PSBs meet the requirement for universal service? Ofcom’s more recent report on the future of public service media (PDF) also warns of this loss of discoverability amid increased competition.

Adding to that, the BBC is reportedly considering a formal content agreement with YouTube that would have it publish some younger-oriented content there before showing it on its own platforms. The timing is odd, given that so many are warning against depending on US technology, as the economist Paul Krugman wrote yesterday. The loss of audience data has been a theme in the rise of streamers – and YouTube has just withdrawn from BARB’s audience measurement system, saying the organization violated YouTube’s terms and conditions.

Remarkably little of this discussion considers the potential loss of privacy inherent in forcing everyone to move to “smart” data collection machines (TVs, phones, computers). Is there a future in which it’s still possible to watch video content anonymously? (Yes, but they call it “piracy”.)

The BBC seems to believe that transitioning to streaming can be smooth. Sweney cites the years to 2012, when analog TV was switched off in favor of digital broadcast, which he describes as “near seamless” despite warnings of potential exclusion. Maybe so, but a lot of televisions were wastefully dumped, and that conversion was a one-time cost, not a permanent monthly drain.

At a meeting yesterday about building better technology, one attendee passionately advocated trustworthy content, presented by trusted sources. Ah, I thought, she wants to reinvent the BBC. Doesn’t everyone?

Illustrations: Family watching television in 1958 (via Wikimedia).


In search of causality

The debates over children’s use of social media, screens, and phones continue, exacerbated in the UK by ongoing Parliamentary scrutiny of the Children’s Wellbeing and Schools bill and continuing disgust over Grok’s sexualized image generation. Robert Booth reports at the Guardian that the Center for Countering Digital Hate estimates that Grok AI generated 3 million sexualized images in under two weeks and that a third of them are still viewable on X. If so, X and Grok appear to be a more general problem than children’s access.

We continue to need better evidence establishing causality or its absence. This week, researchers from the Bradford Centre for Health Data Science (led by Dan Lewer) and the University of Cambridge (led by Amy Orben) announced a six-week trial that will attempt to find the actual impact on teens of limiting – not ending – social media access. The BBC reports that the trial will split 4,000 Bradford secondary school pupils into two groups. One will download an app that turns off access to services like TikTok and Snapchat from 9pm to 7am and limits use at other times to a “daily budget”. The restrictions won’t include WhatsApp, which the researchers recognize is central to many family groups. The other half will go on using social media as before.

The researchers will compare the two groups by assessing their levels of anxiety, depression, sleep, bullying, and time spent with friends and family.

In earlier research, Orben developed a framework for data donation, which allows teens to understand their own use of social media. Another forthcoming study, Youth Perspectives on Social Media Harms: A Large-Scale Micro-Narrative Study, collects 901 first-person tales from 18- to 22-year-olds in the UK. From these Orben’s group derive four types of harm: harms from other people’s behavior, personal harmful behavior evoked by social media, harms related to the content they encounter, and harms related to platform features. In the first category they include bullying and scams; in the second, compulsive use and social comparison; in the third, graphic material; and in the fourth, algorithmic manipulation. They also note the study’s limitations. A longer-term or differently-timed study might show different effects – during the study period the 2024 US presidential election took place. The teens’ stories don’t establish causality. Finally, there may be other harms not captured in this study.

The most important element, however, is that they sought the perspective of young people themselves, who are to date rarely heard in these discussions.

As this research begins, at Techdirt Mike Masnick reports on two newly completed papers, also covering teens and social media. The first, Social Media Use and Well-Being Across Adolescent Development, published in JAMA Pediatrics, is a three-year study of 100,991 Australian adolescents examining whether well-being was associated with social media use. The researchers, from the University of South Australia, found a U-shaped curve: moderate social media use was associated with the best outcomes, while both the highest users and the non-users showed less well-being. Girls benefited increasingly from moderate social media use from mid-adolescence onwards, while for boys non-use became increasingly problematic, leading to worse outcomes than high use by their late teens.

The second, a study from the University of Manchester published in the Journal of Public Health, followed a group of 25,000 11- to 14-year-olds to find out whether the use of technology such as social media and gaming accurately predicted later mental health issues. The study found no evidence that heavier use of social media or gaming led to increased symptoms of anxiety or depression in the following year.

In his discussion of these two papers, Masnick argues that this research gives weight to his contention that the widespread claim that social media is inherently harmful is wrong.

In the UK and elsewhere, however, politicians are proceeding on the basis that social media *is* inevitably harmful. This week, the government announced a consultation on children’s use of technology. The consultation seems, as Carly Page writes at The Register, geared toward increasing restrictions. Also this week, the House of Lords voted 261 to 150, defeating the government, to add an amendment to the Children’s Wellbeing and Schools bill that would require social media services to add age verification to block under-16s from accessing them within a year. MPs will now have to vote to remove the amendment or it will become law, a backdoor preemption of the House of Commons’ prerogative to legislate.

UK prime minister Keir Starmer has been edging toward a social media ban for under-16s; now he faces added pressure not only from the Lords but also from Conservative Party leader Kemi Badenoch and 61 MPs, who have sent an open letter supporting a ban like the one in Australia. Ofcom reports that 22% of children aged eight to 17 have a declared user age of over 18 – but also that often it’s with their parents’ help. Would this be different under a national ban?

Starmer reportedly wants to delay deciding until evidence from Australia and, one presumes, from the consultation, is available. A sensible idea we hope is not doomed to failure.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, which raised early alarm about kids online. Based on a fraudulent study, it nonetheless influenced policy-making for some years.

Also this week:
At the Plutopia podcast, we interview Dave Evans on his work to combat misinformation.


Split

In an abrupt reversal, UK prime minister Keir Starmer announced this week that the digital IDs he said in September would be mandatory for proving the right to work will now be…not so much. The announcement appears to reinstate the status quo: workers can continue proving their right to work by showing a passport or e-visa.

It’s not clear what led to the change, although some – commenters on social media, former Labour home secretary David Blunkett – suggest opponents “won”. The reality is that digital IDs are probably not really going away. However, making them optional is an important step in the right direction. A lot more is needed to develop a system that works for people instead of for governments.

***

Ten days on, the discovery that xAI’s Grok chatbot was being used to “nudify” images of women and children is still generating headlines, especially since the BBC reported that the Internet Watch Foundation had found “criminal images” of girls between 11 and 13 that appeared to have been Grok-generated. Child sexual abuse material is illegal in the UK, as in many other countries, no matter how it’s created or whether it’s real or synthetic. On Wednesday, Elon Musk asked on X if anyone could break Grok’s image moderation.

Last Friday, the Independent, among others, reported that X had turned off Grok’s image generation for all but the site’s (paying) verified users. On Monday, Starmer warned that X could lose the right to self-regulate if it could not control Grok. On Tuesday, Ofcom said it was launching an investigation, and Starmer told the House of Commons that X was “acting to ensure full compliance with the law”. In fact, it later came out, he was basing this information on media reports and had not himself been in contact with X. His government is now planning legislation to criminalize this type of software. Yesterday, Musk announced X would geoblock the AI tool in countries where it’s illegal. This morning, the Guardian reports that the feature is still not blocked in the Grok app.

As an unexpected side effect, these revelations have reignited divisions in the venerable and venerated elite scientists’ Royal Society, which elected Elon Musk an Overseas Fellow in 2018.

To recap: in August 2024, Nicola Davis reported at the Guardian that 74 Fellows had written to the Society calling for Elon Musk’s expulsion after Musk posted tweets promoting unrest in the UK and propagating scientific disinformation.

In late 2024, the developmental neuropsychologist Dorothy M. Bishop blogged that she had resigned from the Royal Society to protest Elon Musk’s continued membership as an Overseas Fellow.

Further resignations have followed. Next up, in February, was professor of systems biology Andrew Millar, who deplored the Society’s inaction. Around the same time, more than 1,000 scientists signed an open letter to the Society’s then-president Adrian Smith calling for Musk’s ouster.

In March, Andrea Sella, a chemist, returned the Society’s Michael Faraday prize for science communication, citing the society’s inaction. Also that month, on X, neural networking pioneer Geoffrey Hinton called for Musk’s expulsion. There was another burst of calls for Musk to be expelled in September, when he addressed a far-right rally organized by Tommy Robinson.

At the end of 2025, the Royal Society changed presidents. Back in April, the incoming president, geneticist and Nobel laureate Paul Nurse, taking the position for a rare second time, told The Times that he had written to Musk asking whether he could do something to improve the situation of American science, adding that given the damage he had caused to the “scientific endeavor in the United States” he should consider resigning from the Society.

In retrospect, more attention should have been paid to Nurse’s position that Musk should not be expelled, which he justified by saying that many Fellows were “odd”. The Guardian published more details about that correspondence in July.

A few days ago, professor of materials science Rachel Oliver published an open letter to Nurse asking him to reconsider his argument that Fellows should only be expelled if their science proved “fraudulent or highly defective”. Oliver argues that this stance grants “a licence to harass to the already powerful people on whom the Society bestows fellowship”.

She was responding to this week’s report, in which Nurse doubled down on those overlooked comments, arguing that the code of conduct Fellows cited to justify expulsion might need to be revised because it resembled an employer’s code of conduct, and Fellows are not employees. He also deflected by pointing to Fellows other than Musk, gesturing at a portrait of Isaac Newton and saying, “He was a very nasty piece of work, yet we revere him.” I’m not sure that “we tolerated assholes in the past so we should continue to do so” is the persuasive argument he thinks it is.

It’s also clear that the Royal Society will continue to face public and private censure, no matter what it does now. This row will resurface every time Musk is in the news. The Royal Society is damned whatever it decides; it can’t keep hoping Musk will do the gentlemanly thing and fall on his sword.

Illustrations: Sir Isaac Newton, as seen in the National Portrait Gallery, London (via Wikimedia).

Also this week:
At the Techgrumps podcast, #3.36, Men are weird: The Return of the Glasshole.


Disconnexion

One thing we left out in last week’s complaint is generative AI’s undoubted ability to magnify the worst of human online behavior. A few days ago, the world discovered that X’s chatbot, Grok, can be commanded to “nudify” images of women and children – that is, digitally remove their clothes without their consent. A number of commenters also note that some of the same British politicians who are calling out X and Grok about this and who more broadly insist on increasing restrictions in the name of online safety nonetheless continue to post there. Even Ashley St. Clair, the mother of one of Elon Musk’s sons, is unable to get these images taken down. Some ministers have called for banning this form of deepfake software.

Among those calling for Elon Musk to act “urgently” are technology secretary Liz Kendall and prime minister Keir Starmer. The BBC reported this morning (January 9) that the government is calling on Ofcom to use “all its powers”. At Variety, Naman Ramachandran reports that X has moved AI image editing behind a paywall.

On January 2, at the National Observer, Jimmy Thompson calls on the Canadian government to delete its accounts. On Wednesday, the Commons women and equalities committee announced it would stop using X. As of January 8, both Kendall and Starmer are still posting on X, along with the UK’s Supreme Court and the Regulatory Policy Committee and doubtless many others. Ofcom, the regulatory agency in charge of enforcing the Online Safety Act, posted a statement on January 5 saying it has contacted X and plans a “swift assessment to determine whether there are potential compliance issues that warrant investigation”. At the Online Safety Act Network, Lorna Woods explains the relevant law.

My guess is that few politicians manage their own social media – an extreme form of mental compartmentalization – and their aides are schooled in the belief that “we must meet the audience where they are”. In that sense, these accounts are not ordinary users, who use social media to connect to their friends and other interesting people. Politicians, like many others who are paid to show off in public, use social media to broadcast, not so much to participate. But much depends on whether you think that Grok’s behavior is one piece of a fundamental structural problem with X and its ownership or whether you believe it’s an isolated ill-thought-out feature to be solved by tweaking software, a distinction Jason Koebler explores at 404 Media.

The politicians’ accounts doubtless predate Musk’s takeover. Twitter was – and X is – small compared to other social media. But the short-burst style perfectly suited journalists, who gave it far more coverage than it probably deserved. Politicians go where they perceive the public to be, which is often signaled by media coverage.

It’s not necessarily wrong for politicians and government agencies to argue that they should be on X to serve their constituents who use it. But to legitimize that claim they should also be cross-posting on every significant platform, especially the open web. We can then argue about the threshold for “significant”. At a guess, it’s bigger than a blog but smaller than Mastodon, where politicians are notoriously absent.

***

The early 2020s’ exciting future of cryptocurrencies has gotten lost in the distraction of the last couple of years’ excitement over our new future of technologies pretending to be “smart”. In 2023’s “crypto winter”, we thought anyone still interested was either an early booster or someone who could smell profit. As Molly White wrote this week, they’ve spent the last two years nourishing grudges and building a political machine that could sink large parts of the economy.

More quietly, as Dave Birch predicted in 2017 (and repeated in his 2020 book, The Currency Cold War) “serious people” were considering their approach. Among them, Birch numbered banks, governments, and communities.

Now, governments are hatching proposals. As 2025 ended, the European Council backed the European Central Bank’s digital euro plan; the European Parliament will vote on it this year. The Financial Times reports that this electronic alternative to cash could help European central bankers claw back some control over electronic retail payments from the US organizations that dominate the field. The ECB hopes to start issuing the currency in 2029. In the UK, the Bank of England is mulling the design of a digital pound. The International Monetary Fund sees the digital euro as supporting financial stability.

Birch dates government interest to Facebook’s now-defunct 2019 cryptocurrency plan. Today, I imagine new motives: the US’s diminishing reliability as an ally raises the desirability of lessening reliance on its infrastructure generally. Visa, Mastercard, and other payment mechanisms largely transit US systems, a reality the FT says European banks are already working to change. In March, ECB board member Philip R. Lane argued that the digital euro will foster monetary autonomy.

We’ll see. The Economist writes that many countries are recognizing cash’s greater resilience, and are rethinking plans to go all-digital.

It remains hard to know how much central bank digital currencies will matter. As I wrote in 2023, there are few obvious benefits to individuals. For most of us the problem isn’t the mechanism for payments, it’s finding the money.

Illustrations: Bank of England facade.
