Saving no one

In the early 2010s, after “nano” and before “AI”, 3D printing was the technology that was going to change everything. Then it seemed to go quiet except for guns.

“First we will gain control over the shape of physical things. Then we will gain new levels of control over their composition, the materials they’re made of. Finally, we will gain control over the behavior of physical things,” Hod Lipson and Melba Kurman wrote in their 2013 book, Fabricated. As far as I can tell, we’re still pretty much in the era of making physical things that could be made by traditional methods rather than weird new shapes that could *only* be produced by additive manufacturing. More than 15 years after a fellow technology conference attendee excitedly lectured me that 3D printing was going to change everything, its growth remains largely hidden from most of us.

Until this past week, when I attended an event awash in puzzle makers and discovered that it’s been a godsend to them for making not only prototypes but also small runs of copies or published designs, freeing them from having to find space and capital for the kind of quantities required by traditional production. It’s good to see a formerly hyped technology supporting clever and entertaining human invention.

Exploding egg, anyone?

***

In one of the biggest fines in its history, the UK Information Commissioner’s Office has announced it is fining Reddit £14.5 million for failing to put in place an effective age verification mechanism to block under-13s from using the site, as Reddit’s own stated terms of service require. The story is somewhat confused by timing: the fine is under data protection law and relates to the period before the arrival of the Online Safety Act, but the OSA’s requirement for age verification brought the changes that sparked the fine. Reddit says it will appeal.

In the UK terms and conditions Reddit announced in June 2025, the company says that “by using the services, you state that…you are at least 13 years old”. But Reddit didn’t require proof, and the ICO says that many under-13s use(d) the platform.

In July, when the Online Safety Act came into effect, Reddit added an age gate of 18 for “mature” content. Unlike many other social media sites that are just giant pools of content sorted by curation or algorithm, Reddit is a large set of distinct subReddits. Each of these communities has its own rules, social norms, and, most important, human moderators. Because of this, it’s comparatively easy to mark a particular subReddit as “for adults only”. After the July change, anyone in the UK wishing to access one of those subReddits was asked to submit a selfie or an image of a government-issued ID in order to prove their age.

The ICO’s findings state that Reddit failed to protect under-13s from accessing content that placed them at risk; that it processed under-13s’ data unlawfully (because they are too young to meaningfully consent); and that a simple statement is not a sufficient age verification mechanism (which is made clear in the OSA).

A Reddit spokesperson told the Guardian: “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”

I take their point; I’d rather skip the “mature” content than bear the privacy risk of uploading personal data to whatever third-party company Reddit is using for age verification. Last July, I decided I would just be a child. (Although: my Reddit account dates to 2015, so they could just do the math.)

Turns out, it may have been a wise decision. Reddit, saying it didn’t want to hold users’ personal data, chose the age verification provider Persona.

Persona deserves a look. Last week, Discord announced it would begin treating all users as teens until they’d been verified, also using Persona. The result, as Ashley Belanger reports at Ars Technica, was a user backlash. First, because the last time Discord tried this, its now-former age verification provider’s pile of 70,000 users’ age check information was hacked.

Second, because The Rage reports that a group of security researchers found a Persona front end exposed to the open Internet on a US government server. On examination, that code shows that Persona performs 269 different verification checks and scours the Internet and government sources using your selfie and facial recognition. Discord has now announced it will delay introducing age verification – and won’t be using Persona after an apparently unsatisfactory trial in the UK last year. In a blog posting, Discord says that, like Reddit, it does not want to know its users’ identification details. It is adding more verification options.

If the world had already had a set of established trustworthy companies that specialized in age verification when the OSA came into effect, then it would make sense to turn to them to provide that service. But we aren’t in that situation. Instead, although providers have been working for more than a decade to build such systems, their deployment at scale is new.

Part of keeping children – and the rest of us – safe is protecting security and privacy – and child safety campaigners’ refusal to accept this has been an issue for decades. Creating new privacy risks doesn’t keep anyone safer – including children.

Illustrations: Six-panel early 1970s cartoon strip, “What the User Wanted”.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

In search of the future Internet

“What kind of Internet do you want [him] to inherit?” a friend asked me. “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But notably, many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered); and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven writes, after attending the Open Source Policy Summit, that Europe is reassessing its technological reliance on US IT services, which carries the risk that a US president could order disconnection. The lack of billion-dollar European technology companies leads people to forget the technology invented here that embraced openness instead: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet being built understand that underneath the social media and smartphones lies the physical layer, the infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP 2024 for ignoring physical infrastructure, is working on a EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. Here is Anthropic in a blog post this week, presumably responding to OpenAI’s plan to add advertising to ChatGPT:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: we may need the money. Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week:
At the Plutopia podcast we talk to science fiction writer Ken MacLeod.


Universal service

Last week a couple of friends and I got around to trying out Techdirt‘s 2025 game, One Billion Users. This card-based game has each player trying to build a social network while keeping toxicity under control.

First impression: the instructions are bananas complicated. There are users, influencers, events, hotfixes, safeguards…and a Troll, which everyone who understood the instructions tried to push off on someone else ASAP. One of our number became the gamemaster, reading out the instructions we struggled to remember. You win by adding (and subtracting) points based on the cards you’re holding when the GAME OVER card turns up.

Even in a single game where we were feeling our way through, different strategies emerged. One of our number did her best to build a smaller, friendlier network. She succeeded – but it wasn’t a winning strategy. Without any thought to planning, my network ended up medium-sized. I was constrained by an event card stopping me from adding new users, and then, catastrophically, “gifted” the Troll. I came in second. The winner had built a huge number of users, successfully dumped the troll (thank you *so* much), and acquired several influencers who brought their own communities. We eventually identified the networks we’d built, in order: Tumblr, Twitter (not, I think, X), Facebook.

In a more detailed review, Adi Robertson at The Verge traces the roots of the game’s design to a game we played a lot in my childhood but that I no longer remember very well: Mille Bornes (“A Thousand Milestones”). A change of theme, some added twists, I see it now.

We will try this game again. I didn’t *want* to build the Torment Nexus!

***

It appears the BBC wants to switch off Freeview in 2034. For non-UK readers: Freeview is digital terrestrial television – that is, broadcast. It’s operated by a joint venture among the UK’s public service broadcasters (PSBs) – the BBC, ITV, Channel 4, and Channel 5. Given any television made since 2008, or another receiver device, you can access 85 channels without paying anything beyond the BBC’s license fee. That, too, will soon be under review; the BBC’s charter is due for renewal in 2027. Freeview is one piece of a larger puzzle.

As Mark Sweney explains at the Guardian, the Department of Culture, Media, and Sport is reviewing options for Freeview’s future, and is considering three alternatives presented by Ofcom (PDF), the broadcast regulator. One: upgrade the present infrastructure. Two: maintain it as a cut-down service offering only the PSBs’ core channels. Three: move entirely to streaming.

The broadcasters, Sweney writes, favor the last of these, choosing 2034 as a logical time to shut down Freeview because that’s when their contract with their network operator expires. By then, projections say that about 1.8 million people will still be dependent on Freeview, a long way down from today’s estimated 12 million. Many more homes, like mine, use both broadcast and streaming. The Ofcom report says that in 2023 39% of TV viewing was via broadcast.

Most of the discussion focuses on costs: updating the Freeview infrastructure is expensive for broadcasters, switching to streaming is an ongoing expense for individuals. Households would need a broadband subscription, new equipment, and the streaming app Freely, which was launched in 2024. There is a petition opposing the change.

This discussion is happening shortly after the British Audience Research Board announced that the number of YouTube viewers passed the number of BBC viewers for the first time. However, as Dekan Apajee writes at The Conversation, even on YouTube people are still watching the BBC’s output, even if they’re not aware of it. Apajee is more concerned about context and finding ways to distinguish public service broadcasting and its values from the jumble of everything else on YouTube. How do the PSBs meet the requirement for universal service? Ofcom’s more recent report on the future of public service media (PDF) also warns of this loss of discoverability amid increased competition.

Adding to that, the BBC is reportedly considering a formal content agreement with YouTube that would have it publish some younger-oriented content there before showing it on its own platforms. It’s odd timing, when so many are warning against depending on US technology, as the economist Paul Krugman wrote yesterday. The loss of audience data has been a theme in the rise of streamers – and YouTube has just withdrawn from BARB’s audience measurement system, saying the organization violated YouTube’s terms and conditions.

Remarkably little of this discussion considers the potential loss of privacy inherent in forcing everyone to move to “smart” data collection machines (TVs, phones, computers). Is there a future in which it’s still possible to watch video content anonymously? (Yes, but they call it “piracy”.)

The BBC seems to believe that transitioning to streaming can be smooth. Sweney cites the years to 2012, when analog TV was switched off in favor of digital broadcast, which he describes as “near seamless” despite warnings of potential exclusion. Maybe so, but a lot of televisions were wastefully dumped, and that conversion was a one-time cost, not a permanent monthly drain.

At a meeting yesterday about building better technology, one attendee passionately advocated trustworthy content, presented by trusted sources. Ah, I thought, she wants to reinvent the BBC. Doesn’t everyone?

Illustrations: Family watching television in 1958 (via Wikimedia).


In search of causality

The debates over children’s use of social media, screens, and phones continue, exacerbated in the UK by ongoing Parliamentary scrutiny of the Children’s Wellbeing and Schools bill and continuing disgust over Grok‘s sexualized image generation. Robert Booth reports at the Guardian that the Center for Countering Digital Hate estimates that Grok AI generated 3 million sexualized images in under two weeks and that a third of them are still viewable on X. In that case, X and Grok appear to be a more general problem than children’s access.

We continue to need better evidence establishing causality or its absence. This week, researchers from the Bradford Centre for Health Data Science (led by Dan Lewer) and the University of Cambridge (led by Amy Orben) announced a six-week trial that will attempt to find the actual impact on teens of limiting – not ending – social media access. The BBC reports that the trial will split 4,000 Bradford secondary school pupils into two groups. One will download an app that turns off access to services like TikTok and Snapchat from 9pm to 7am and limits use at other times to a “daily budget”. The restrictions won’t include WhatsApp, which the researchers recognize is central to many family groups. The other half will go on using social media as before.

The researchers will compare the two groups by assessing their levels of anxiety, depression, sleep, bullying, and time spent with friends and family.

In earlier research, Orben developed a framework for data donation, which allows teens to understand their own use of social media. Another forthcoming study, Youth Perspectives on Social Media Harms: A Large-Scale Micro-Narrative Study, collects 901 first-person tales from 18- to 22-year-olds in the UK. From these, Orben’s group derives four types of harm: harms from other people’s behavior, personal harmful behavior evoked by social media, harms related to the content they encounter, and harms related to platform features. In the first category they include bullying and scams; in the second, compulsive use and social comparison; in the third, graphic material; and in the fourth, algorithmic manipulation. They also note the study’s limitations. A longer-term or differently-timed study might show different effects – during the study period the 2024 US presidential election took place. The teens’ stories don’t establish causality. Finally, there may be other harms not captured in this study.

The most important element, however, is that they sought the perspective of young people themselves, who are to date rarely heard in these discussions.

As this research begins, at Techdirt Mike Masnick reports on two newly finished papers also covering teens and social media. The first, Social Media Use and Well-Being Across Adolescent Development, published in JAMA Pediatrics, is a three-year study of 100,991 Australian adolescents to find whether well-being was associated with social media use. The researchers, from the University of South Australia, found a U-shaped curve: moderate social media use was associated with the best outcomes, while both the highest users and the non-users showed less well-being. Girls benefited increasingly from moderate social media use from mid-adolescence onwards, while for boys non-use became increasingly problematic, leading to worse outcomes than high use by their late teens.

The second, a study from the University of Manchester published in the Journal of Public Health, followed a group of 25,000 11- to 14-year-olds to find out whether the use of technology such as social media and gaming accurately predicted later mental health issues. The study found no evidence that heavier use of social media or gaming led to increased symptoms of anxiety or depression in the following year.

In his discussion of these two papers, Masnick argues that this research gives weight to his contention that the widespread claim that social media is inherently harmful is wrong.

In the UK and elsewhere, however, politicians are proceeding on the basis that social media *is* inevitably harmful. This week, the government announced a consultation on children’s use of technology. The consultation seems, as Carly Page writes at The Register, geared toward increasing restrictions. Also this week, the House of Lords voted 261 to 150 to defeat the government and add an amendment to the Children’s Wellbeing and Schools bill that would require social media services to add age verification to block under-16s from accessing them within a year. MPs will now have to vote to remove the amendment or it will become law, a backdoor preemption of the House of Commons’ prerogative to legislate.

UK prime minister Keir Starmer has been edging toward a social media ban for under-16s, under added pressure now from not only the Lords but also the Conservative Party leader, Kemi Badenoch, and 61 MPs, who sent an open letter supporting a ban like the one in Australia. Ofcom reports that 22% of children aged eight to 17 have a false user age of over 18 – but also that it’s often with their parents’ help. Would this be different under a national ban?

Starmer reportedly wants to delay deciding until evidence from Australia and, one presumes, from the consultation, is available. A sensible idea we hope is not doomed to failure.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, which raised early alarm about kids online. Based on a fraudulent study, it nonetheless influenced policy-making for some years.

Also this week:
At the Plutopia podcast, we interview Dave Evans on his work to combat misinformation.


Disconnexion

One thing we left out in last week’s complaint is generative AI’s undoubted ability to magnify the worst of human online behavior. A few days ago, the world discovered that X’s chatbot, Grok, can be commanded to “nudify” images of women and children – that is, digitally remove their clothes without their consent. A number of commenters also note that some of the same British politicians who are calling out X and Grok about this and who more broadly insist on increasing restrictions in the name of online safety nonetheless continue to post there. Even Ashley St. Clair, the mother of one of Elon Musk’s sons, is unable to get these images taken down. Some ministers have called for banning this form of deepfake software.

Among those calling for Elon Musk to act “urgently” are technology secretary Liz Kendall and prime minister Keir Starmer. The BBC reported this morning (January 9) that the government is calling on Ofcom to use “all its powers”. At Variety, Naman Ramachandran reports that X has moved AI image editing behind a paywall.

On January 2, at the National Observer, Jimmy Thompson calls on the Canadian government to delete their accounts. On Wednesday, the Commons women and equalities committee announced it would stop using X. As of January 8, both Kendall and Starmer are still posting on X, along with the UK’s Supreme Court and the Regulatory Policy committee and doubtless many others. Ofcom, the regulatory agency in charge of enforcing the Online Safety Act, posted a statement on January 5 saying it has contacted X and plans a “swift assessment to determine whether there are potential compliance issues that warrant investigation”. At the Online Safety Act Network, Lorna Woods explains the relevant law.

My guess is that few politicians manage their own social media – an extreme form of mental compartmentalization – and their aides are schooled in the belief that “we must meet the audience where they are”. In that sense, these accounts are not ordinary users, who use social media to connect to their friends and other interesting people. Politicians, like many others who are paid to show off in public, use social media to broadcast, not so much to participate. But much depends on whether you think that Grok’s behavior is one piece of a fundamental structural problem with X and its ownership or whether you believe it’s an isolated ill-thought-out feature to be solved by tweaking software, a distinction Jason Koebler explores at 404 Media.

The politicians’ accounts doubtless predate Musk’s takeover. Twitter was – and X is – small compared to other social media. But the short-burst style perfectly suited journalists, who gave it far more coverage than it probably deserved. Politicians go where they perceive the public to be, which is often signaled by media coverage.

It’s not necessarily wrong for politicians and government agencies to argue that they should be on X to serve their constituents who use it. But to legitimize that claim they should also be cross-posting on every significant platform, especially the open web. We can then argue about the threshold for “significant”. At a guess, it’s bigger than a blog but smaller than Mastodon, where politicians are notoriously absent.

***

The early 2020s’ exciting future of cryptocurrencies has gotten lost in the distraction of the last couple of years’ excitement over our new future of technologies pretending to be “smart”. In 2023’s “crypto winter”, we thought anyone still interested was either an early booster or someone who thought they could smell profit. As Molly White wrote this week, they’ve spent the last two years nourishing grudges and building a political machine that could sink large parts of the economy.

More quietly, as Dave Birch predicted in 2017 (and repeated in his 2020 book, The Currency Cold War) “serious people” were considering their approach. Among them, Birch numbered banks, governments, and communities.

Now, governments are hatching proposals. As 2025 ended, the European Council backed the European Central Bank’s digital euro plan; the European Parliament will vote on it this year. The Financial Times reports that this electronic alternative to cash could help European central bankers pull back some control over electronic retail payments from the US organizations that dominate the field. The ECB hopes to start issuing the currency in 2029. In the UK, the Bank of England is mulling the design of the digital pound. The International Monetary Fund sees the digital euro as a continuation of financial stability.

Birch dates government interest to Facebook’s now-defunct 2019 cryptocurrency plan. Today, I imagine new motives: the US’s diminishing reliability as an ally raises the desirability of lessening reliance on its infrastructure generally. Visa, Mastercard, and other payment mechanisms largely transit US systems, a reality the FT says European banks are already working to change. In March, ECB board member Philip R. Lane argued that the digital euro will foster monetary autonomy.

We’ll see. The Economist writes that many countries are recognizing cash’s greater resilience, and are rethinking plans to go all-digital.

It remains hard to know how much central bank digital currencies will matter. As I wrote in 2023, there are few obvious benefits to individuals. For most of us the problem isn’t the mechanism for payments, it’s finding the money.

Illustrations: Bank of England facade.


The AI who was God

Three subjects dominated 2025: increasing AI infestation, expanding surveillance use of biometrics, and age verification and online safety. The last spent the year spreading across the world, including, most recently, to Louisiana. There, on December 22, a US District Court blocked the law on First Amendment grounds in a suit brought by the trade association NetChoice. Less than a week earlier, on similar grounds, NetChoice also won a suit in Arkansas against a law that would have penalized platforms for “using designs or algorithms” that they “know or should have known” could harm users by, for example, leading to addiction, drug use, or self-harm. The judge in this case called the law “unconstitutionally vague”. Personally, I think it would be hard to prove cause and effect.

However, much of the rest of the year felt in many ways like rinse-and-repeat, only bigger and more frustrating. The immediate future – 2026 – therefore looks like more of all of those perennial topics, especially surveillance. This time next year we will still be fighting over age verification, network neutrality, national identification systems, surveillance, data protection, security issues surrounding the Internet of Things and other “smart” devices, social media bans, and access to strong encryption, along with other perennials such as copyright and digital sovereignty.

It is however possible that AI might have gone quiet by then. Three types of reasons: financial, technical, and social.

To take finances first, concerns about the AI bubble have been building all year. In the latest of his series of diatribes about this and the “rot economy”, Ed Zitron writes that AI is bringing “enshittification Stage Four”, in which companies, having already turned on their users and customers, turn on their shareholders. Zitron traces the circular deals, the massive debt, the extravagant claims, and the disproportionately small revenues, and invokes the adage, if something can’t go on forever, it will stop.

On the technical side, no matter what Elon Musk predicts, more sober commentary at MIT Technology Review is calling for a “reset”. As Adam Becker writes in More Everything Forever, one thing that can’t go on ad infinitum is exponentially increasing computing power: exponential growth always hits resource limits. It is entirely possible that come 2027 we’ll have run out of all sorts of road on this current paradigm of “AI”. If so, expect to hear a lot more about how quantum is ready to remake the world. Generative AI will still be bigger ten years from now (just like the Internet in 2000, when the dot-com boom crashed), but it won’t become sentient and fix climate change.
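Becker’s point about exponential growth hitting resource limits can be made with trivial arithmetic. A toy sketch (in Python, with invented numbers – this is purely illustrative, not a model of actual compute economics): if demand for computing power doubles every year while available resources are capped, the ceiling arrives within very few years.

```python
# Purely illustrative: how quickly exponentially growing demand
# exceeds a fixed resource ceiling. All numbers are made up.
def years_until_ceiling(demand: float, growth_rate: float, ceiling: float) -> int:
    """Count the years until exponentially growing demand exceeds a ceiling."""
    years = 0
    while demand <= ceiling:
        demand *= growth_rate  # demand grows by the same factor each year
        years += 1
    return years

# Demand doubling yearly against a ceiling 1,000x today's usage:
print(years_until_ceiling(1.0, 2.0, 1000.0))  # prints 10
```

Even granting a ceiling a thousand times today’s capacity, doubling demand blows through it in a decade; that is the sense in which exponential growth “always hits resource limits”.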

Brief digression. On Mastodon, Icelandic web developer Baldur Bjarnason posts that he’s hearing people claim that studies showing that large language models won’t lead to AGI are “whitewashing creationism”. Uh…huh?

On the social side, pressure is mounting to curb the industry’s growth. Politicians including US senator Bernie Sanders (I-VT) and Florida governor Ron DeSantis (R) are working to slow data center construction. Data centers guzzle power and water, as Zitron also explains, and nearby residents pay both directly and indirectly.

Other harms keep mounting up. The year’s Retraction Watch annual report includes myriad fake references. Salesforce fired 4,000 people before only now realizing that large language models can’t do their jobs; other companies nonetheless want to copy it. Organizers canceled a concert by Canadian musician Ashley MacIsaac after a Google AI summary wrongly said he’d been convicted of sexual assault.

At Utah’s Park Record, Cannon Taylor reported recently that in late October an AI summary indicated that a West Jordan, Utah police officer had morphed into a frog. Simple explanation: a Harry Potter movie playing in the background had been recorded by the officer’s bodycam during an investigation. Per the story, the summary seemed humanly written until, “And then the officer turned into a frog, and a magic book appeared and began granting wishes.”

The story goes on to report several different AI software trials. One, used in Summit County, has a setting that inserts deliberate errors into the summaries to expose officers who don’t thoroughly check them. With that turned off, the time savings over having officers write their own summaries are considerable. Summit County turned it on. The time savings vanished. The county decided to pass.

Back when pranksters used to deface web pages for fun, a pastime more embarrassing than harmful, I thought it would be much worse when they learned to make small, hard-to-detect changes that poisoned the information supply.

AI is perfect for automating this.

In their “FakeParts” paper (PDF), researchers at the Institut Polytechnique de Paris discuss a disturbing example: subtle, localized AI-driven changes to otherwise real videos. These fakeparts blend in seamlessly; identifying them is far harder than spotting a complete fake, which on its own is hard enough. The researchers warn that subtle changes to facial expressions or gestures can change the emotional content of genuine statements, great for creating targeted attacks and sophisticated disinformation campaigns.

Cut to James Thurber’s 1939 fable, The Owl Who Was God. If AI kills us, it will be because we trust it without applying common sense.

Illustrations: Barred owl (photo by Steve Bellovin, used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Slop

Sometimes it doesn’t pay to be first. iRobot, the maker of the Roomba, has filed for Chapter 11 bankruptcy protection and been acquired by Picea, one of its Chinese suppliers, Lauren Almeida reports at the Guardian. The company’s value has cratered since 2021.

Given the wild enthusiasm that greeted the Roomba’s release in 2002, it seems incredible. Years before then, I recall an event where a speaker whose identity I don’t remember said that ever since he had mentioned the possibility of a robot vacuum, sometime in the 1960s, he’d gotten thousands of letters asking when it would be ready. There was definitely customer demand. It helped that the Roomba itself was kind of cute as it banged randomly into furniture. People named them, and took them on vacation. But, as often happens, the Roomba’s success attracted lower-cost competitors, and the first mover failed to keep up.

I got one in 2003. After a great few months, I realized that Roombas are not compatible with long hair, which ties them into knots that take longer to cut out than vacuuming. I gave it away within a year and haven’t tried again.

At Mashable, Leah Stodart warns that although the Roombas people already have will continue to work “for now”, users can’t be confident that this state of affairs will continue. Like so many other things that used to be things we owned and are now things we subscribe to (but still think we “buy”), newer-model Roombas are controlled by an app that the manufacturer can change or discontinue at will. She calls it “unplanned obsolescence”. Her advice not to buy a new one this year is sound from the consumer’s point of view, but hardly likely to help the company survive.

***

If generative AI is so great, why is everyone forcing it on us? The latest example, Luke James reports at Tom’s Hardware, is LG “smart” TVs whose users woke up the other day to find a new update had installed “CoPilot: Your AI Companion” without asking permission and that there was no option to remove it. The most you can do to disable it, James says, is keep your TV disconnected from the Internet.

There are of course many more, the automated summaries popping up everywhere being the most obvious. Then, Matthew Gault reports at 404 Media, a Discord moderator and an Anthropic executive added Anthropic’s Claude chatbot to a community for queer gamers, who had voted to restrict Claude to its own channel. Result: major exodus. Duh.

And, of course, as Lance Ulanoff reminds at TechRadar, there is “AI slop” everywhere – music playlists, YouTube videos, ebooks – threatening people’s livelihoods even though, as Cory Doctorow has written, “AI can’t do your job. But an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job.” For a while, anyway: Microsoft is halving its sales targets for AI.

And thus we get “slop” as the word of the year, per Merriam-Webster. Any time companies are this intent on foisting something on us – chatbots, ads – you have to know that they’re intent on favoring their interests, not ours.

***

Last week, Customs and Border Protection published a notice in the Federal Register proposing new rules for foreigners traveling to the US on an ESTA (“Electronic System for Travel Authorization”) as part of the visa waiver program. It has drawn a lot of discussion in the UK, which is one of the 42 affected countries. Under the new rules, applicants must install CBP’s app, into which they must submit a massive load of “high-value” personal information. The list is long, allows for a so-far-imaginary future of DNA sampling, and expects you to be able to give five years’ worth of family members’ residences, phone numbers, and places of birth, and all the email addresses you’ve used for ten years. CBP thinks the average applicant should be able to complete it on their smartphone in 22 minutes. I think it would take hours of painful, resentful typing on a stupid touch keyboard, and even then I doubt I could fill it out with any certainty that the information I supplied was complete or accurate. Data collection at this scale makes it easy to find an error to use as an excuse to deny entry to or deport someone you want to get rid of. As Edward Hasbrouck writes at Papers, Please, “Welcome to the 2026 World Cup”.

“They have to be planning to use AI on all that data,” a friend commented last week. Probably – to build social graphs and find connections deemed suspicious. Privacy International predicts that the masses of data being demanded will in fact enable the AI tools necessary to implement automated decision making, and calls the proposals disproportionate for “a family’s visit to Disney World”.

One of the problems Hasbrouck highlights while opposing this level of suspicionless data collection is that CBP has not provided any way for would-be respondents to the Federal Register notice to examine the app’s source code. What other data might it be collecting?

As Hasbrouck adds in a follow-up, the rules the US imposes on visitors are often adopted by other countries as requirements for US travelers. In this game of ping-pong escalation, no one wins.

Review: The Seven Rules of Trust

The Seven Rules of Trust: Why It Is Today’s Essential Superpower
by Jimmy Wales
Bloomsbury
ISBN: 978-1-5266-6501-0

Probably most people have either forgotten or never known that when Jimmy Wales first founded Wikipedia it was widely criticized. A lot of people didn’t believe an encyclopedia written and edited by volunteers could be any good. Many others believed free access would destroy Britannica’s business model, and reacted resentfully. Teachers warned students against using it, despite the fact that Wikipedia’s talk pages offer rare transparency into how knowledge is curated.

Now we know the Internet is big enough for both Wikipedia and Britannica.

Much of Wikipedia’s immediate value lay in its infinite expandability; it covered in detail many subjects the more austere Britannica considered unworthy. But, as Wales writes at the beginning of his recent book, The Seven Rules of Trust, Wikipedia’s biggest challenge was finding a way to become trusted. Britannica must have faced this too, once. Its solution was to build upon the reputation of the paid experts who write its entries. Wikipedia settled on passion, transparency, and increasingly rigorous referencing. As it turns out, collectively we know a lot. Today, Wikipedia is nearly 100 times the size of Britannica, has hundreds of language editions, and is so widely trusted that most of us don’t even think about how often we consult it.

In The Seven Rules of Trust, Wales tells the story of how Wikipedia got from joke to trusted resource. It began, he says, with its editors trusting each other. For this part of his story, he relies on Frances Frei’s model of trust, a triangle balancing authenticity, empathy, and logic. Editors’ trust enabled the collaboration that could build public trust in their work, which is guided by Wikipedia’s five pillars.

Wales’s seven rules are not complicated: trust is personal, even at scale; people are born to connect and collaborate; successful collaboration requires a clear positive shared purpose; give trust to get trust; practice civility; stick to your mission and avoid getting involved in others’ disputes; embrace transparency. Some of these could be reframed as the traditional virtues, as when Wales talks about the principle of “assume good faith” when trying to negotiate the diversity of others’ opinions to reach consensus on how to present a topic. I think of this as “charity”. Either way, it’s not meant to be infinite; good faith can be abused, and Wales goes on to talk about how Wikipedia handles trolls, self-promoters, and other problems.

Yet, Wales’s account feels rosy. Many of his stories about remediating the site’s flaws revolve around one or two individuals who personally built up areas such as Wikipedia’s coverage of female scientists. I’m not sure he’s in a position to recognize how often would-be contributors are quickly deterred by an editor fiercely defending their domain or how difficult it’s become to create a new page and make sure it stays up. And, although he nods at the hope that the book will help recruit new editors, he doesn’t discuss the problem of churn Wikipedia surely faces.

Having steered the creation of something as gigantic and seemingly unlikely as Wikipedia, Wales has certainly earned the right to explain how he did it in the hope of helping others embarking on similarly large and unlikely projects. Wales argues that trust has enabled diversity of opinion, and the resulting internal disagreement has improved Wikipedia’s quality. Almost certainly true, but hard to apply to more diffuse missions; see today’s cross-party politics.

Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler finds in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes it easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The incident reminded me that this type of fraud in general seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee Inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler is correct in his assessment, until or unless advertisers rebel the incentives are misplaced and change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4Chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transactions Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that it is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, on X Byrne is advocating that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one. The draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July in a report the House of Representatives Judiciary Committee called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “lowest common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at ACLU.


Software is still forever

On October 14, a few months after the tenth anniversary of its launch, Microsoft will end support for Windows 10. That is, Microsoft will no longer issue feature or security updates or provide technical support, and everyone is supposed to either upgrade their computers to Windows 11 or, if Microsoft’s installer deems the hardware inadequate, replace them with newer models. People who “need more time”, in the company’s phrasing, can buy a year’s worth of security updates. Either way, Microsoft profits at our expense.

In 2014, Microsoft similarly end-of-lifed 13-year-old Windows XP. At the time, many were unsympathetic to complaints about it, thinking it unreasonable to expect a company to maintain software for that long. Yet it was obvious even then that software lives on with or without support for far longer than people expect, and also that trashing millions of functional computers was stupidly wasteful. Microsoft is giving Windows 10 a *shorter* life, which is rather obviously the wrong direction for a planet drowning in electronic waste.

XP’s end came at a time when the computer industry was transitioning from adolescence to maturity. As long as personal computing was constrained by the limited capabilities of hardware, and research and development was improving them at a fast pace, a software company like Microsoft could count on frequent new sales. By 2014, that happy time had ended, and although computers continue to add power and speed, it’s not coming back. The same pattern has been repeated with phones, which no longer improve on an 18-month cycle as in the 2010s, and cameras.

For the vast majority, there’s no reason to replace their old machine unless a non-replaceable part is failing – and there should be less of that as manufacturers are forced to embrace repairability. Significantly, there’s less and less difference for many of us if we keep the old hardware and switch to Linux, eliminating Microsoft entirely.

Those fast-moving days were real obsolescence. What we have now is what we used to call “planned obsolescence”. That is, *forced* obsolescence that companies impose on us because it’s convenient and profitable for *them*.

This time round, people are more critical, not least because of the vast amounts of ewaste being generated. The Public Interest Research Group has written an open letter asking people to petition Microsoft to extend free support for Windows 10. As Ed Bott explains at ZDNet, you do have the option of kicking the can down the road by paying for updates for another three years.

The other antisocial side of terminating free security updates is that millions of those still-functional machines will remain in use, and will be increasingly insecure as new vulnerabilities are discovered and left unpatched.

Simultaneously, Windows is enshittifying: it’s harder to run Windows without a Microsoft login, avoid stupid gewgaws and unwanted news headlines, or turn off its “Copilot AI”. Tom Warren reports at The Verge that Microsoft wants to turn Copilot into an agent that can book restaurants and control its Edge browser. There are, it appears, ways to defeat all this in Windows 11, but for how long?

In a piece on solar technology, Doctorow outlines the process by which technology companies seize control once they can no longer rely on consumer demand to drive sales. They lock down their technology if they can, lock in customers, add advertising, and block market entry, claiming safety and/or security make it necessary. They write and lobby for legislation that enshrines their advantage. And they use technological changes to render past products obsolete. Many think this is the real story behind the insistence on forcing unwanted “AI” features into everything: it’s the one thing they can do to make their offerings sound new.

Seen in that light, the rush to build “AI” into everything becomes a rush to find a way to force people to buy new stuff. The problem is that – it feels like – most people don’t see much benefit in it, and go around turning off the AI features that are forced on them. Microsoft’s Recall feature, which takes a screen snapshot every few seconds, was so controversial at launch that the company rolled it back – for a while, anyway.

Carelessness about ewaste is everywhere, particularly with respect to the Internet of Things. This week: Logitech’s Pop smart home buttons. At least when Google ended support for older Nest thermostats they could go on working as “dumb” thermostats (which honestly seems like the best kind).

Ewaste is getting a whole lot worse when it desperately needs to be getting a whole lot better.

***

In the ongoing rollout of the Online Safety Act and age verification update, at 404 Media, Joseph Cox reports that Discord has become the first site reporting a hack of age verification data. Hackers have collected data pertaining to 70,000 users, including selfies, identity documents, email addresses, approximate residences, and so on, and are trying to extort Discord, which says the hackers breached one of its third-party vendors that handles age-related appeals. Security practitioners warned about this from the beginning.

In addition, Ofcom has launched a new consultation for the next round of Online Safety Act enforcement. Up next are livestreaming and algorithmic recommendations; the Open Rights Group has an explainer, as does lawyer Graham Smith. The consultation closes on October 20.

Illustrations: One use for old computers – movie stardom, as here in Brazil.
