Saving no one

In the early 2010s, after “nano” and before “AI”, 3D printing was the technology that was going to change everything. Then it seemed to go quiet except for guns.

“First we will gain control over the shape of physical things. Then we will gain new levels of control over their composition, the materials they’re made of. Finally, we will gain control over the behavior of physical things,” Hod Lipson and Melba Kurman wrote in their 2013 book, Fabricated. As far as I can tell, we’re still pretty much in the era of making physical things that could be made by traditional methods rather than weird new shapes that could *only* be produced by additive manufacturing. More than 15 years after a fellow technology conference attendee excitedly lectured me that 3D printing was going to change everything, its growth remains largely hidden from most of us.

Until this past week, when I attended an event awash in puzzle makers and discovered that it’s been a godsend to them for making not only prototypes but also small runs of copies or published designs, freeing them from having to find space and capital for the kind of quantities required by traditional production. It’s good to see a formerly hyped technology supporting clever and entertaining human invention.

Exploding egg, anyone?

***

In one of the biggest fines in its history, the UK Information Commissioner’s Office has announced it is fining Reddit £14.5 million for failing to put in place an effective age verification mechanism to block under-13s, who are barred from using the site under Reddit’s own terms of service. The story is somewhat confused by timing: the fine is issued under data protection law and relates to the period before the arrival of the Online Safety Act, but it was the OSA’s age verification requirement that brought the changes that sparked the fine. Reddit says it will appeal.

In the UK terms and conditions Reddit announced in June 2025, the company says that “by using the services, you state that…you are at least 13 years old”. But Reddit didn’t require proof, and the ICO says that many under-13s use(d) the platform.

In July, when the Online Safety Act came into effect, Reddit added an age gate of 18 for “mature” content. Unlike many other social media sites that are just giant pools of content sorted by curation or algorithm, Reddit is a large set of distinct subReddits. Each of these communities has its own rules, social norms, and, most important, human moderators. Because of this, it’s comparatively easy to mark a particular subReddit as “for adults only”. After the July change, anyone in the UK wishing to access one of those subReddits was asked to submit a selfie or an image of a government-issued ID in order to prove their age.

The ICO’s findings state that Reddit failed to protect under-13s from accessing content that placed them at risk; that it processed under-13s’ data unlawfully (because they are too young to meaningfully consent); and that a simple statement is not a sufficient age verification mechanism (which is made clear in the OSA).

A Reddit spokesperson told the Guardian: “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”

I take their point; I’d rather skip the “mature” content than bear the privacy risk of uploading personal data to whatever third-party company Reddit is using for age verification. Last July, I decided I would just be a child. (Although: my Reddit account dates to 2015, so they could just do the math.)

Turns out, it may have been a wise decision. Reddit, saying it didn’t want to hold users’ personal data, chose the age verification provider Persona.

Persona deserves a look. Last week, Discord announced it would begin treating all users as teens until they’d been verified, also using Persona. The result, as Ashley Belanger reports at Ars Technica, was a user backlash. First, because the last time Discord tried this, its now-former age verification provider was hacked, exposing 70,000 users’ age check information.

Second, because The Rage reports that a group of security researchers found a Persona front end exposed to the open Internet on a US government server. On examination, that code shows that Persona performs 269 different verification checks and scours the Internet and government sources using your selfie and facial recognition. Discord has now announced it will delay introducing age verification – and won’t be using Persona, after an apparently unsatisfactory trial in the UK last year. In a blog posting, Discord says that, like Reddit, it does not want to know its users’ identification details. It is adding more verification options.

If the world had already had a set of established trustworthy companies that specialized in age verification when the OSA came into effect, then it would make sense to turn to them to provide that service. But we aren’t in that situation. Instead, although providers have been working for more than a decade to build such systems, their deployment at scale is new.

Part of keeping children – and the rest of us – safe is protecting security and privacy – and child safety campaigners’ refusal to accept this has been an issue for decades. Creating new privacy risks doesn’t keep anyone safer – including children.

Illustrations: Six-panel early 1970s cartoon strip, “What the User Wanted”.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Information wants to be surveilled

It has long been the case that the person sitting next to you on an airplane may have paid a very different price for their seat than you did. For two reasons. First, airlines don’t price seats – they price itineraries. A seat from London to New York may be priced very differently depending on whether it’s one-way, one-half of a round trip, or one stage in a larger, multi-stage journey. Second, however, airlines are sophisticated about maximizing the value of your seat, responding to patterns of peak travel (Thanksgiving, August), when you buy it, and other factors. Despite their complexity, those prices are supposed to be based on published tariffs that anyone can, in theory, calculate for themselves and come up with the same answer.

In 2012, travel and data privacy expert Edward Hasbrouck explained all this as part of documenting and opposing the airlines’ desire to move to personalized pricing. Instead of acting as common carriers and publishing tariffs that apply across the board, airlines using personalized pricing would exploit the information they have about you to charge what you can and would be willing to pay. In 2012, the International Air Transport Association called it “New Distribution Capability”.

This is a much scarier proposition now than it was then; companies have a lot more data they can exploit. In 2012, they might simply have known your flying habits and credit score, while balancing their desire to get the most they can for the seat against the time they have left to sell it. Uber uses similar tactics; it raises the price of a ride – “surge pricing”, or “dynamic pricing” – when demand is high.

Today, an airline might know – or be able to find out – that you are racing against time to see a beloved relative before they die. And why should it stop with airlines? Exploitative possibilities abound.

Uber has already been accused – dubiously – of charging more when riders’ phone batteries are low. Retailers collect shopping histories through loyalty cards and apps; customers who opt out often already pay more.

Some types of price discrimination have long been illegal under consumer protection law. On February 12, US senators Ben Ray Luján (D-NM) and Jeff Merkley (D-OR) introduced the Stop Price Gouging in Grocery Stores Act of 2026 to block it in grocery stores. Other efforts are also underway in the US. This week’s news is calling this “surveillance pricing” or “predatory pricing”, terms that accurately reflect the data collection and surveillance capitalism underpinning it.

This is not really a story about specific technologies, “AI” included; the issue, as Hasbrouck writes, is opacity and discrimination.

The technological pieces are in place to make this all much worse. Not just online, where prices are easily generated on the fly, but also in real-world retailers via wireless connections and electronic shelf tags, which already exist in some stores. A May 2025 study finds that these tags are so far not used to implement surge pricing but to update sale prices and offer discounts – but for how long?

In a May 2025 paper in the International Journal of Research in Marketing, researchers examine the rise of algorithmic pricing – Uber, landlords, retailers – generally and personalized pricing in particular. The authors note that the latter requires market power, and that buyers must have limited ability to exploit price differences. And also: it’s profitable (duh). The authors go on to discuss the role of privacy, data protection, consumer laws, and backlash in curbing unfairness. At Big, which focuses on consolidation and market power, Matt Stoller warns of the potential power of Google’s plan to run pricing strategies for advertisers.

Hasbrouck returned to the subject last year while recapping the latest season of The Amazing Race to explain the airlines’ use of systems that deliver increasingly opaque and unpredictable pricing and the lack of enforcement enabling it.

In a pair of posts, the ACLU’s Jay Stanley follows Hasbrouck’s logic to new levels, laying out a possible future these developments may bring. The desire to wring every last possible bit of profit out of us – we’re talking everyone who sells goods or services now, not just airlines – in Stanley’s view will lead to collecting more and more detailed data. The digital identification infrastructure being built into airports – as planned here in 2013 and shown arriving in 2022 – will ensure there is no countermeasure we can take to escape monitoring and data collection. In Stanley’s projection, stores will be able to demand a digital identification sign-in as a condition of entry.

Again, a bit of this is here already. In the UK, Facewatch, which we first encountered in 2013, is used by some major retailers to identify and bar shoppers who have previously been caught shoplifting or being violent. Recently, the system flagged a shopper, staff ejected the wrong person, and that person found it difficult to prove the mistake.

In other words, all these technologies, wrapped up together, could enable a world not much different from that imagined by Ira Levin in his 1970 book This Perfect Day, where everything anyone wanted to do required permission from a centralized system. Although: the key to making that work was drugging the entire population.

Illustrations: Burmese python in Florida, 2011 (via Wikimedia).


R.I.P. Dave Farber

For decades, Internet pioneer David J. Farber, who died on February 7, aged 91, maintained the Interesting People email list. When he began, it was an unusual choice, as few academics then reached out in such a public way.

Often called the grandfather of the Internet because he invented technologies and taught students whose work became crucial to its development, Farber had a long and distinguished career, which others recount in better detail, including stints at many respected US institutions – Bell Labs, RAND, the University of Delaware, Carnegie Mellon, the University of Pennsylvania – and a presence on too many technical organizations and projects to count. He was a regular at conferences, and one of those, in 1998, is where I met him in person.

Among the obits lauding his professional career: Stevens Institute of Technology, where he earned his first degree (“he created the future”); the Internet Hall of Fame (“a mentor to many generations”); the Electronic Frontier Foundation, where he was a board member; the Japan Times; Tools on Fire; and the New York Times (“gregarious”).

“Gregarious” was crucial to building the Internet, as Times writer Peter Wayner writes. Until 2020 the mailing list, whose membership Wayner estimates at 25,000, was really how I knew him. Farber liked to collect information and knowledge in all forms, including human, though I thought of the list less as interesting *people* than alerts to interesting information. However, the sources of that forwarded information were often surprising. Farber seemed to know *everyone*.

In 2020, covid’s arrival led Farber to create a new platform, a weekly Zoom call, which he of course publicized on his mailing list. Over the last nearly six years, many of these calls featured invited speakers, who might cover any topic from medical privacy to quantum computing to Hong Kong real estate to Brexit. Many regulars were his old friends; many others, like me, were newcomers. He was welcoming to all and eager to ask questions.

He was particularly interested in hearing from Asian speakers since he had moved, at 83, to Japan in 2018 to take up a post at Keio University. The move to a new country and culture seems to have given him a berth in a place where his age was respected and his accomplishments were revered. In the last week of January, he wound up what turned out to be his final semester by sending in the final grades for his class of 37 students. He was a teacher and communicator to the last. R.I.P.

Illustration: Dave Farber on a Zoom call, drawn by his friend of 70-plus years, John De Pillis (used by permission).

Whooped

In 2022, we noted a discussion by Julia Powles and Toby Walsh, summarized here, that warned about the increasing collection of data about elite athletes. The data, they said, does not flow to the sports scientists who really can help athletes perform better and minimize their risk of injury, but to data scientists and data crunchers. The money could be better spent on healthier environments and financial support. Powles, with Jacqueline Alderson, followed up in 2023 with best practice principles.

Cue tennis. At this year’s Australian Open, which concluded on February 1, eventual men’s singles champion Carlos Alcaraz, women’s singles finalist Aryna Sabalenka, and men’s singles semifinalist Jannik Sinner were all told to remove their Whoop tracker devices before playing early-round matches.

An important part of this story is the ridiculously convoluted nature of tennis politics. The International Tennis Federation runs lower-level tournaments and junior events; the Grand Slams are laws unto themselves; the men’s and women’s pro tours are run by the ATP and WTA respectively. That’s seven powers – without the national tennis federations or the national and international anti-doping edifice.

The players were caught between conflicting decisions. In mid-December, the ITF approved Whoop devices in competition as long as haptic feedback is disabled, adding it to its list of permitted “Player Analysis Technology” devices. On a Whoop, “haptic feedback” means the device vibrates to alert you to…something.

The ITF published its detailed examination (see also a wearer’s review by Emilie Lavinia at The Independent). Whoop’s array of sensors can capture heart rate, heart rate variability, sleep stages and performance, recovery, activity strain metrics, blood oxygenation (SpO2), skin temperature, respiratory rate, and blood pressure (some models only), and can perform on-demand ECG and irregular heart rhythm notification (in some regions, where approved). As the ITF notes, data capture takes place automatically whenever the athlete is wearing the device. Players are allowed to charge the device on-court using its battery pack.

So far, so good. Turning off haptic feedback requires the player to disable alarms, turn off the “Strength Trainer” screens, and turn off “Strain Target” if they want to use “Live Activity” mode. The player has to show tournament officials on request that their settings are compliant (or that they’re using the Whoop 3.0, which has no haptic feedback).

The device has no screen; viewing the data requires an Android or iOS device, the app, paired via Bluetooth, and a subscription. This is the Gillette razor, or “free puppy”, business model: the device is free, but you pay for ongoing access to your own data.

So basically, the ITF is allowing players to collect data using the Whoop device for future inspection and discussion with their teams, as long as it doesn’t vibrate on their wrist. The key here is the potential for the device to be a conduit for an automated form of in-match coaching; the definition of PAT devices given in Appendix III of the ITF’s Rules of Tennis (PDF) specifically directs readers to Rule 30, which covers coaching.

For most of tennis’s Open Era – basically since 1968, when the sport went professional – coaching from the stands was banned. The original argument in favor of the ban was that tennis was and is a highly unequal sport. Top players can afford any assistance they want. The lowest-ranked players scrounge, as Irish player Conor Niland, who topped out at 129 in the world, recounts in a Guardian interview and in his excellent 2024 book, The Racket. Until the 1990s, even mid-range players often toured alone. Therefore, allowing coaching from the stands during matches threatened to make the playing field even more unequal.

As money flowed into tennis, the numbers who could afford traveling coaches rose and the no-coaching rule was increasingly flouted. Following a series of trials, the WTA began allowing coaching in 2022. The ATP, the ITF, and the Grand Slams finally began allowing it in 2025. Rule 30 leaves it open for events’ governing bodies to prohibit it.

The Whoop controversy digitizes all this baggage. Rule 30 differentiates between off-court coaching (coaching from the stands), which is permitted, and on-court coaching, which is only permitted during specific team events. Ordinary watches are allowed, but smart watches and mobile phones are banned because they are capable of communications, as Teresa Merklin explains at Fiend at Court.

“We have coaching. Why can’t you have your own data?” former champion Todd Woodbridge asked on TV.

We are still at the beginning of these technologies and their controversies. Merklin digs up a forgotten incident: in 2013, Wimbledon shot down Bethanie Mattek-Sands’ query about wearing Google Glass. Devices will keep shrinking and becoming harder to spot.

Unlike in the past, however, PATs are cheap compared to hiring traveling personnel. While Powles and Walsh were undoubtedly right that data analysis is no substitute for physiologists’ and sports scientists’ expertise, PATs might give lower-ranked players previously unaffordable insight. Given the increasing heat stress at many tournaments, feedback that warns when your body is overstressed seems like a good idea. On the other hand, can you imagine how much bettors would love to have access to this kind of data in real time?


In search of the future Internet

“What kind of Internet do you want [him] to inherit?” “Him” was then measuring his age in weeks.

“Not *this* Internet.”

Now, when said son has grown to measure his life in months, my friend and I are no closer to a positive vision. But, notably, many more people seem to be asking the same kind of question.

In the last week I’ve been to two meetings convened to pull together a cross-section of activists, policy wonks, and techies to talk about building movements to push back against the spread of technological control. The goals of these groups, like my friend’s and mine, remain fuzzy, but they reflect widespread and growing alarm about AI, US entanglement, and our other technological ills.

“When did the future stop being something we plan for and become something done to us?” a friend asked about five years ago. That sense of being held hostage by the inevitability narrative is there, too, in a jumble including job loss, the evils of capitalism, the embedding of companies like Palantir in the health service and soon in policing, the speed of change, widespread loneliness, sustainability, and existential threats. So the overall feel has been part-Occupy, part consciousness-raising session.

Those who did have visions to propose often seemed to be describing things that already exist: trusted, authoritative content (the BBC, Wikipedia); ending capitalism in favor of shared ownership and distributed power (“there’s always someone reinventing communism,” the person next to me muttered); and recreating the impossible dream of micropayments.

One meeting polled us with a list of concerns about AI and asked us to pick the most important. The winner, by far, was “consolidation of power”. This speaks to a wider movement than merely opposing AI or resisting the encroachment of the worst technology surveillance practices into daily life.

Similar discussions have been growing for at least a couple of years. At The Register, long-time open source advocate Liam Proven writes, after attending the Open Source Policy Summit, that Europe is reassessing its technological reliance on US IT services – a dependence that leaves open the possibility of a US president ordering disconnection. The lack of European billion-dollar technology companies leads people to forget the technologies invented here that instead embraced openness: the web, Linux, Raspberry Pi, OpenStreetMap, the Fediverse.

It’s a little alarming, however, that all of this discussion hovers at the application layer. Old-timers who’ve watched the Internet being built up understand that underneath the social media and smartphones lies the physical layer, the infrastructure that is also consolidated and controlled: chips, cables, wireless spectrum. For younger folks, those elements are near-invisible; their adult lives have been dominated by concerns about data. Yet in the last year we’ve been warned of sabotage to undersea cables and chip shortages. There’s more general recognition of the issues surrounding data centers’ demand for power and water.

Even so, there’s a good amount of recognition that all the strands of our present polycrisis are intertwined – see for example the mission statement at Germany’s Cables of Resistance. A broader group, building on the 2024 conference convened by Cristina Caffarra, who called out policy makers at CPDP 2024 for ignoring physical infrastructure, is working on EuroStack to provide a European cloud alternative.

At the political layer, we have Dutch News reporting that Dutch MPs are pushing their government to move away from depending on US technology companies to provide essential infrastructure. In the UK, LibDem and Green MPs are calling on the government to reconsider its contracts with Palantir.

A group called Pull the Plug will lead a “march against the machines” in London on February 28 to demand the UK government create citizens’ assemblies and implement their decisions on AI.

It feels like change is gathering here. In the US, the future still looks much like the past. In a blog post this week, Anthropic, presumably responding to OpenAI’s plan to add advertising to ChatGPT, writes:

But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking…We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.

Compare and contrast to Google founders Sergey Brin and Larry Page in their 1998 Google-founding paper:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers…we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

No wonder Anthropic adds this caution: “Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.” Translation: we may need the money. Of course they’ll frame it as serving the customer better.

Illustrations: (One of) the first Internet ads, for AT&T, on HotWired (via The Internet History Podcast).

Also this week:
At the Plutopia podcast we talk to science fiction writer Ken MacLeod.
