Bedroom eyes

We’ve long known that much of today’s “AI” is humans all the way down. This week underlined the point: in an investigation, Svenska Dagbladet and Göteborgs-Posten learn that Meta’s Ray-Ban smart glasses are capturing intimate details of people’s lives and sending them to Nairobi, Kenya. There, employees at Meta subcontractor Sama label and annotate the data for use in training models. Brings a new meaning to “bedroom eyes”.

This sort of violation is easily imposed on other people without their knowledge or consent. We worry about the police using live facial recognition, but what about being captured by random people on the street? In January’s episode of the TechGrumps podcast, we called the news of Meta’s new product “Return of the Glasshole”.

Two 2019 books, Mary L. Gray and Siddharth Suri’s Ghost Work and Sarah T. Roberts’ Behind the Screen, made it clear that “machine learning” and “AI” depend on poorly-paid unseen laborers. Dataveillance is a stowaway in every “smart” device. But this is a whole new level: the Kenyans report glimpses of bank cards, bedroom intimacy, even bathroom visits. The journalists were able to establish that the glasses’ AI requires a connection to Meta’s servers to answer questions, and there’s no opt-out.

The UK’s Information Commissioner’s Office is investigating, and at Ars Technica Sarah Perez reports that a US lawsuit has been filed.

As the original Swedish report goes on to say, the EU has no adequacy agreement with Kenya. More disturbing is the fact that probably hundreds of people within Meta worked on this without seeing a problem.

In 1974, the Watergate-related revelation that US president Richard Nixon had recorded everything taking place in his office inspired folksinger Bill Steele to write the song The Walls Have Ears (MP3). What struck him particularly was that everyone saw it as unremarkable. “Unfortunately still current,” he commented in his 1977 liner notes. Nearly 50 years later, ditto.

***

A lot of (especially younger) people don’t remember that before 9/11 you could walk into most buildings without showing ID. Many authorities – the EU in particular – have long been unhappy with anonymity online, and one conspiratorial theory about age gating and the digital ID infrastructure being built in many places is that the goal is complete and pervasive identification. In the UK, requiring ID for all Internet access has occasionally popped up as a child safety idea, even though security experts recommend lying about birth dates and other personal data in the interests of self-protection against identity theft.

Now we have generative AI, and along comes a new paper finding that large language models can be used to deanonymize people online at large scale by analyzing profiles and conversations. In one exercise, the researchers matched Hacker News posts to LinkedIn profiles. In another, they linked users across subReddit communities. In a third, they split Reddit profiles in two to mimic pseudonymous posting. Their conclusion: pseudonymity doesn’t offer meaningful protection (though I’m not sure how much it ever did), and preventing this type of attack is difficult. They also suggest platforms should reconsider their data access policies in light of these findings.

It’s hard to imagine most platforms will care much; users have long been expected to assess their own risk. Even smaller communities with a more concerned administration will not be in a position to know how many other services their users access, what they post there, or how it can be cross-linked. The difficulty of remaining anonymous online has been growing ever since 2000, when Latanya Sweeney showed it was possible to identify 87% of the population recorded in census data given just ZIP code, date of birth, and gender. As psychics know, most people don’t really remember what they’ve said and how it can be linked and exploited by someone who’s paying attention. The paper concludes: we need a new threat model for privacy online.

***

The Internet, famously, was designed to keep communications flowing in the face of bomb damage.

Building it required physical links – undersea cables, fiber connections, data centers, routers. For younger folks who have grown up with wifi and mobile phone connections, that physical layer may be invisible. But it matters no less than it did twenty-five years ago, when experts agreed that ten backhoes (among other things) could do more effective damage than bombs.

This week’s horrible, spreading war in the Middle East has seen the closure of the Strait of Hormuz and the Red Sea to commercial traffic. Indranil Ghosh reports at Rest of World that 17 undersea cables pass through the Red Sea alone, and billions, soon trillions, of dollars in US technology investment depends on fiber optic cables running through war zones. There’s been reporting before now about the links between various Middle Eastern countries and Silicon Valley (see for example the recent book Gilded Rage, by Jacob Silverman), but until now much less about the technological interdependence put in jeopardy by the conflict. Ghosh also reports that drones have struck two Amazon Web Services data centers in the UAE and one in Bahrain.

The issue is not so much direct damage to the cables as the impossibility of repairing them as long as access is closed. The Internet, designed with war in mind, is a product of peace.

Illustrations: Monument to Anonymous, by Meredith Bergmann.

Also this week: At the Plutopia podcast, we interview Kate Devlin, who studies human-AI interaction.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Saving no one

In the early 2010s, after “nano” and before “AI”, 3D printing was the technology that was going to change everything. Then it seemed to go quiet except for guns.

“First we will gain control over the shape of physical things. Then we will gain new levels of control over their composition, the materials they’re made of. Finally, we will gain control over the behavior of physical things,” Hod Lipson and Melba Kurman wrote in their 2013 book, Fabricated. As far as I can tell, we’re still pretty much in the era of making physical things that could be made by traditional methods rather than weird new shapes that could *only* be produced by additive manufacturing. More than 15 years after a fellow technology conference attendee excitedly lectured me that 3D printing was going to change everything, its growth remains largely hidden from most of us.

Until this past week, when I attended an event awash in puzzle makers and discovered that it’s been a godsend to them for making not only prototypes but also small runs of copies or published designs, freeing them from having to find space and capital for the kind of quantities required by traditional production. It’s good to see a formerly hyped technology supporting clever and entertaining human invention.

Exploding egg, anyone?

***

In one of the biggest fines in its history, the UK Information Commissioner’s Office has announced it is fining Reddit £14.5 million for failing to put in place an effective age verification mechanism to block under-13s from using the site, contrary to Reddit’s own stated terms of service. The story is somewhat confused by timing: the fine is under data protection law and relates to the period before the arrival of the Online Safety Act, but the OSA’s requirement for age verification brought the changes that sparked the fine. Reddit says it will appeal.

In the UK terms and conditions Reddit announced in June 2025, the company says that “by using the services, you state that…you are at least 13 years old”. But Reddit didn’t require proof, and the ICO says that many under-13s use(d) the platform.

In July, when the Online Safety Act came into effect, Reddit added an age gate of 18 for “mature” content. Unlike many other social media sites that are just giant pools of content sorted by curation or algorithm, Reddit is a large set of distinct subReddits. Each of these communities has its own rules, social norms, and, most important, human moderators. Because of this, it’s comparatively easy to mark a particular subReddit as “for adults only”. After the July change, anyone in the UK wishing to access one of those subReddits was asked to submit a selfie or an image of a government-issued ID in order to prove their age.

The ICO’s findings state that Reddit failed to protect under-13s from accessing content that placed them at risk; that it processed under-13s’ data unlawfully (because they are too young to meaningfully consent); and that a simple statement is not a sufficient age verification mechanism (which is made clear in the OSA).

A Reddit spokesperson told the Guardian: “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”

I take their point; I’d rather skip the “mature” content than bear the privacy risk of uploading personal data to whatever third-party company Reddit is using for age verification. Last July, I decided I would just be a child. (Although: my Reddit account dates to 2015, so they could just do the math.)

Turns out, it may have been a wise decision. Reddit, saying it didn’t want to hold users’ personal data, chose the age verification provider Persona.

Persona deserves a look. Last week, Discord announced it would begin treating all users as teens until they’d been verified, also using Persona. The result, as Ashley Belanger reports at Ars Technica, was a user backlash. First, because the last time Discord tried this, its now-former age verification provider’s pile of 70,000 users’ age check information was hacked.

Second, because The Rage reports that a group of security researchers found a Persona front end exposed to the open Internet on a US government server. On examination, that code shows that Persona performs 269 different verification checks and scours the Internet and government sources using your selfie and facial recognition. Discord has now announced it will delay introducing age verification – and won’t be using Persona after an apparently unsatisfactory trial in the UK last year. In a blog posting, Discord says that, like Reddit, it does not want to know its users’ identification details. It is adding more verification options.

If the world had already had a set of established trustworthy companies that specialized in age verification when the OSA came into effect, then it would make sense to turn to them to provide that service. But we aren’t in that situation. Instead, although providers have been working for more than a decade to build such systems, their deployment at scale is new.

Part of keeping children – and the rest of us – safe is protecting security and privacy – and child safety campaigners’ refusal to accept this has been an issue for decades. Creating new privacy risks doesn’t keep anyone safer – including children.

Illustrations: Six-panel early 1970s cartoon strip, “What the User Wanted”.

Information wants to be surveilled

It has long been the case that the person sitting next to you on an airplane may have paid a very different price for their seat than you did. For two reasons. First, airlines don’t price seats – they price itineraries. A seat from London to New York may be priced very differently depending on whether it’s one-way, one-half of a round trip, or one stage in a larger, multi-stage journey. Second, however, airlines are sophisticated about maximizing the value of your seat, responding to patterns of peak travel (Thanksgiving, August), when you buy it, and other factors. Despite their complexity, those prices are supposed to be based on published tariffs that anyone can, in theory, calculate for themselves and come up with the same answer.

In 2012, travel and data privacy expert Edward Hasbrouck explained all this as part of documenting and opposing the airlines’ desire to move to personalized pricing. Instead of acting as common carriers and publishing tariffs that apply across the board, with personalized pricing the airlines would use the information they have about you to charge what you can and would be willing to pay. In 2012, the International Air Transportation Association called it “New Distribution Capability”.

This is a much scarier proposition now than it was then; companies have a lot more data they can exploit. In 2012, they might simply have known your flying habits and credit score, while balancing their desire to get the most they can for the seat against the time they have left to sell it. Uber uses similar tactics; it raises the price of a ride – “surge pricing”, or “dynamic pricing” – when demand is high.

Today, an airline might know – or be able to find out – that you are racing against time to see a beloved relative before they die. And why should it stop with airlines? Exploitative possibilities abound.

Uber has already been dubiously accused of charging more when riders’ phone batteries are low. Retailers collect shopping histories through loyalty cards and apps; customers who opt out often already pay more.

Some types of price discrimination have long been illegal under consumer protection law. On February 12, US senators Ben Ray Luján (D-NM) and Jeff Merkley (D-OR) introduced the Stop Price Gouging in Grocery Stores Act of 2026 to block it in grocery stores. Other efforts are also underway in the US. This week’s news is calling this “surveillance pricing” or “predatory pricing”, which accurately reflects the data collection and surveillance capitalism underpinning it.

This is not really a story about specific technologies, although “AI” makes it easier; the issue is, as Hasbrouck writes, opacity and discrimination.

The technological pieces are in place to make this all much worse. Not just online, where prices are easily generated on the fly, but also in real-world retailers via wireless connections and electronic shelf tags, which already exist in some stores. A May 2025 study finds that these tags are so far not used to implement surge pricing but to update sale prices and offer discounts – but for how long?

In a May 2025 paper in the International Journal of Research in Marketing, researchers examine the rise of algorithmic pricing – Uber, landlords, retailers – generally and personalized pricing in particular. The authors note that the latter requires market power, and that buyers must have limited ability to exploit price differences. And also: it’s profitable (duh). The authors go on to discuss the role of privacy, data protection, consumer laws, and backlash in curbing unfairness. At Big, which focuses on consolidation and market power, Matt Stoller warns of the potential power of Google’s plan to run pricing strategies for advertisers.

Hasbrouck returned to the subject last year while recapping the latest season of The Amazing Race to explain the airlines’ use of systems that deliver increasingly opaque and unpredictable pricing and the lack of enforcement enabling it.

In a pair of posts, the ACLU’s Jay Stanley follows Hasbrouck’s logic to new levels, laying out a possible future these developments may bring. The desire to wring every last possible bit of profit out of us – we’re talking everyone who sells goods or services now, not just airlines – in Stanley’s view will lead to collecting more and more detailed data. The digital identification infrastructure being built into airports – as planned here in 2013 and shown arriving in 2022 – will ensure there is no countermeasure we can take to escape monitoring and data collection. In Stanley’s projection, stores will be able to demand a digital identification sign-in as a condition of entry.

Again, a bit of this is here already. In the UK, Facewatch, which we first encountered in 2013, is used by some major retailers to identify and bar shoppers who have previously been caught shoplifting or being violent. Recently, the system flagged the wrong person, whom staff ousted, and who then found it difficult to prove the mistake.

In other words, all these technologies, wrapped up together, could enable a world not much different from that imagined by Ira Levin in his 1970 book This Perfect Day, where everything anyone wanted to do required permission from a centralized system. Although: the key to making that work was drugging the entire population.

Illustrations: Burmese python in Florida, 2011 (via Wikimedia).

Slop

Sometimes it doesn’t pay to be first. iRobot, the maker of the Roomba, has filed for Chapter 11 bankruptcy protection and been acquired by Picea, one of its Chinese suppliers, Lauren Almeida reports at the Guardian. The company’s value has cratered since 2021.

Given the wild enthusiasm that greeted the Roomba’s release in 2002, it seems incredible. Years before then, I recall an event at which a speaker whose identity I don’t remember said that ever since he had mentioned the possibility of a robot vacuum, sometime back in the 1960s, he’d gotten thousands of letters asking when it would be ready. There was definitely customer demand. It helped that the Roomba itself was kind of cute as it banged randomly into furniture. People named them, and took them on vacation. But, as often happens, the Roomba’s success attracted lower-cost competitors, and the first mover failed to keep up.

I got one in 2003. After a great few months, I realized that Roombas are not compatible with long hair, which ties them into knots that take longer to cut out than vacuuming. I gave it away within a year and haven’t tried again.

At Mashable, Leah Stodart warns that although the Roombas people already have will continue to work “for now”, users can’t be confident that this state of affairs will continue. Like so many other things that used to be things we owned and are now things we subscribe to (but still think we “buy”), newer-model Roombas are controlled by an app that the manufacturer can change or discontinue at will. She calls it “unplanned obsolescence”. Her advice not to buy a new one this year is sound from the consumer’s point of view, but hardly likely to help the company survive.

***

If generative AI is so great, why is everyone forcing it on us? The latest example, Luke James reports at Tom’s Hardware, is LG “smart” TVs, whose users woke up the other day to find that a new update had installed “Copilot: Your AI Companion” without asking permission and that there was no option to remove it. The most you can do to disable it, James says, is keep your TV disconnected from the Internet.

There are of course many more, the automated summaries popping up everywhere being the most obvious. Then, Matthew Gault reports at 404 Media, a Discord moderator and an Anthropic executive added Anthropic’s Claude chatbot to a community for queer gamers, who had voted to restrict Claude to its own channel. Result: major exodus. Duh.

And, of course, as Lance Ulanoff reminds us at TechRadar, there is “AI slop” everywhere – music playlists, YouTube videos, ebooks – threatening people’s livelihoods even though, as Cory Doctorow has written, “AI can’t do your job. But an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job.” For a while, anyway: Microsoft is halving its sales targets for AI.

And thus we get “slop” as the word of the year, per Merriam-Webster. Any time companies are this intent on foisting something on us – chatbots, ads – you have to know that they’re intent on favoring their interests, not ours.

***

Last week, Customs and Border Protection published a notice in the Federal Register proposing new rules for foreigners traveling to the US on an ESTA (“Electronic System for Travel Authorization”) as part of the visa waiver program. It has drawn a lot of discussion in the UK, which is one of the 42 affected countries. Under the new rules, applicants must install CBP’s app, into which they must submit a massive load of “high-value” personal information. The list is long, allows for a so-far-imaginary future of DNA sampling, and expects you to be able to give five years’ worth of family members’ residences, phone numbers, and places of birth, and all the email addresses you’ve used for ten years. CBP thinks the average applicant should be able to complete it on their smartphone in 22 minutes. I think it would take hours of painful, resentful typing on a stupid touch keyboard, and even then I doubt I could fill it out with any certainty that the information I supplied was complete or accurate. Data collection at this scale makes it easy to find an error to use as an excuse to deny entry to or deport someone you want to get rid of. As Edward Hasbrouck writes at Papers, Please, “Welcome to the 2026 World Cup”.

“They have to be planning to use AI on all that data,” a friend commented last week. Probably – to build social graphs and find connections deemed suspicious. Privacy International predicts that the masses of data being demanded will in fact enable the AI tools necessary to implement automated decision making, and calls the proposals disproportionate for “a family’s visit to Disney World”.

One of the problems Hasbrouck highlights while opposing this level of suspicionless data collection is that CBP has not provided any way for would-be respondents to the Federal Register notice to examine the app’s source code. What other data might it be collecting?

As Hasbrouck adds in a follow-up, the rules the US imposes on visitors are often adopted by other countries as requirements for US travelers. In this game of ping-pong escalation, no one wins.

ID is football

On Wednesday, Australia woke up to its new social media ban for under-16s. As Ange Lavoipierre explains at ABC News, the ban isn’t total. Under-16s are barred from owning their own accounts on a list of big platforms – Facebook, Instagram, Threads, Twitch, YouTube, TikTok, X, Reddit, Kick, and Snapchat – but not barred from *using* those platforms. So, inevitably, there are already reports of errors and kids figuring out how to bypass the rules in order to stay in touch with their friends. The Washington Post’s report contains this contradiction: “Numerous recent polls indicate that a solid majority of Australians support the ban, but that young respondents largely don’t plan to comply.”

Helpfully, ABC News reported a couple of months ago that researchers, led by the UK’s Age Check Certification Scheme, have tested age assurance vendors and found that “old man” masks and other cheap party costumes apparently work to fool age estimation algorithms.

Edge cases are appearing, such as the country’s teen Olympians – skateboarders and triathletes – for whom the ban disrupts years of building fan communities, potentially also disrupting some of their funding.

Meanwhile, the BBC reports that a pair of 15-year-olds, backed by the Digital Freedom Project, are challenging the ban in court. Josh Taylor reports at the Guardian that Reddit is also suing.

At Nature, Rachel Fieldhouse and Mohana Basu write that scientists will independently assess the ban’s wider effects. This is good; defining “success” solely by the number of blocks bypassed substitutes an easy measure for the long-term impacts, which are diffuse, difficult to measure, and subject to many confounding variables.

But we know this: the ratchet effect applies. I first encountered it in the context of alternative medicine. Chronic illnesses have cycles; they improve, plateau, get worse. Apply a harmless remedy. If the patient gets better, the remedy is working. If it stays the same, the remedy has halted the decline. If it gets worse, the remedy came too late. In all cases, the answer is more of the remedy. So with online safety. In child safety, the answer is always that more restrictions are needed. In the UK, where the Online Safety Act has been in force for mere months, three members of the House of Lords have already proposed a similar ban as an amendment to the Children’s Wellbeing and Schools Bill.

***

Keir Starmer’s vague plan for a mandatory digital ID is back. This week saw a Westminster Hall debate, as required after nearly three million people signed an online petition opposing it.

At Computer Weekly, Lis Evenstad reports that MPs across all parties attacked the plan, making familiar points: the target such a scheme could create for criminals, the change it would bring to the relationship between citizens and the state, and the potential threat to civil liberties. They also attacked its absence from Labour’s election manifesto; last month, Fiona Brown reported at The National that, on Times Radio, Palantir’s UK head Louis Mosley said the company would not bid on contracts for the digital ID because it hasn’t had “a clear, resounding ballot box”.

Also a potential issue is cost, which the Office of Budget Responsibility recently estimated at £1.8 billion. According to SA Mathieson at The Register, the government has rejected the figure but declined to provide an alternative estimate until its soon-to-be-launched consultation has been completed.

Also hovering in the background, weirdly ignored, is the digital identity and attributes trust framework, which has been in progress for at least the last several years.

Beyond that, we still have no real details. For this reason, in a panel I moderated at this week’s UK Internet Governance Forum, I asked panelists – Dave Birch, Karla Prudencio, and Mirca Madianou – to try to produce some principles for what digital ID should and should not be. Birch in particular has often said he thinks Britain as a sovereign state in the 21st century sorely needs a digital identity infrastructure – by which he *doesn’t* mean anything like the traditional “ID card” so many are talking about. As we all agree, technology has changed a lot since 2005, when this was last attempted; since then have come blockchain, smartphones, social media, machine learning, and generative AI. So we agree that far: anything the government proposes now really should look very different from that last attempt.

Here are the principles our discussion came up with:
– Design for edge cases, as a system that works for them will work for everyone.
– Design for plural identities.
– Don’t design the system as a hostile environment.
– Don’t create a target for hackers.
– Understand the real purpose.
– Identification is not authentication.
– Understand public-private partnerships as three-way relationships with users.
– Design to build public trust.

And one last thought:
– Sometimes, ID is football.

That last is from Madianou’s field work in Karen refugee camps along the border between Thailand and Myanmar. One teenaged boy really wanted an ID card so that he could leave the camp to play football in a nearby village and return safely without being arrested. It’s a reminder: identification can mean many different things in different situations.

Illustrations: The Mae La refugee camp in Thailand (by Tayzar44 at Wikimedia).

Also this week: TechGrumps 3.34 – ChatGPT is not my wingman.

A road not taken

Nearly 20 years ago, I attended a conference on road pricing. The piece I wrote about it (PDF) for Infosecurity magazine suggests it was in late 2007, three years after transport secretary Alistair Darling proposed bringing in a national road pricing scheme. The idea represented a profound change; until a few years earlier, congestion had always led to building more roads. In 2003, however, London mayor Ken Livingstone implemented instead the congestion charge – and both traffic and pollution levels dropped.

So this conference explored the idea that road pricing would cut traffic to match road capacity, taking us off the vicious spiral of increasing road capacity and watching traffic rise to choke it. Darling’s proposal, which followed a 2004 feasibility study, was for a satellite tracking scheme. In 2007, however, prime minister Tony Blair effectively dropped the idea after 1.8 million people signed a petition opposing it.

This week’s announcement of road pricing for electric vehicles is rather differently motivated, but reawakened my memory of the 2008 discussion. Roads must be paid for somehow, and, as the Institute for Fiscal Studies foresaw in 2012, the rise of electric vehicles inevitably eats away at revenues from fuel taxes. EVs have many benefits: they can be powered without fossil fuels; their engines emit no carbon or other pollutants; and they are quieter. However, they weigh 10% to 30% more than internal combustion engine vehicles, and tire wear remains a significant pollutant.

Back in 2005 there were three main contenders for per-mile road pricing: automated number/license plate readers; tag and beacon; and time-distance-place. At the time, versions of these were already in use: the first was in place to administer London’s congestion charge; the second, effectively an update to paying at the tollbooth, was in place on turnpikes in the American northeast and in the UK at Dartford Crossing; the third was being used in Germany’s HGV system, which collects tolls for the kilometers driven on the country’s autobahns. In a 2007 paper, Cambridge researchers David N. Cottingham, Alastair Beresford, and Robert K. Harle analyzed the technologies available.

Whatever you call them, limited-access highways – autobahns, motorways, interstates, thruways – are a relatively simple problem because there are relatively few entry and exit points. Tracking, as transponders read by automated tollbooths have made possible, remains a privacy concern. Such a scheme was deemed unworkable for London, where TfL counted 227 entry points to the most congested area, and barriers would simply create new chokepoints. For this reason, and also because it estimated that 80% of cars entering the congestion zone are infrequent users, TfL opted for a system of cameras that read license plates on the fly and an automated system to send out penalty notices if someone hasn’t paid. This system also seems difficult to imagine scaling to a national level; every road, street, and back alley would have to have ANPR cameras. In the US, where Flock cameras are collecting ANPR data at scale, law enforcement and immigration authorities are already exploiting it in anti-democratic ways, as 404 Media reports.

In 2008, TDP, a much more likely approach for a nationwide system of per-mile pricing, would have required a box to be installed in every vehicle to track it, likely via GPS, and report time and location data via mobile networks for use to calculate what the owner should pay. No one was then sure whether road users would accept having tags in their vehicles or be willing to pay the considerable expense; as I seem to have written in that 2008 Infosecurity article, “‘We’re going to change your behavior and charge you for the privilege’ isn’t much of a sales pitch.” But such a system would enable proportionately charging people based on their actual road use.

If we were updating that discussion, parts would be unchanged. Congestion charge-style ANPR cameras everywhere will be no more feasible now than they were then. Germany’s system for motorways will similarly not be feasible for smaller roads and within cities. TDP, however…

Here in 2025, most people are already carrying smart phones with GPS just part of the package. So there could be a choice: buy a box that is irretrievably embedded in the vehicle or download a TDP app that’s somehow tied to and paired with the car, perhaps via its electronic key, so that it won’t start unless the app-car link is enabled. (Fun for anyone whose battery dies in the course of an evening out.) In addition, cars already collect all sorts of data and send it to their manufacturers. So it’s also possible to imagine a government requiring manufacturers active in the UK to transmit time and location data to a specified authority.

Obviously, the privacy implications of such a system would be staggering. Law enforcement would demand access. Businesses whose fleet patterns are commercially sensitive would hate it. And the UK’s successive governments have shown themselves to be highly partial to centralized databases that are built for one purpose and then are exploited in other ways. For this reason, Beresford’s idea in 2008 was for a privacy-protecting decentralized system using low-cost equipment that would allow cars to identify neighboring non-payers and report only those.

The good news is that the details we have so far of the government’s proposals suggest something far simpler: report the odometer reading at each year’s annual vehicle check and multiply by the per-mile charge. So unusual these days to see a government propose something so simple and cheap. Whether it’s a good idea to discourage the shift to EVs at this particular time is a different question.
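The arithmetic in that simpler scheme really is about as basic as billing gets – the 3p/mile rate below is purely illustrative, not a figure from the proposals:

```python
def annual_mileage_charge(odometer_now, odometer_last, pence_per_mile):
    """Charge, in pounds, for miles driven since the last annual vehicle check."""
    miles = odometer_now - odometer_last
    if miles < 0:
        raise ValueError("odometer cannot run backwards")
    return miles * pence_per_mile / 100  # pence -> pounds

# Illustrative only: 8,000 miles driven since last year's check, at 3p/mile.
print(annual_mileage_charge(52_000, 44_000, 3))  # 240.0
```

Note what it doesn’t need: no GPS track, no location history, just two numbers a year – which is exactly why it is so much less invasive than TDP.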

Illustrations: A fork in a road (via Wikimedia).

At Plutopia, we interview Bruce Schneier about his new book, Rewiring Democracy, which examines the good and bad of what AI may bring to democracy.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Simplification

We were warned this was coming at this year’s Computers, Privacy, and Data Protection, and now it’s really here. The data protection NGO Noyb reports that a leaked internal draft (PDF) of the European Commission’s Digital Omnibus threatens to undermine the architecture the EU has been building around data protection, AI, cybersecurity, and privacy generally. At The Register, Connor Jones summarizes the changes; Noyb has detail.

The EU’s workings are, as always, somewhat inscrutable to outsiders. Noyb explains that the omnibus tool is intended to allow multiple laws to be updated simultaneously to “improve the quality of the law and streamline paperwork obligations”. In this case, Noyb argues that the European Commission is abusing this option to fast-track far more substantial and contentious changes that should be subject to impact assessments and feedback from other EU institutions, as well as legal services.

If the move succeeds – the final draft will be presented on November 19 – Noyb believes it could remove fundamental rights to privacy and data protection that Europeans have been building for more than 30 years. Noyb, European Digital Rights, and the Irish Council for Civil Liberties have sent an open letter of objection to the Commission. The basic argument: this isn’t “simplification” but deregulation. The package would still have to be accepted by the European Parliament and a majority of EU member states.

As far as I can recall, business has never much liked data protection. In the early 1990s, when the first laws were being written, I remember being told data protection was a “tax on small business”. Privacy advocates instead see data protection as a way of redressing the power imbalance between large organizations and individuals.

By 1998, when data protection law was implemented in all EU member states, US companies were publicly insisting that the US didn’t need a privacy law in order to be in compliance. Companies could use corporate policies and sectoral laws to provide a “layered approach” that would be just as protective. When I wrote about this for Scientific American in 1999, privacy advocates in the UK predicted a trade war over this, calling it a failure to understand that you can’t cut a deal with a fundamental right – like the First Amendment.

In early 2013, it looked entirely possible that the period of negotiations over data protection reform would end with rollback. GDPR was the focus of intense lobbying efforts. There were, literally, 4,000 proposed amendments, so many that I recall being shown software written to manage and understand them all.

And then…Snowden. His revelations of government spying shifted the mood noticeably, and, under his shadow, when GDPR was finally adopted in 2016 and came into force in 2018, it expanded citizens’ rights and increased penalties for non-compliance. Since then, other countries around the world have used GDPR as a model, including China and several US states.

Those few states aside, data protection law has never been popular at the US federal level, and the pile of EU law growing around it – the Digital Services Act, the Digital Markets Act, and the AI Act – is particularly unwelcome to the current administration, which sees it as a deliberate attack on US technology companies.

In the UK, the Data (Use and Access) Act, which passed in June, also weakened some data protection provisions. It will be implemented over the year to June 2026.

At its blog, the Open Rights Group argues that some aspects of the DUAA rest on the claim that innovation, economic growth, and public security are harmed by data protection law, a dubious premise.

Until this leak, it seemed possible that the DUAA would break Britain’s adequacy decision and remove the UK from the list of countries to which the EU allows data transfers. The rule is that to qualify a country must have legal protections equivalent to those of the EU. It would be the wrong way round if instead of the UK enhancing its law to match the EU, the EU weakened its law to match the UK.

There’s a whole secondary issue here, which is that a law is only useful if it’s enforced. Noyb actively brings legal cases to force enforcement in the EU. In the UK, privacy advocates, like ORG, have long complained that the Information Commissioner’s Office is increasingly quiescent.

Many of the EU’s changes appear to be aimed at making it easier for AI companies to exploit personal data to develop models. It’s hard to know where that will end, given that every company is sprinkling “AI” over itself in order to sound exciting and new (until the next thing comes along). If this package comes into force, you have to think data protection law will increasingly apply only to small businesses running older technology that can’t be massaged to qualify for an exemption.

I blame this willingness to undermine fundamental rights at least partly on the fantasy of the “AI race”. This is nation-state-level FOMO. What race? What’s the end point? What does it mean to “win”? Why the AI race, and not the net-zero race, the renewables race, or the sustainability race? All of those would produce tangible benefits and solve known problems of long standing and existential impact.

Illustrations: A drunk parrot in a Putney garden (photo by Simon Bisson; used by permission).


The panopticon in your home

In a series of stories, Lisa O’Carroll at the Guardian finds that His Majesty’s Revenue and Customs has had its hand in the cookie jar of airline passenger records. In hot pursuit of its goal of finding £350 million in benefit fraud, it’s been scouring these records to find people who have left the country for more than a month and not returned, so are no longer eligible.

In one case, a family was turned away at the gate when one of the children had an epileptic seizure; their child benefit was stopped because they had “emigrated” though they’d never left. A similar accusation was leveled at a woman who booked a flight to Oslo even though she never checked in or flew.

These families can provide documentation proving they remained in the UK, but as one points out, the onus is on them to clean up an error they didn’t make. There are many others. Many simply traveled and returned by different routes. As of November 1, HMRC had reinstated 1,979 of the families affected but sticks to its belief that the rest have been correctly identified. HMRC also says it will check its PAYE records first for evidence someone is still here and working. This would help, but it’s not the only issue.

It’s unclear whether HMRC has the right to use this data in this way. The Guardian reports that the Information Commissioner’s Office, the data protection authority, has contacted HMRC to ask questions.

For privacy advocates, the case is disturbing. It is a clear example of the way data can mislead when it’s moved to a new context. For the people involved, it’s a hostage situation: they had no choice about the data siphoned from airlines to the Home Office, nor about the financial information held by HMRC, and no control over what happens next.

The essayist and former software engineer Ellen Ullman warned 20 years ago that she had never seen an owner of multiple databases who didn’t want to link them together. So this sort of “sharing” is happening all over the place.

In the US, Pro Publica reported this week that individual states have begun using a system provided by the Department of Homeland Security to check their voter rolls for non-citizens that has incorporated information from the Social Security Administration. Here again, data collected by one agency for one purpose is being shared with another for an entirely different one.

In both cases, data is being used for a purpose that wasn’t envisioned when it was collected. An airline collecting booking data isn’t checking it for errors or omissions that might cost a passenger their benefits. Similarly, the Social Security Administration isn’t normally concerned with whether you’re a citizen for voting purposes, just whether you qualify for one or another program – as it should be. Both changes of use fail to recognize the change in the impact of errors that goes along with them, especially at national scale.

I assume that in this age of AI-for-government-efficiency the goal for the future is to automate these systems even further while pulling in more sources of data.

Privacy advocates are used to encountering pushback that takes this form: “They know everything about me anyway.” I would dispute that. “They” certainly *can* collect a lot of uncorrelated data points about you if “they” aggregate the many available sources of data. But until recently, doing that was effortful enough that it didn’t happen unless you were suspected of something. Now, we’re talking sharing data and mining at scale as a matter of routine.

***

One of the most important lessons learned from 14 years of We, Robot conferences is that when someone shows a video clip of a robot doing something one should always ask how much it’s been speeded up.

This probably matters less in a home robot doing chores, as long as you don’t have to supervise. Leave a robot to fold laundry, and it can’t possibly matter if it takes all night.

From reports by Erik Kain at Forbes and Nilesh Christopher at the LA Times, it appears that 1X’s new Neo robot is indeed slow, even in its promotional video clips. The company says it has layers of security to prevent it from turning “murderous”, which seems an absurd bit of customer reassurance. However, 1X also calls it “lightweight”. The Neo is five foot six and weighs 66 pounds (30 kilos), which seems quite enough to hurt someone if it falls on them, even with padding. Granting the contributory design issues, Lime bikes weigh 50 pounds and break people’s legs. 1X’s website shows the Neo hugged by an avuncular taller man; imagine it instead with a five-foot 90-year-old woman.

Can we ask about hacking risks? And what happens if, like so many others, 1X shuts it down?

More incredibly, in buying one you must agree to allow a remote human operator to drive the robot, peering into your home along the way. This is close to the original design of the panopticon, which chilled because those under surveillance never know whether they are being watched.

And it can be yours for the low, low price of $20,000 or $500 a month.

Illustrations: Jeremy Bentham’s original drawing of his design for the panopticon (via Wikimedia).

Also this week:
The Plutopia podcast interviews Sophie Nightingale on her research into deepfakes and the future of disinformation.
TechGrumps 3.33 podcast, The Final Step is Removing the Consumer, discusses AI web browsers, the Amazon outage, and the Python Foundation and DEI.


The gated web

What is an AI browser?

Or, in a more accurate representation of my mental reaction, *WTF* is an AI browser?

In wondering about this, I’m clearly behind the times. Tech sites are already doing roundups of their chosen “best” ones. At Mashable, Cecily Mauran compares “top” AI browsers because “The AI browser wars hath begun.”

Is the war that no one wants these things but they’re being forced on us anyway? Because otherwise…it’s just a bunch of heavily financed companies trying to own a market they think will be worth billions.

In Tim Berners-Lee’s original version, the web was meant to simplify sharing information. A key element was giving users control over presentation. Then came designers, who hated that idea. That battle between users’ preferences and browser makers’ interests continues to this day. What most people mean by the browser wars, though, was the late-1990s fight between Microsoft and Netscape, or the later burst of competition around smartphones. A big concern has long been market domination: a monopoly could seek to slowly close down the web by creating proprietary additions to the open standards and lock all others out.

Mauran, citing Casey Newton’s Platformer newsletter, suggests that Google specifically has exploited its browser to increase search use (and therefore ad revenues), partly by merging the address and search bars. I know I’m not typical, but for me search remains a separate activity. Most of the time I’m following a link or scanning familiar sites. Yes, when my browser history fills in a URL, I guess you could say I’m searching the browser history, but to me the better analogy is scanning an array of daily newspapers. Many people *also* use their browser to access cloud-based productivity software and email or play online games, none of which is search.

Nor are chatbots, since they don’t actually *find* information; they apply mathematics and statistics to a load of ingested text and create sentences by predicting the most likely next word. This is why Emily Bender and Alex Hanna call them “synthetic text extruding machines” in their book, The AI Con. I am in the business of trying to make sense of the impact of fast-moving technology, or at least of documenting the conflicts it creates. The only chatbot I’ve found of any value for this – or for personal needs such as a tech issue – is Perplexity, and that’s because it cites (or can be ordered to cite) sources one can check. There is every difference in the world between just wanting an answer and wanting the background from which to derive an answer that may possibly be new.
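To make the “predicting the most likely next word” point concrete, here is a toy bigram version of the idea – real models use vastly more context and billions of learned weights rather than raw counts, but the extrusion principle is the same, and the names and corpus here are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for each word, which words follow it and how often.
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def extrude(following, start, length=5):
    # Always pick the single most likely next word: fluent-sounding, not factual.
    out = [start]
    for _ in range(length):
        nxt = following.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran on the mat"
model = train_bigrams(corpus)
print(extrude(model, "the", 4))  # e.g. "the cat sat on the"
```

Nothing in this process looks anything up or checks anything against the world, which is the Bender/Hanna point: the output is statistically plausible text, not retrieved information.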

In any event, Newton’s take is that a company that’s serious about search must build its own browser. Therefore: AI companies are building them. Hence these roundups. Mauran’s pitch: “Imagine a browser that acts as your research assistant, plans trips, sends emails, and schedules meetings. As AI models become more advanced, they’re capable of autonomously handling more complex tasks on your behalf. For tech companies, the browser is the perfect medium for realizing this vision.”

OK, I can see exactly what it does for tech companies. It gives them control over what information you can access, how you use it, and whom you pay, and how much, for the services its agent selects (plus it collects a commission).

I can also see what it does for employers. My browser agent can call your browser agent and negotiate a meeting plan. Then they attend the meeting on our behalf and send us both summaries, which they ingest and file, later forwarding them to our bosses’ agents to verify we were at work that day. In between, they can summarize emails, and decide which ones we need to see. (As Charles Arthur quipped at The Overspill, “Could they…send fewer emails?”)

Remember when part of the excitement of the Internet was the direct access it gave to people who were formerly inaccessible? Now, we appear to be building systems to ensure that every human is their own gated community.

What part of this is good for users? If you are fortunate enough not to care about the price of anything, maybe it’s great to replace your personal assistant with an agentic web browser. Most of us have struggled along doing things for ourselves and each other. At Cybernews, Mayank Sharma warns that AI browsers’ intentional preemption of efforts to browse for yourself, filtering out anything they deem “irrelevant”, threatens the open web. Newton quantifies the drop in traffic news publishers are already seeing from generative AI. Will we soon be complaining about information underload?

At Pluralistic last year, Cory Doctorow wrote about the importance of faithful agents: software that is loyal to us rather than its maker. He particularly focused on browsers, which have gone from that initial vision of user control to become software that spies on us and reports home. In Mauran’s piece, Perplexity openly hopes to use chats to build user profiles and eventually show ads.

The good news, such as it is, is that from what I’ve read in writing this, most of these companies hope to charge for these browsers – AI as a subscription service. So avoiding them is also cheaper. Double win.

Illustrations: John Tenniel’s drawing of Davy Jones, sitting on his locker (via Wikimedia; published in Punch, 1892, with the caption, “AHA! SO LONG AS THEY STICK TO THEM OLD CHARTS, NO FEAR O’ MY LOCKER BEIN’ EMPTY!!”).


The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the archaic bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer has been in India this week, taking the opportunity to study its biometric ID system, Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. Trying to get it through, Blair moved on to preventing illegal working and curbing identity theft. For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (referring to the postmaster Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01e04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.
