ID is football

On Wednesday, Australia woke up to its new social media ban for under-16s. As Ange Lavoipierre explains at ABC News, the ban isn’t total. Under-16s are barred from owning their own accounts on a list of big platforms – Facebook, Instagram, Threads, Twitch, YouTube, TikTok, X, Reddit, Kick, and Snapchat – but not barred from *using* those platforms. So, inevitably, there are already reports of errors and kids figuring out how to bypass the rules in order to stay in touch with their friends. The Washington Post’s report contains this contradiction: “Numerous recent polls indicate that a solid majority of Australians support the ban, but that young respondents largely don’t plan to comply.”

Helpfully, ABC News reported a couple of months ago that researchers, led by the UK’s Age Check Certification Scheme, tested age assurance vendors and found that “old man” masks and other cheap party costumes apparently work to fool age estimation algorithms.

Edge cases are appearing, such as the country’s teen Olympians – skateboarders and triathletes – for whom the ban disrupts years of building fan communities, potentially also disrupting some of their funding.

Meanwhile, the BBC reports that a pair of 15-year-olds, backed by the Digital Freedom Project, are challenging the ban in court. Josh Taylor reports at the Guardian that Reddit is also suing.

At Nature, Rachel Fieldhouse and Mohana Basu write that the ban’s wider effects will be independently assessed by scientists. This is good; defining “success” solely by the number of blocks bypassed substitutes an easy measure for the long-term impacts, which are diffuse, difficult to measure, and subject to many confounding variables.

But we know this: the ratchet effect applies. I first encountered it in the context of alternative medicine. Chronic illnesses have cycles; they improve, plateau, get worse. Apply a harmless remedy. If the patient gets better, the remedy is working. If it stays the same, the remedy has halted the decline. If it gets worse, the remedy came too late. In all cases, the answer is more of the remedy. So with online safety. In child safety, the answer is always that more restrictions are needed. In the UK, where the Online Safety Act has been in force for mere months, three members of the House of Lords have already proposed a similar ban as an amendment to the Children’s Wellbeing and Schools Bill.

***

Keir Starmer’s vague plan for a mandatory digital ID is back. This week saw a Westminster Hall debate, as required after nearly three million people signed an online petition opposing it.

At Computer Weekly, Lis Evenstad reports that MPs across all parties attacked the plan, making familiar points: the target such a scheme could create for criminals, the change it would bring to the relationship between citizens and the state, and the potential threat to civil liberties. They also attacked its absence from Labour’s election manifesto; last month, Fiona Brown reported at The National that Palantir’s UK head, Louis Mosley, said on Times Radio that the company would not bid on contracts for the digital ID because it hasn’t had “a clear, resounding ballot box”.

Also a potential issue is cost, which the Office for Budget Responsibility recently estimated at £1.8 billion. According to SA Mathieson at The Register, the government has rejected the figure but declined to provide an alternative estimate until its soon-to-be-launched consultation has been completed.

Also hovering in the background, weirdly ignored, is the digital identity and attributes trust framework, which has been in progress for the last several years at least.

Beyond that, we still have no real details. For this reason, in a panel I moderated at this week’s UK Internet Governance Forum, I asked panelists – Dave Birch, Karla Prudencio, and Mirca Madianou – to try to produce some principles for what digital ID should and should not be. Birch in particular has often said he thinks Britain as a sovereign state in the 21st century sorely needs a digital identity infrastructure – by which he *doesn’t* mean anything like the traditional “ID card” so many are talking about. As we all agreed, technology has changed a lot since 2005, when this was last attempted; since then we’ve seen blockchain, smartphones, social media, machine learning, and generative AI. So we agree that far: anything the government proposes really should look very different from the last attempt.

Here are the principles our discussion came up with:
– Design for edge cases, as a system that works for them will work for everyone.
– Design for plural identities.
– Don’t design the system as a hostile environment.
– Don’t create a target for hackers.
– Understand the real purpose.
– Identification is not authentication.
– Understand public-private partnerships as three-way relationships with users.
– Design to build public trust.

And one last thought:
– Sometimes, ID is football.

That last is from Madianou’s field work in Karen refugee camps along the border between Thailand and Myanmar. One teenaged boy badly wanted an ID card so that he could leave the camp to play football in a nearby village and return safely without being arrested. It’s a reminder: identification can mean many different things in different situations.

Illustrations: The Mae La refugee camp in Thailand (by Tayzar44 at Wikimedia).

Also this week: TechGrumps 3.34 – ChatGPT is not my wingman.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A road not taken

Nearly 20 years ago, I attended a conference on road pricing. The piece I wrote about it (PDF) for Infosecurity magazine suggests it was in late 2007, three years after transport secretary Alistair Darling proposed bringing in a national road pricing scheme. The idea represented a profound change; until a few years earlier, congestion had always led to building more roads. In 2003, however, London mayor Ken Livingstone implemented instead the congestion charge – and both traffic and pollution levels dropped.

So this conference explored the idea that road pricing would cut traffic to match road capacity, taking us off the vicious spiral of increasing road capacity and watching traffic rise to choke it. Following a 2004 feasibility study, Darling proposed a satellite-based tracking scheme. In 2007, however, prime minister Tony Blair effectively dropped the idea after 1.8 million people signed a petition opposing it.

This week’s announcement of road pricing for electric vehicles is rather differently motivated, but reawakened my memory of the 2008 discussion. Roads must be paid for somehow, and, as foreseen by the Institute for Fiscal Studies in 2012, the rise of electric vehicles inevitably eats away revenues from fuel taxes. EVs have many benefits: they can be powered without fossil fuels; their engines emit no carbon or other pollutants; and they are quieter. However, they weigh 10% to 30% more than internal combustion engine vehicles, and tire wear remains a significant pollutant.

Back in 2005 there were three main contenders for per-mile road pricing: automated number/license plate readers; tag and beacon; and time-distance-place. At the time, versions of these were already in use: the first was in place to administer London’s congestion charge; the second, effectively an update to paying at the tollbooth, was in place on turnpikes in the American northeast and in the UK at Dartford Crossing; the third was being used in Germany’s HGV system, which collects tolls for the kilometers driven on the country’s autobahns. In a 2007 paper, Cambridge researchers David N. Cottingham, Alastair Beresford, and Robert K. Harle analyzed the technologies available.

Whatever you call them, limited-access highways – autobahns, motorways, interstates, thruways – are a relatively simple problem because there are relatively few entry and exit points. Tracking, as transponders read by automated tollbooths have made possible, remains a privacy concern. Such a scheme was deemed unworkable for London, where TfL counted 227 entry points to the most congested area, and barriers would simply create new chokepoints. For this reason, and also because it estimated that 80% of cars entering the congestion zone are infrequent users, TfL opted for a system of cameras that read license plates on the fly and an automated system to send out penalty notices if someone hasn’t paid. This system also seems difficult to imagine scaling to a national level; every road, street, and back alley would have to have ANPR cameras. In the US, where Flock cameras are collecting ANPR data at scale, law enforcement and immigration authorities are already exploiting it in anti-democratic ways, as 404 Media reports.

In 2008, TDP, a much more likely approach for a nationwide system of per-mile pricing, would have required a box to be installed in every vehicle to track it, likely via GPS, and report time and location data via mobile networks for use in calculating what the owner should pay. No one was then sure whether road users would accept having tags in their vehicles or be willing to pay the considerable expense; as I seem to have written in that 2008 Infosecurity article, “‘We’re going to change your behavior and charge you for the privilege’ isn’t much of a sales pitch.” But such a system would enable proportionately charging people based on their actual road use.
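The mechanics of such a scheme are easy to sketch: charge each GPS-derived trip segment a rate that depends on where and when it was driven. The zone names and rates below are invented for illustration, not taken from any actual proposal:

```python
from dataclasses import dataclass

# Hypothetical time-distance-place (TDP) pricing sketch: each trip
# segment is charged a per-mile rate that varies by zone and time of day.

@dataclass
class Segment:
    miles: float
    zone: str    # e.g. "urban" or "motorway" (invented categories)
    peak: bool   # driven during peak hours?

# Illustrative rates only, in pence per mile.
RATES_PENCE_PER_MILE = {
    ("urban", True): 80,
    ("urban", False): 20,
    ("motorway", True): 10,
    ("motorway", False): 5,
}

def trip_charge_pence(segments):
    """Sum the charge for each segment, rounding to whole pence."""
    return sum(round(s.miles * RATES_PENCE_PER_MILE[(s.zone, s.peak)])
               for s in segments)

# A short peak-hour urban hop plus a long off-peak motorway run.
trip = [Segment(3.0, "urban", True), Segment(40.0, "motorway", False)]
print(trip_charge_pence(trip))  # 3*80 + 40*5 = 440 pence
```

The privacy problem is visible even in this toy: computing the charge requires knowing where and when every segment was driven.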

If we were updating that discussion, parts would be unchanged. Congestion charge-style ANPR cameras everywhere will be no more feasible than then. Germany’s system for motorways will similarly not be feasible for smaller roads and within cities. TDP, however…

Here in 2025, most people are already carrying smartphones with GPS just part of the package. So there could be a choice: buy a box that is irretrievably embedded in the vehicle or download a TDP app that’s somehow tied to and paired with the car, perhaps via its electronic key, so that it won’t start unless the app-car link is enabled. (Fun for anyone whose battery dies in the course of an evening out.) In addition, cars already collect all sorts of data and send it to their manufacturers. So it’s also possible to imagine a government requiring manufacturers active in the UK to transmit time and location data to a specified authority.

Obviously, the privacy implications of such a system would be staggering. Law enforcement would demand access. Businesses whose fleet patterns are commercially sensitive would hate it. And the UK’s successive governments have shown themselves to be highly partial to centralized databases that are built for one purpose and then are exploited in other ways. For this reason, Beresford’s idea in 2008 was for a privacy-protecting decentralized system using low-cost equipment that would allow cars to identify neighboring non-payers and report only those.

The good news is that the details we have so far of the government’s proposals suggest something far simpler: report the odometer reading at each year’s annual vehicle check and multiply by the per-mile charge. So unusual these days to see a government propose something so simple and cheap. Whether it’s a good idea to discourage the shift to EVs at this particular time is a different question.
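The arithmetic could hardly be simpler; a sketch, with the 3p-per-mile rate invented for illustration rather than taken from the proposals:

```python
# Odometer-based per-mile charging sketch: at each annual vehicle
# check, multiply the miles driven since the last reading by a flat
# per-mile rate. The rate here is illustrative, not official.

PENCE_PER_MILE = 3  # assumed rate for illustration

def annual_charge_pounds(last_reading, this_reading,
                         pence_per_mile=PENCE_PER_MILE):
    """Charge in pounds for the miles driven between two annual checks."""
    miles = this_reading - last_reading
    if miles < 0:
        raise ValueError("odometer reading decreased; check for tampering")
    return miles * pence_per_mile / 100

print(annual_charge_pounds(42_000, 50_000))  # 8,000 miles at 3p = 240.0
```

No box, no app, no location data: the only thing reported is a single number already recorded at the annual check – though, as the guard above hints, odometer tampering becomes the obvious fraud vector.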

Illustrations: A fork in a road (via Wikimedia).

At Plutopia, we interview Bruce Schneier about his new book, Rewiring Democracy, which examines the good and bad of what AI may bring to democracy.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: The Seven Rules of Trust

The Seven Rules of Trust: Why It Is Today’s Essential Superpower
by Jimmy Wales
Bloomsbury
ISBN: 978-1-5266-6501-0

Probably most people have either forgotten or never known that when Jimmy Wales first founded Wikipedia it was widely criticized. A lot of people didn’t believe an encyclopedia written and edited by volunteers could be any good. Many others believed free access would destroy Britannica’s business model, and reacted resentfully. Teachers warned students against using it, despite the fact that Wikipedia’s talk pages offer rare transparency into how knowledge is curated.

Now we know the Internet is big enough for both Wikipedia and Britannica.

Much of Wikipedia’s immediate value lay in its infinite expandability; it covered in detail many subjects the more austere Britannica considered unworthy. But, as Wales writes at the beginning of his recent book, The Seven Rules of Trust, Wikipedia’s biggest challenge was finding a way to become trusted. Britannica must have faced this too, once. Its solution was to build upon the reputation of the paid experts who write its entries. Wikipedia settled on passion, transparency, and increasingly rigorous referencing. As it turns out, collectively we know a lot. Today, Wikipedia is nearly 100 times the size of Britannica, has hundreds of language editions, and is so widely trusted that most of us don’t even think about how often we consult it.

In The Seven Rules of Trust, Wales tells the story of how Wikipedia got from joke to trusted resource. It began, he says, with its editors trusting each other. For this part of his story, he relies on Frances Frei‘s model of trust, a triangle balancing authenticity, empathy, and logic. Editors’ trust enabled the collaboration that could build public trust in their work, which is guided by Wikipedia’s five pillars.

Wales’s seven rules are not complicated: trust is personal, even at scale; people are born to connect and collaborate; successful collaboration requires a clear positive shared purpose; give trust to get trust; practice civility; stick to your mission and avoid getting involved in others’ disputes; embrace transparency. Some of these could be reframed as the traditional virtues, as when Wales talks about the principle of “assume good faith” when trying to negotiate the diversity of others’ opinions to reach consensus on how to present a topic. I think of this as “charity”. Either way, it’s not meant to be infinite; good faith can be abused, and Wales goes on to talk about how Wikipedia handles trolls, self-promoters, and other problems.

Yet, Wales’s account feels rosy. Many of his stories about remediating the site’s flaws revolve around one or two individuals who personally built up areas such as Wikipedia’s coverage of female scientists. I’m not sure he’s in a position to recognize how often would-be contributors are quickly deterred by an editor fiercely defending their domain or how difficult it’s become to create a new page and make sure it stays up. And, although he nods at the hope that the book will help recruit new editors, he doesn’t discuss the problem of churn Wikipedia surely faces.

Having steered the creation of something as gigantic and seemingly unlikely as Wikipedia, Wales has certainly earned the right to explain how he did it in the hope of helping others embarking on similarly large and unlikely projects. Wales argues that trust has enabled diversity of opinion, and the resulting internal disagreement has improved Wikipedia’s quality. Almost certainly true, but hard to apply to more diffuse missions; see today’s cross-party politics.

Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler finds in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes it easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The incident reminded me that this type of fraud in general seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee Inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler is correct in his assessment, until or unless advertisers rebel, the incentives are misplaced and change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transactions Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that it is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, on X Byrne is advocating that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one. The draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July, a report from the House of Representatives Judiciary Committee called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “least common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at ACLU.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Mean time between failures

Normal people should not know names like “US-East-1”, someone said, more or less, on Mastodon this week. “US-East-1” is the section of Amazon Web Services that went out last month to widely disruptive effect. What this social media poster was getting at, while contemplating this week’s Cloudflare outage, is that the series of recent Internet-related outages has made network nodes previously known only to technical specialists into household names.

For the history-minded, there was a moment like this in 1988, when a badly-written worm put the Internet on newspapers’ front pages for the first time. The Internet was then so little known that every story had to explain what it was – primarily, then, a network connecting government, academic, and corporate scientific research institutions. Now, stories are explaining network architecture. I guess that’s progress?

Much less detailed knowledge was needed to understand what happened on Tuesday, when Cloudflare went down, taking with it access to Spotify, Uber, Grindr, Ikea, Microsoft Copilot, Politico, and even, in London, its VPN service (says Wikipedia). Cloudflare offers content delivery and protection against distributed denial of service attacks, and as such it interpolates itself into all sorts of Internet interactions. I often see it demanding action to prove I’m not a robot; in that mode it’s hard to miss. That said, many sites really do need the protection it offers against large-scale attacks. Attacks at scale require defense at scale.

Ironically, one of the sites lost in the Cloudflare outage was DownDetector, a site that helps you know if the site you can’t reach is down for everyone or just you, one of several such debugging tools for figuring out who needs to fix what.

So, Cloudflare was Tuesday. Amazon’s outage was just about a month ago, on October 20. Microsoft Azure, another DNS error, was just a week later. All three of these had effects across large parts of the network.

Is this a trend or just a random coincidental cluster in a sea of possibilities?

One thing that’s dispiriting about these outages is that so often the causes are traceable to issues that have been well-understood for years. With Amazon it was a DNS error. Microsoft also had a DNS issue “following an inadvertent configuration change”. Cloudflare’s issue may have been less predictable; The Verge reports its problem was a software crash caused by a “feature file” used by its bot management system abruptly doubling in size, taking it above the size the software was designed to handle.
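That Cloudflare failure mode – an input file growing past the bounds the software was built for – is the kind of thing a defensive size check can catch. A minimal, hypothetical sketch (the names and limit are invented, not Cloudflare’s actual design):

```python
# Hypothetical sketch: refuse to load a "feature file" update that has
# grown past the limit the consuming software was designed to handle,
# and keep serving with the previous known-good version instead of
# crashing.

MAX_FEATURES = 200  # invented limit for illustration

def load_features(new_features, current_features):
    """Return (active feature set, whether the update was accepted)."""
    if len(new_features) > MAX_FEATURES:
        # Reject the oversized update; keep the old set running.
        return current_features, False
    return new_features, True

old = [f"f{i}" for i in range(120)]
doubled = old * 2  # the file abruptly doubles in size
active, accepted = load_features(doubled, old)
print(accepted, len(active))  # False 120
```

Fail-safe behavior like this trades a stale feature set for continued availability; whether that trade is acceptable depends on what the features do.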

Also at The Verge, Emma Roth thinks it’s enough of a trend that website owners need to start thinking about backup – that is, failover – plans. Correctly, she says the widespread impact of these outages shows how concentrated infrastructure service provision has become. She cites Signal CEO Meredith Whittaker: the encrypted messaging service can’t find an alternative to using one of the three or four major cloud providers.

At Krebs on Security, Brian Krebs warns that sites that managed to pivot their domains away from Cloudflare to keep themselves available during the outage need to examine their logs for signs of the attacks Cloudflare normally protects them from and put effort into fixing the common vulnerabilities they find. And then also: consider spreading the load so there isn’t a single point of failure. As I understand it, Netflix did this after the 2017 AWS outage.

For any single one of these giant providers, significant outages are not common. This was, Jon Brodkin says at Ars Technica, Cloudflare’s worst outage since 2019. That one was due to a badly written firewall rule. But increasing size also brings increasing complexity, and, as these outages have also shown, even the largest network can be disrupted at scale by a very small mistake.

Elsewhere, a software engineer friend and I have been talking about “mean time between failures”, a measure normally applied to hard drives, servers, or other components. There, it’s much more easily measured – run a load of drives, time when they fail, take an average… With the Internet, so much depends on your individual setup. But beyond that: what counts as failure? My friend suggested setting thresholds based on impact: number of people, length of time, extent of cascading failures. Being able to quantify outages might help get a better sense of whether it’s a trend or a random cluster. The bottom line, though, is clear already: increasing centralization means that when outages occur they are further-reaching and disruptive in unpredictable ways. This trend can only continue, even if the outages themselves become rarer.
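Applied to the outages in this column, the MTBF arithmetic is simple enough to sketch; the Azure date below is approximate (the column says only “a week later”), and the threshold question – which failures count – is exactly what this calculation glosses over:

```python
# Sketch of mean-time-between-failures arithmetic applied to dated
# Internet outages rather than hard drives: take the gaps between
# failures that cross some impact threshold and average them.

from datetime import date

# The three 2025 outages discussed above; Azure date approximate.
OUTAGES = [
    date(2025, 10, 20),  # AWS US-East-1
    date(2025, 10, 27),  # Microsoft Azure ("a week later")
    date(2025, 11, 18),  # Cloudflare (Tuesday)
]

def mean_days_between_failures(dates):
    """Average the gaps, in days, between consecutive failures."""
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

print(mean_days_between_failures(OUTAGES))  # (7 + 22) / 2 = 14.5
```

Three data points tell you almost nothing statistically, which is the point: deciding whether this is a trend needs both a longer record and an agreed definition of “failure”.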

Most of us have no control over the infrastructure decisions sites and services make, or even any real way to know what they are. We can counter this to some extent by diversifying our own dependencies.

In the first decade or two of the Internet, we could always revert to older ways of doing things. Increasingly, this is impossible because either those older methods have been turned off or because technology has taken us places where the old ways didn’t go. We need to focus a lot more on making the new systems robust, or face a future as hostages.

Illustrations: Traffic jam in New York’s Herald Square, 1973 (via Wikimedia).

Also this week
– At the Plutopia podcast, we interview Jennifer Granick, the Surveillance and Cybersecurity counsel at the ACLU about the expansion of government and corporate surveillance and the increasing threat to civil liberties.
– At Skeptical Inquirer, I interview juggling mathematician Colin Wright about spreading enthusiasm for mathematics.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Simplification

We were warned this was coming at this year’s Computers, Privacy, and Data Protection conference, and now it’s really here. The data protection NGO Noyb reports that a leaked internal draft (PDF) of the European Commission’s Digital Omnibus threatens to undermine the architecture the EU has been building around data protection, AI, cybersecurity, and privacy generally. At The Register, Connor Jones summarizes the changes; Noyb has detail.

The EU’s workings are, as always, somewhat inscrutable to outsiders. Noyb explains that the omnibus tool is intended to allow multiple laws to be updated simultaneously to “improve the quality of the law and streamline paperwork obligations”. In this case, Noyb argues that the European Commission is abusing this option to fast-track far more substantial and contentious changes that should be subject to impact assessments and feedback from other EU institutions, as well as legal services.

If the move succeeds – the final draft will be presented on November 19 – Noyb believes it could remove fundamental rights to privacy and data protection that Europeans have been building for more than 30 years. Noyb, European Digital Rights, and the Irish Council for Civil Liberties have sent an open letter of objection to the Commission. The basic argument: this isn’t “simplification” but deregulation. The package would still have to be accepted by the European Parliament and a majority of EU member states.

As far as I can recall, business has never much liked data protection. In the early 1990s, when the first laws were being written, I remember being told data protection was a “tax on small business”. Privacy advocates instead see data protection as a way of redressing the power imbalance between large organizations and individuals.

By 1998, when data protection law was implemented in all EU member states, US companies were publicly insisting that the US didn’t need a privacy law in order to be in compliance. Companies could use corporate policies and sectoral laws to provide a “layered approach” that would be just as protective. When I wrote about this for Scientific American in 1999, privacy advocates in the UK predicted a trade war over this, calling it a failure to understand that you can’t cut a deal with a fundamental right – like the First Amendment.

In early 2013, it looked entirely possible that the period of negotiations over data protection reform would end with rollback. GDPR was the focus of intense lobbying efforts. There were, literally, 4,000 proposed amendments, so many that I recall being shown software written to manage and understand them all.

And then…Snowden. His revelations of government spying shifted the mood noticeably, and, under his shadow, when GDPR was finally adopted in 2016 and came into force in 2018, it expanded citizens’ rights and increased penalties for non-compliance. Since then, other countries around the world have used GDPR as a model, including China and several US states.

Those few states aside, at the US federal level data protection law has never been popular, and the pile of law growing around it – the Digital Services Act, the Digital Markets Act, and the AI Act – is particularly unwelcome to the current administration, which sees it as a deliberate attack on US technology companies.

In the UK, the Data (Use and Access) Act, which passed in June, also weakened some data protection provisions. It will be implemented over the year to June 2026.

At its blog, the Open Rights Group argues that some aspects of the DUAA rest on the claim that innovation, economic growth, and public security are harmed by data protection law, a dubious premise.

Until this leak, it seemed possible that the DUAA would break Britain’s adequacy decision and remove the UK from the list of countries to which the EU allows data transfers. The rule is that to qualify a country must have legal protections equivalent to those of the EU. It would be the wrong way round if instead of the UK enhancing its law to match the EU, the EU weakened its law to match the UK.

There’s a whole secondary issue here, which is that a law is only useful if it’s enforced. Noyb actively brings legal cases to force enforcement in the EU. In the UK, privacy advocates, like ORG, have long complained that the Information Commissioner’s Office is increasingly quiescent.

Many of the EU’s changes appear to be aimed at making it easier for AI companies to exploit personal data to develop models. It’s hard to know where that will end, given that every company is sprinkling “AI” over itself in order to sound exciting and new (until the next thing comes along). If this package comes into force, you have to think data protection law will increasingly apply only to small businesses running older technology that can’t be massaged to qualify for exemption.

I blame this willingness to undermine fundamental rights at least partly on the fantasy of the “AI race”. This is nation-state-level FOMO. What race? What’s the end point? What does it mean to “win”? Why the AI race, and not the net-zero race, the renewables race, or the sustainability race? All of those would produce tangible benefits and solve known problems of long standing and existential impact.

Illustrations: A drunk parrot in a Putney garden (photo by Simon Bisson; used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The panopticon in your home

In a series of stories, Lisa O’Carroll at the Guardian finds that His Majesty’s Revenue and Customs has had its hand in the cookie jar of airline passenger records. In hot pursuit of its goal of finding £350 million in benefit fraud, it’s been scouring these records to find people who have left the country for more than a month and not returned, so are no longer eligible.

In one case, a family was turned away at the gate when one of the children had an epileptic seizure; their child benefit was stopped because they had “emigrated” though they’d never left. A similar accusation was leveled at a woman who booked a flight to Oslo even though she never checked in or flew.

These families can provide documentation proving they remained in the UK, but as one points out, the onus is on them to clean up an error they didn’t make. There are many others; many simply traveled and returned by different routes. As of November 1, HMRC had reinstated benefits for 1,979 of the affected families but sticks to its belief that the rest have been correctly identified. HMRC also says it will check its PAYE records first for evidence someone is still here and working. That would help, but it’s not the only issue.

It’s unclear whether HMRC has the right to use this data in this way. The Guardian reports that the Information Commissioner’s Office, the data protection authority, has contacted HMRC to ask questions.

For privacy advocates, the case is disturbing. It is a clear example of the way data can mislead when it’s moved to a new context. For the people involved, it’s a hostage situation: there is no choice about providing the data siphoned from airlines to the Home Office nor the financial information held by HMRC, and no control over what happens next.

The essayist and former software engineer Ellen Ullman warned 20 years ago that she had never seen an owner of multiple databases who didn’t want to link them together. So this sort of “sharing” is happening all over the place.

In the US, ProPublica reported this week that individual states have begun using a system provided by the Department of Homeland Security, which incorporates information from the Social Security Administration, to check their voter rolls for non-citizens. Here again, data collected by one agency for one purpose is being shared with another for an entirely different one.

In both cases, data is being used for a purpose that wasn’t envisioned when it was collected. An airline collecting booking data isn’t checking it for errors or omissions that might cost a passenger their benefits. Similarly, the Social Security Administration isn’t normally concerned with whether you’re a citizen for voting purposes, just whether you qualify for one or another program – as it should be. Both changes of use fail to recognize the change in the impact of errors that goes along with them, especially at national scale.

I assume that in this age of AI-for-government-efficiency the goal for the future is to automate these systems even further while pulling in more sources of data.

Privacy advocates are used to encountering pushback that takes this form: “They know everything about me anyway.” I would dispute that. “They” certainly *can* collect a lot of uncorrelated data points about you if “they” aggregate the many available sources of data. But until recently, doing that was effortful enough that it didn’t happen unless you were suspected of something. Now, we’re talking about data sharing and mining at scale as a matter of routine.

***

One of the most important lessons learned from 14 years of We, Robot conferences is that when someone shows a video clip of a robot doing something one should always ask how much it’s been speeded up.

This probably matters less in a home robot doing chores, as long as you don’t have to supervise. Leave a robot to fold laundry, and it can’t possibly matter if it takes all night.

From reports by Erik Kain at Forbes and Nilesh Christopher at the LA Times, it appears that 1X’s new Neo robot is indeed slow, even in its promotional video clips. The company says it has layers of security to prevent it from turning “murderous”, which seems an absurd bit of customer reassurance. However, 1X also calls it “lightweight”. The Neo is five foot six and weighs 66 pounds (30 kilos), which seems quite enough to hurt someone if it falls on them, even with padding. Granted, design flaws contribute, but Lime bikes weigh 50 pounds and break people’s legs. 1X’s website shows the Neo hugged by an avuncular taller man; imagine it instead with a five-foot 90-year-old woman.

Can we ask about hacking risks? And what happens if, like so many others, 1X shuts it down?

More incredibly, in buying one you must agree to allow a remote human operator to drive the robot, peering into your home along the way. This is close to the original design of the panopticon, which chilled because those under surveillance never knew whether they were being watched.

And it can be yours for the low, low price of $20,000 or $500 a month.

Illustrations: Jeremy Bentham’s original drawing of his design for the panopticon (via Wikimedia).

Also this week:
The Plutopia podcast interviews Sophie Nightingale on her research into deepfakes and the future of disinformation.
TechGrumps 3.33 podcast, The Final Step is Removing the Consumer, discusses AI web browsers, the Amazon outage, and the Python Software Foundation and DEI.

The gated web

What is an AI browser?

Or, in a more accurate representation of my mental reaction, *WTF* is an AI browser?

In wondering about this, I’m clearly behind the times. Tech sites are already doing roundups of their chosen “best” ones. At Mashable, Cecily Mauran compares “top” AI browsers because “The AI browser wars hath begun.”

Is the war that no one wants these things but they’re being forced on us anyway? Because otherwise…it’s just a bunch of heavily financed companies trying to own a market they think will be worth billions.

In Tim Berners-Lee’s original version, the web was meant to simplify sharing information. A key element was giving users control over presentation. Then came designers, who hated that idea. That battle between users’ preferences and browser makers’ interests continues to this day. What most people mean by the browser wars, though, was the late-1990s fight between Microsoft and Netscape, or the later burst of competition around smartphones. A big concern has long been market domination: a monopoly could seek to slowly close down the web by creating proprietary additions to the open standards, locking all others out.

Mauran, citing Casey Newton’s Platformer newsletter, suggests that Google specifically has exploited its browser to increase search use (and therefore ad revenues), partly by merging the address and search bars. I know I’m not typical, but for me search remains a separate activity. Most of the time I’m following a link or scanning familiar sites. Yes, when my browser history fills in a URL, I guess you could say I’m searching the browser history, but to me the better analogy is scanning an array of daily newspapers. Many people *also* use their browser to access cloud-based productivity software and email or play online games, none of which is search.

Nor are chatbots, since they don’t actually *find* information; they apply mathematics and statistics to a load of ingested text and create sentences by predicting the most likely next word. This is why Emily Bender and Alex Hanna call them “synthetic text extruding machines” in their book, The AI Con. I am in the business of trying to make sense of the impact of fast-moving technology, or at least of documenting the conflicts it creates. The only chatbot I’ve found of any value for this – or for personal needs such as a tech issue – is Perplexity, and that’s because it cites (or can be ordered to cite) sources one can check. There is every difference in the world between just wanting an answer and wanting the background from which to derive an answer that may possibly be new.
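To make that description concrete, here is a drastically simplified sketch of next-word prediction – a toy bigram counter, nothing like the scale or sophistication of a real model, with an invented training sentence:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def next_word(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Toy training text; a real model ingests billions of words.
model = train_bigrams("the cat sat on the mat and the cat ran")
print(next_word(model, "the"))  # "cat" - it follows "the" twice, "mat" once
```

Note that nothing in there looks anything up: the output is purely a statistical artifact of the ingested text, which is why the results can be fluent and wrong at the same time.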

In any event, Newton’s take is that a company that’s serious about search must build its own browser. Therefore: AI companies are building them. Hence these roundups. Mauron’s pitch: “Imagine a browser that acts as your research assistant, plans trips, sends emails, and schedules meetings. As AI models become more advanced, they’re capable of autonomously handling more complex tasks on your behalf. For tech companies, the browser is the perfect medium for realizing this vision.”

OK, I can see exactly what it does for tech companies. It gives them control over what information you can access, how you use it, and whom – and how much – you pay for the services its agent selects (plus it gets a commission).

I can also see what it does for employers. My browser agent can call your browser agent and negotiate a meeting plan. Then they attend the meeting on our behalf and send us both summaries, which they ingest and file, later forwarding them to our bosses’ agents to verify we were at work that day. In between, they can summarize emails, and decide which ones we need to see. (As Charles Arthur quipped at The Overspill, “Could they…send fewer emails?”)

Remember when part of the excitement of the Internet was the direct access it gave to people who were formerly inaccessible? Now, we appear to be building systems to ensure that every human is their own gated community.

What part of this is good for users? If you are fortunate enough not to care about the price of anything, maybe it’s great to replace your personal assistant with an agentic web browser. Most of us have struggled along doing things for ourselves and each other. At Cybernews, Mayank Sharma warns that AI browsers’ intentional preemption of efforts to browse for yourself, filtering out anything they deem “irrelevant”, threatens the open web. Newton quantifies the drop in traffic news publishers are already seeing from generative AI. Will we soon be complaining about information underload?

At Pluralistic last year, Cory Doctorow wrote about the importance of faithful agents: software that is loyal to us rather than its maker. He particularly focused on browsers, which have gone from that initial vision of user control to become software that spies on us and reports home. In Mauron’s piece, Perplexity openly hopes to use chats to build user profiles and eventually show ads.

The good news, such as it is, is that from what I’ve read in writing this, most of these companies hope to charge for these browsers – AI as a subscription service. So avoiding them is also cheaper. Double win.

Illustrations: John Tenniel’s drawing of Davy Jones, sitting on his locker (via Wikimedia; published in Punch, 1892, with the caption, “AHA! SO LONG AS THEY STICK TO THEM OLD CHARTS, NO FEAR O’ MY LOCKER BEIN’ EMPTY!!”).

It’s always DNS…

Years ago, someone in tech support at Telewest, then the cable supplier for southwest London, told me that if my broadband went out I should hope its television service went down too: the volume of complaints would get it fixed much faster. You could see this in action some years later, in 2017, when Amazon Web Services went down, taking with it Netflix. Until that moment few had realized that Netflix built its streaming service on Amazon’s cloud computing platform to take advantage of its flexibility in up- and down-sizing infrastructure. The source – an engineer’s typing error – was quickly traced and fixed, and later I was told the incident led Netflix to diversify its suppliers. You would think!

Even so, Netflix was one of the companies affected on Monday, when a DNS error took out a chunk of AWS, and people from gamers on Roblox to governments with mission-critical dependencies were affected. On the list of the affected are both the expected (Alexa and Ring) and the unexpected (Apple TV, Snapchat, Hulu, Google, Fortnite, Lyft, T-Mobile, Verizon, Venmo, Zoom, and the New York Times). To that add the UK government. At the Guardian, Simon Goodley says the UK government has awarded AWS £1.7 billion in contracts across 35 public sector authorities, despite warnings from the Treasury, the Financial Conduct Authority, and the Prudential Regulation Authority. Among the AWS-dependent: the Home Office, the Department for Work and Pensions, HM Revenue and Customs, and the Cabinet Office.

First, to explain the mistake – so common that experts said “It’s always DNS” and so old that early Internet pioneers said “We shouldn’t be having DNS errors any more”. The Domain Name System, conceived in 1983 by Paul Mockapetris, is a core piece of how the Internet routes traffic. When you type or click on a domain name such as “pelicancrossing.net”, behind the scenes a computer translates that name into a series of dotted numbers that identify the request’s destination. An error in those numbers, no matter how small, means the message – data, search request, email, whatever – can’t reach its destination, just as you can’t reach the recipient you want if you get a telephone number wrong. The upshot of all that is that DNS errors snarl traffic. In the AWS case, the error affected just one of its 30 regions, which is why Monday’s outages were patchy.
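As a rough illustration of that translation step – the standard lookup any networked program performs, not AWS’s internal machinery – Python’s socket library exposes it directly:

```python
import socket

def resolve(hostname):
    """Translate a domain name into an IPv4 address, as DNS does.
    Returns None when resolution fails - the 'message can't reach
    its destination' case described above."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# "localhost" is defined locally, so this works even offline.
print(resolve("localhost"))             # typically 127.0.0.1
print(resolve("no-such-host.invalid"))  # None: the .invalid TLD never resolves
```

When that second case happens for a name that *should* resolve – as it did on Monday – every request to the affected service fails at this first step, before any data is exchanged.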

As Dan Milmo and Graeme Wearden write at the Guardian, the outage has focused many minds on the need to diversify cloud computing. Amazon (30%), Microsoft Azure (20%), and Google (13%) together control 63% of the market worldwide. There have been many such warnings.

At The Register, Carly Page reports on the individual level: smart homes turned dumb. Eight Sleep beds were stuck in an upright position and lost their temperature controls. App-controlled litter boxes stopped communicating. “Smart” light bulbs stayed dark. The Internet of Other People’s Things at its finest.

Also at The Register, Corey Quinn suggests the DNS error was ultimately attributable to an ongoing exodus of senior AWS engineers who took with them essential institutional memory. Once you’ve reached a certain level of scale, Quinn writes, every problem is complex and being able to remember that a similar issue on a previous occasion was traced improbably to a different system in a corner somewhere can be crucial. As departures continue, Quinn believes failures like these will become more common.

If that global picture is dispiriting, consider also dependence within individual countries: if your country depends on a single company’s infrastructure to power mission-critical systems, diversity in the rest of the world won’t help you when that single company goes down. In the UK, Sam Trendall reports at Public Technology, the government activated incident-response mechanisms. Notable among the failures as prime minister Keir Starmer pushes for a mandatory digital ID: the government’s new One Login, as well as some UK banks. This outage provides evidence for the digital sovereignty many have been advocating.

I admit to mixed feelings. I agree with the many who believe the public sector should embrace digital sovereignty…but I also know that the UK government has a terrible record of failed IT projects, no matter who builds them. In 2010, fixing that was part of the motivation for setting up the Government Digital Service, as first GDS leader Mike Bracken writes at Public Digital. Yet the failures keep coming; see also the Post Office Horizon scandal. Bracken believes the solution is to invest in public sector capacity and digital expertise in order to end this litany of expensive failures.

At TechRadar, Benedict Collins rounds up further expert commentary, largely in agreement about the lessons we should learn. But will we? We should have learned in 2017.

Still, it would be a mistake to focus solely on Amazon. It is just one of many centralized points of failure. The Internet Archive is dangerously important as a unique resource for archived web pages. And the UK is not the only government operating at high risk. Consider South Korea, where a few weeks ago a data center fire may have consumed 85TB of government data – with no backups. It seems we never really learn.

Illustrations: Traffic jam in New York’s Herald Square, 1973 (via Wikimedia).

The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the archaic bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer himself has been in India this week, taking the opportunity to study its biometric ID system, Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. Trying to get it through, Blair moved on to preventing illegal working and curbing identity theft. For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (referring to the postmaster Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01e04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.
