Government identification as a service

This week, the clock started ticking on the UK’s Online Safety Act. Ofcom, the regulator charged with enforcing it, published its codes of practice and guidance, which come into force on March 17, 2025. At that point, websites that fall into scope – in Ofcom’s 2023 estimate 150,000 of them – must comply with requirements to conduct risk assessments, preemptively block child sexual abuse material, register a responsible person (who faces legal and financial liability), and much more.

Almost immediately, the first casualty made itself known: Dee Kitchen announced the closure of her site, which supports hundreds of interest-based forums. Under Ofcom’s risk assessment guidance (PDF), the personal liability would be overwhelming even if the forums produced enough in donations to cover the costs of compliance.

Russ Garrett has a summary for small sites. UK-linked blogs – even those with barely any readers – could certainly fit the definition per Ofcom’s checker tool, if users can comment on each other’s posts. Common sense says that’s ridiculous in many cases…but as Kitchen says, all it takes to ruin a blogger’s life is a malicious complainant wielding the OSA as their weapon.

Kitchen will certainly not be alone in concluding the requirements are prohibitively risky for web forums and bulletin boards that are run by volunteers and have minimal funding. Yet they are the Internet’s healthy social ecology, without the algorithms and business models that do most to create the harms the Act is meant to address. Promising Trouble and Power to Change are collaborating on a community of practice, and have asked Ofcom for a briefing on compliance for volunteers and small sites.

Garrett’s summary also points out that Ofcom’s rules leave it wide open for sites to censor *more* than is required, and many will do exactly that to minimize their risk. A side effect, as Garrett writes, will be to further centralize the Net, as moving communities to larger providers such as Discord will shift the liability onto *them*. This is what happens when rules controlling speech are written through the single lens of preventing harm rather than starting from a base of human rights.

More guidance to come from Ofcom next month. We haven’t even started on implementing age verification yet.

***

On Monday, I learned a new term I wish I hadn’t: “government identity as a service”. GIAAS?

The speaker was human rights campaigner Edward Hasbrouck, in a talk on identification on Dave Farber’s and Dan Gillmor’s weekly CCRC/IP-Asia Zoom call.

Most people trace the accelerating rise of demands for identification in countries like the US and UK to 9/11. Based on that, there are now people old enough to drink in a US state who are not aware it was ever possible to just walk up and fly, get a hotel room, or enter an office without showing ID. As Hasbrouck writes in a US election day posting, the rise in government demands for ID has been powered by the simultaneous rise of corporate tracking for commercial purposes. He calls it a “malign convergence of interest”.

It has long been obvious that anything companies collect can be subpoenaed by governments. Hasbrouck’s point, however, is that identification enables control as well as surveillance; it brings watchlists, blocklists, and automated bars to freedom of action – it makes us decision subjects, as Gavin Freeguard said at the recent Foundation for Information Policy Research event.

Hasbrouck pinpoints three components that each present a vulnerability to control: identification, logging, and decision making. As an example, consider the UK’s in-progress eVisa system, in which the government confirms an individual’s visa status online in real time with no option for physical documentation. This gives the government enormous power to stop individuals from doing vital but mundane things like renting a home, boarding an aircraft, or getting a job. Its heart is identification – and a law that delegates border enforcement to myriad civil intermediaries and normalizes these checks.

Many in the UK were outraged by proposals to give the Department for Work and Pensions the power to examine people’s bank accounts. In the US, Hasbrouck points to a recent report from the House Judiciary Committee on the Weaponization of the Federal Government that documents the Treasury Department’s Financial Crimes Enforcement Network’s collaboration with the FBI to push banks to submit reports of suspicious activity while it trawled for possible suspects after the January 6 insurrection. Yes, the perpetrators should be caught and punished; but any weapon turned against people we don’t like can also be turned against us. Did anyone vote to let the FBI conduct financial surveillance by the million?

Now imagine that companies outsource ID checks to the government and offload the risk of running their own. That is how the no-fly list works. That’s how airlines operate *now*. GIAAS.

Then add the passive identification that systems like facial recognition are spreading. You can no longer reliably know whether you have been identified and logged, who gets that information, or what hidden decision they may make based on it. Few of us are sure of our rights in any situation, and few of us even ask why. In his slides (PDF), Hasbrouck offers a list of ways to fight back. He has hope.

Illustrations: Edward Hasbrouck at CPDP in 2017.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Loose ends

Privacy technologies typically fail for one of two reasons: 1) they’re too complicated and/or expensive to find widespread adoption among users; 2) sites and services ignore, undermine, or bypass them in order to preserve their business model. The first category includes numerous encryption-related attempts to secure communications, which failed repeatedly in the marketplace, usually because the resulting products were too technically difficult for most users. In the end, encrypted messaging didn’t really take off until WhatsApp built it into its service.

This week saw a category two failure: Mozilla announced it is removing the Do Not Track option from Firefox’s privacy settings. DNT is simple enough to implement if you can stand to check and change settings, but it falls on the wrong side of modern business models and, other than in California, the US has no supporting legislation to make it enforceable. Granted, Firefox is a minority browser now, but the moment feels significant for this 13-year-old technology.

As Kevin Purdy explains at Ars Technica, DNT began as an FTC proposal, based on work by Christopher Soghoian and Sid Stamm, that aimed to create a mechanism for the web similar to the “Do Not Call” list for telephone networks.

The world in which DNT seemed a hopeful possibility seems almost quaint now: then, one could still imagine that websites might voluntarily respect the signal web browsers sent indicating users’ preferences. Do Not Call, by contrast, was established by US federal legislation. Despite various efforts, the US failed to pass legislation underpinning DNT, and it never became a web standard. The closest it has come to the latter is Section 2.12 of the W3C’s Ethical Web Principles, which says, “People must be able to change web pages according to their needs.” Can I say I *need* to not be tracked?
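For the record, the mechanism itself was, and remains, almost trivially simple: a browser with the preference enabled adds a single “DNT: 1” header to each request, and it is entirely up to the receiving site whether to act on it. Here is a minimal sketch of that honor system, assuming a Flask server; the route and responses are hypothetical:

```python
# Minimal sketch of a site choosing to honor the Do Not Track header.
# Assumes the third-party Flask library; the route and messages are hypothetical.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers with the DNT preference enabled send "DNT: 1" with each request.
    if request.headers.get("DNT") == "1":
        # A cooperating site would skip loading its trackers here.
        return "<p>Your Do Not Track preference was honored.</p>"
    # Nothing but goodwill (or, in California, law) compels this branch to differ.
    return "<p>Welcome, and hello to our advertising partners.</p>"
```

The whole scheme rests on that single “if”: there was never any technical enforcement, which is why Mozilla’s retreat changes so little in practice.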

Even at the time it seemed doubtful that web companies would comply. But DNT also suffered from unfortunate timing: it arrived just as the twin onslaught of smartphones and social media was changing the ethos that built the open web. Since then, as Cory Doctorow wrote earlier this year, the incentives have aligned to push web browsers to become faithless user agents, and conventions mean less and less.

Ultimately, DNT only ever worked insofar as users could trust websites to honor their preference. As it’s become clear they can’t, ad blockers have proliferated, depriving sites of ad revenue they need to survive. Had DNT been successful, perhaps we’d have all been better off.

***

Also on the way out this week are Cruise’s San Francisco robotaxis. My last visit to San Francisco, about a year ago, was the first time I saw these in person. Most of the ones I saw were empty Waymos, perhaps in transit to a passenger, perhaps just pointlessly clogging the streets. Around then, a Cruise robotaxi ran over a pedestrian who’d been hit by another car and then dragged her 20 feet. California promptly suspended Cruise’s license. Technology critic Paris Marx thought the incident would likely be Cruise’s “death knell”. And so it’s proving. The announcement from GM, which acquired Cruise in 2016 for $1 billion, leaves just Waymo standing in the US self-driving taxi business, with Tesla saying it will enter the market late next year.

I always associate robotaxis with Vernor Vinge‘s 2006 novel Rainbows End. In it, Vinge imagined a future in which robotaxis arrived within minutes of being hailed and replaced both public transport and private car ownership. By 2012 or so, his fictional imagining had become real-life projection, and many were predicting that our streets would imminently be filled with self-driving cars, taxis or not. In 2017, the conversation was all about what ethics to program into them and reclaiming urban space. Now, that imagined future seems to be receding, as skeptics predicted it would.

***

American journalism has long operated under the presumption that the stories it produces should be “neutral”. Now, at the LA Times, CEO Patrick Soon-Shiong thinks he can enforce this neutrality by running an AI-based “bias meter” over the paper’s stories. If you remember, in the late stages of the US presidential election, Soon-Shiong blocked the paper from endorsing Kamala Harris. Reports say that the bias meter, due out next month, is meant to identify any bias the story’s source has and then deliver “both sides” of that story.

This is absurd. Few news stories have just two competing sides. A biased source can’t be countered by rewriting the story unless you include more sources and points of view, which means additional research. Most important, AI can’t think.

But readers can. And so what this story says is that Soon-Shiong doesn’t trust either the journalists who work for him or the paper’s readers to draw the conclusions he wants. If he knew more about journalism, he’d know that readers generally don’t adopt opinions just because someone tells them to. The far greater power, I recall reading years ago, lies in determining what readers *think about* by deciding what topics are important enough to cover. There’s bias there, too, but Soon-Shiong’s meter won’t show it.

Illustrations: Dominic Wilcox‘s concept driverless sleeper car, 2014.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Return of the Four Horsemen

The themes at this week’s Scrambling for Safety, hosted by the Foundation for Information Policy Research, are topical but not new since the original 1997 event: chat control; the Online Safety Act; and AI in government decision making.

The EU’s chat control proposal would require platforms served with a detection order to scan people’s phones for both new and previously known child sexual abuse material – that is, client-side scanning. Robin Wilton prefers to call this “preemptive monitoring” to clarify that it’s an attack.

Yet it’s not fit even for its stated purpose, as Claudia Peersman showed, based on research conducted at REPHRAIN. They set out to develop a human-centric evaluation framework for the AI tools needed at the scale chat control would require. Their main conclusion: AI tools are not ready to be deployed on end-to-end-encrypted private communications. This was also Ross Anderson‘s argument in his 2022 paper on chat control (PDF) showing why it won’t meet the stated goals. Peersman also noted an important oversight: none of the stakeholder groups consulted in developing these tools include the children they’re supposed to protect.

This led Jen Persson to ask: “What are we doing to young people?” Children may not understand encryption, she said, but they do know what privacy means to them, as numerous researchers have found. If violating children’s right to privacy by dismantling encryption means ignoring the UN Convention on the Rights of the Child, “What world are we leaving for them? How do we deal with a lack of privacy in trusted relationships?”

All this led Wilton to comment that if the technology doesn’t work, that’s hard evidence that it is neither “necessary” nor “proportionate”, as human rights law demands. Yet, Persson pointed out, legislators keep passing laws that technologists insist are unworkable. Studies in both France and Australia have found that there is no viable privacy-preserving age verification technology – but the UK’s Online Safety Act (2023) still requires it.

In both examples – and in introducing AI into government decision making – a key element is false positives, which swamp human adjudicators in any large-scale automated system. In outlining the practicalities of the Online Safety Act, Graham Smith cited the recent case of Marieha Hussein, who carried a placard at a pro-Palestinian protest that depicted former prime minister Rishi Sunak and former home secretary Suella Braverman as coconuts. After two days of evidence, the judge concluded the placard was (allowed) political satire rather than (criminal) racial abuse. What automated system can understand that the same image means different things in different contexts? What human moderator has two days? Platforms will simply remove content that would never have led to a conviction in court.
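To see why false positives swamp human adjudicators at scale, a back-of-the-envelope calculation helps; the figures below are purely illustrative assumptions, not estimates from any real system:

```python
# Back-of-the-envelope base-rate arithmetic; every figure here is an assumption.
messages_per_day = 10_000_000_000     # assume a large platform's daily message volume
prevalence = 1 / 100_000              # assume 1 in 100,000 messages is genuinely illegal
true_positive_rate = 0.99             # assume the scanner catches 99% of real cases
false_positive_rate = 0.001           # assume a very optimistic 0.1% false-alarm rate

real_cases = messages_per_day * prevalence
caught = real_cases * true_positive_rate
false_alarms = (messages_per_day - real_cases) * false_positive_rate

print(f"Real cases flagged: {caught:,.0f} per day")        # roughly 99,000
print(f"False alarms:       {false_alarms:,.0f} per day")  # roughly 10,000,000
# Even with these generous assumptions, false alarms outnumber real cases by
# about 100 to 1 -- and every one of them needs a human to look at it.
```

Tweak the assumptions however you like; as long as genuinely illegal content is rare, the arithmetic stays lopsided.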

Or, Monica Horten asked, how does a platform identify the new offense of coercive control?

Lisa Sugiura, who campaigns to end violence against women and girls, had already noted that the same apps parents install so they can monitor their children (and are reluctant to give up later) are openly advertised with slogans like “Use this to check up on your cheating wife”. (See also Cindy Southworth, 2010, on stalker apps.) The dots connect into reports Persson heard at last week’s Safer Internet Forum that young women find it hard to refuse when potential partners want parental-style monitoring rights and then find it even harder to extricate themselves from abusive situations.

Design teams don’t count the cost of this sort of collateral damage, just as their companies have little liability for the human cost of false positives, and the narrow lens of child safety also ignores these wider costs. Yet they can be staggering: the 1990s US law requiring ISPs to facilitate wiretapping, CALEA, created the vulnerability that enabled widescale Chinese spying in 2024.

Wilton called laws that essentially treat all of us as suspects “a rule to make good people behave well, instead of preventing bad people from behaving badly”. Big organized crime cases like the Silk Road, Encrochat, and Sky ECC relied on infiltration, not breaking encryption. Once upon a time, veterans know, there were four horsemen always cited by proponents of such laws: organized crime, drug dealers, terrorists, and child abusers. We hear little about the first three these days.

All of this will take new forms as the new government adopts AI in decision making with the same old hopes: increased efficiency, lowered costs. Government is not learning from the previous waves of technoutopianism, which brought us things like the Post Office Horizon scandal, said Gavin Freeguard. Under data protection law we were “data subjects”; now we are becoming “decision subjects” whose voices are not being heard.

There is some hope: Swee Leng Harris sees improvements in the reissued data bill, though she stresses that it’s important to remind people that the “cloud” is really material data centers that consume energy (and use water) at staggering rates (see also Kate Crawford’s book, Atlas of AI). It’s no help that UK ministers and civil servants move on to other jobs at pace, ensuring there is no accountability. As Sam Smith said, computers have made it possible to do things faster – but also to go wrong faster at a much larger scale.

Illustrations: Time magazine’s 1995 “Cyberporn” cover, the first children and online pornography scare, based on a fraudulent study.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Follow the business models

In a market that enabled the rational actions of economists’ fantasies, consumers would be able to communicate their preferences for “smart” or “dumb” objects by exercising purchasing power. Instead, everything from TVs and vacuum cleaners to cars is sprouting Internet connections and rampant data collection.

I would love to believe we will grow out of this phase as the risks of this approach continue to become clearer, but I doubt it because business models will increasingly insist on the post-sale money, which never existed in the analog market. Subscriptions to specialized features and embedded ads seem likely to take over everything. Essentially, software can change the business model governing any object’s manufacture into Gillette’s famous gambit: sell the razors cheap, and make the real money selling razor blades. See also in particular printer cartridges. It’s going to be everywhere, and we’re all going to hate it.

***

My consciousness of the old ways is heightened at the moment because I spent last weekend participating in a couple of folk music concerts around my old home town, Ithaca, NY. Everyone played acoustic instruments and sang old songs to celebrate 58 years of the longest-running folk music radio show in North America. Some of us hadn’t really met for nearly 50 years. We all look older, but everyone sounded great.

A couple of friends there operate a “rock shop” outside their house. There’s no website, there’s no mobile app, just a table and some stone wall with bits of rock and other findings for people to take away if they like. It began as an attempt to give away their own small collection, but it seems the clearing space aspect hasn’t worked. Instead, people keep bringing them rocks to give away – in one case, a tray of carefully laid-out arrowheads. I made off with a perfect, peach-colored conch shell. As I left, they were taking down the rock shop to make way for fantastical Halloween decorations to entertain the neighborhood kids.

Except for a brief period in the 1960s, playing folk music has never been lucrative. However, it’s harder still now: teens buy CDs to ensure they can keep their favorite music, and older people buy CDs because they still play their old collections. But you can’t even *give* a 45-year-old a CD because they have no way to play it. At the concert, Mike Agranoff highlighted musicians’ need for support in an ecosystem that now pays them just $0.014 (his number) for streaming a track.

***

With both Halloween and the US election scarily imminent, the government the UK elected in July finally got down to its legislative program this week.

Data protection reform is back in the form of the Data Use and Access Bill, Lindsay Clark reports at The Register, saying the bill is intended to improve efficiency in the NHS, the police force, and businesses. It will involve making changes to the UK’s implementation of the EU’s General Data Protection Regulation. Care is needed to avoid putting the UK’s adequacy decision at risk. At the Open Rights Group, Mariano delli Santi warns that the bill weakens citizens’ protection against automated decision making. At medConfidential, Sam Smith details the lack of safeguards for patient data.

At Computer Weekly, Bill Goodwin and Sebastian Klovig Skelton outline the main provisions and hopes: improve patient care, free up police time to spend more protecting the public, save money.

‘Twas ever thus. Every computer system is always commissioned to save money and improve efficiency – they say this one will save 140,000 hours of NHS staff time a year! Every new computer system always brings unexpected costs in time and money, and messy stages of implementation and adaptation during which everything becomes *less* efficient. There are always hidden costs – in this case, likely the difficulties of curating data and remediating historical bias. An easy prediction: these will be non-trivial.

***

Also pending is the draft United Nations Convention Against Cybercrime; the goal is to get it through the General Assembly by the end of this year.

Human Rights Watch writes that 29 civil society organizations have written to the EU and member states asking them to vote against the treaty’s adoption and consider alternative approaches that would safeguard human rights. The EFF is encouraging all states to vote no.

Internet historians will recall that there is already a convention on cybercrime, sometimes called the Budapest Convention. Drawn up in 2001 by the Council of Europe to come into force in 2004, it was signed by 70 countries and ratified by 68. The new treaty, which has been drafted by a much broader range of countries, including Russia and China, is meant to be consistent with that older agreement. However, the hope is that it will achieve the global acceptance its predecessor did not, in part because of that broader group of drafters.

However, opponents are concerned that the treaty is vague, failing to limit its application to crimes that can only be committed via a computer, and lacks safeguards. It’s understandable that law enforcement, faced with the kinds of complex attacks on computer systems we see today, wants its path to international cooperation eased. But, as EFF writes, that eased cooperation should not extend to “serious crimes” whose definition and punishment are left up to individual countries.

Illustrations: Halloween display seen near Mechanicsburg, PA.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Invisible Rulers

Invisible Rulers: The People Who Turn Lies Into Reality
by Renée DiResta
Hachette
ISBN: 978-1-54170337-7

For the last week, while violence has erupted in British cities, commentators have asked, among other things: what has social media contributed to the inflammation? Often, the focus lands on specific famous people such as Elon Musk, who told exTwitter that the UK is heading for civil war (which basically shows he knows nothing about the UK).

It’s a particularly apt moment to read Renée DiResta‘s new book, Invisible Rulers: The People Who Turn Lies Into Reality. Until June, DiResta was the technical director of the Stanford Internet Observatory, which studies misinformation and disinformation online.

In her book, DiResta, like James Ball in The Other Pandemic and Naomi Klein in Doppelganger, traces how misinformation and disinformation propagate online. Where Ball examined his subject from the inside out (having spent his teenaged years on 4chan) and Klein worked from the outside in, DiResta’s study is structural. How do crowds work? What makes propaganda successful? Who drives engagement? What turns online engagement into real world violence?

One reason these questions are difficult to answer is the lack of transparency regarding the money flowing to influencers, who may have audiences in the millions. The trust they build with their communities on one subject, like gardening or tennis statistics, extends to other topics when they stray. Someone making how-to knitting videos one day expresses concern about their community’s response to a new virus, finds engagement, and, eventually, through algorithmic boosting, greater profit in sticking to that topic instead. The result, she writes, is “bespoke realities” that are shaped by recommendation engines and emerge from competition among state actors, terrorists, ideologues, activists, and ordinary people. Then add generative AI: “We can now mass-produce unreality.”

DiResta’s work on this began in 2014, when she was checking vaccination rates in the preschools she was looking at for her year-old son in the light of rising rates of whooping cough in California. Why, she wondered, were there all these anti-vaccine groups on Facebook, and what went on in them? When she joined to find out, she discovered a nest of evangelists promoting lies to little opposition, a common pattern she calls “asymmetry of passion”. The campaign group she helped found succeeded in getting a change in the law, but she also saw that the future lay in online battlegrounds shaping public opinion. When she presented her discoveries to the Centers for Disease Control, however, they dismissed it as “just some people online”. This insouciance would, as she documents in a later chapter, come back to bite during the covid emergency, when the mechanisms already built whirred into action to discredit science and its institutions.

Asymmetry of passion makes those holding extreme opinions seem more numerous than they are. The addition of boosting algorithms and “charismatic leaders” such as Musk or Robert F. Kennedy, Jr (your mileage may vary) adds to this effect. DiResta does a good job of showing how shifts within groups – anti-vaxx groups that also fear chemtrails and embrace flat earth, flat earth groups that shift to QAnon – lead eventually from “asking questions” to “take action”. See also today’s UK.

Like most of us, DiResta is less clear on potential solutions. She gives some thought to the idea of prebunking, but more to requiring transparency: platforms around content moderation decisions, influencers around their payment for commercial and political speech, and governments around their engagement with social media platforms. She also recommends giving users better tools and introducing some friction to force a little more thought before posting.

The Observatory’s future is unclear, as several other key staff have left; Stanford told The Verge in June that the Observatory would continue under new leadership. It is just one of several election integrity monitors whose future is cloudy; in March Facebook announced it would shut down research tool CrowdTangle on August 14. DiResta’s book is an important part of its legacy.

Outbound

As the world and all knows by now, the UK is celebrating this year’s American Independence Day by staging a general election. The preliminaries are mercifully short by US standards, in that the period between the day it was called and the day the winners will be announced is only about six weeks. I thought the announcement would bring more sense of relief than it did. Instead, these six weeks seem interminable for two reasons: first, the long, long wait for the announcement, and second, the dominant driver for votes is largely negative – voting against, rather than voting for.

Labour, which is in polling position to win by a lot, is best served by saying and doing as little as possible, lest a gaffe damage its prospects. The Conservatives seem to be just trying not to look as hopeless as they feel. The only party with much exuberance is the far-right upstart Reform, which measures success in terms of whether it gets a larger share of the vote than the Conservatives and whether Nigel Farage wins a Parliamentary seat on his eighth try. And the Greens, who are at least motivated by genuine passion for their cause, and whose only MP is retiring this year. For them, sadly, success would be replacing her.

Particularly odd is the continuation of the trend visible in recent years for British right-wingers to adopt the rhetoric and campaigning style of the current crop of US Republicans. This week, they’ve been spinning the idea that Labour may win a dangerous “supermajority”. “Supermajority” has meaning in the US, where the balance of powers – presidency, House of Representatives, Senate – can all go in one party’s direction. It has no meaning in the UK, where Parliament is sovereign. All it means is that Labour could wind up with a Parliamentary majority so large that they can pass any legislation they want. But this has been the Conservatives’ exact situation for the last five years, ever since the 2019 general election gave Boris Johnson a majority of 80. We should probably be grateful they largely wasted the opportunity squabbling among themselves.

This week saw the launch, day by day, of each party manifesto in turn. At one time, this would have led to extensive analysis and comparisons. This year, what discussion there is focuses on costs: whose platform commits to the most unfunded spending, and therefore who will raise taxes the most? Yet my very strong sense is that few among the electorate are focused on taxes; we’d all rather have public services that work and an end to the cost-of-living crisis. You have to be quite wealthy before private health care offers better value than paying taxes. But here may lie the explanation for both this and the weird Republican-ness of 2024 right-wing UK rhetoric: they’re playing to the same wealthy donors.

In this context, it’s not surprising that there’s not much coverage of what little the manifestos have to say about digital rights or the Internet. The exception is Computer Weekly, which finds the Conservatives promising more of the same and Labour offering a digital infrastructure plan, which includes building data centers and easing various business regulations but does not reintroduce the just-abandoned Data Protection and Digital Information bill.

In the manifesto itself: “Labour will build on the Online Safety Act, bringing forward provisions as quickly as possible, and explore further measures to keep everyone safe online, particularly when using social media. We will also give coroners more powers to access information held by technology companies after a child’s death.” The latter is a reference to recent cases such as that of 14-year-old Molly Russell, whose parents fought for five years to gain access to her Instagram account after her death.

Elsewhere, the manifesto also says, “Too often we see families falling through the cracks of public services. Labour will improve data sharing across services, with a single unique identifier, to better support children and families.”

“A single unique identifier” brings a kind of PTSD flashback: the last Labour government, in power from 1997 to 2010, largely built the centralized database state, and was obsessed with national ID cards, which were finally killed by David Cameron’s incoming coalition government. At the time, one of the purported benefits was streamlining government interaction. So I’m suspicious: this number could easily be backed by biometrics, checked via phone apps on the spot, anywhere – and grow into…what?

In terms of digital technologies, the LibDems mostly talk about health care, mandating interoperability for NHS systems and improving both care and efficiency. That can only be assessed if the detail is known. Also of interest: the LibDems’ proposed anti-SLAPP law, increasingly needed.

The LibDems also commit to advocate for a “Digital Bill of Rights”. I’m not sure it’s worth the trouble: “digital rights” as a set of civil liberties separate from human rights is antiquated, and many aspects are already enshrined in data protection, competition, and other law. In 2019, under the influence of then-deputy leader Tom Watson, this was a Labour policy. The LibDems are unlikely to have any power; but they lead in my area.

I wish the manifestos mattered and that we could have a sensible public debate about what technology policy should look like and what the priorities should be. But in a climate where everyone votes to get one lot out, the real battle begins on July 5, when we find out what kind of bargain we’ve made.

Illustrations: Polling station in Canonbury, London, in 2019 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Last year’s news

It was tempting to skip wrapping up 2023, because at first glance large language models seemed so thoroughly dominant (and boring to revisit), but bringing the net.wars archive list up to date showed a different story. To be fair, this is partly personal bias: from the beginning LLMs seemed fated to collapse under the weight of their own poisoning; AI Magazine predicted such an outcome as early as June.

LLMs did, however, seem to accelerate public consciousness of three long-running causes of concern: privacy and big data; corporate cooption of public and private resources; and antitrust enforcement. That acceleration may be LLMs’ more important long-term effect. In the short term, the justifiably bigger concern is their propensity to spread disinformation and misinformation in the coming year’s many significant elections.

Enforcement of data protection laws has been slowly ramping up in any case, and the fines just keep getting bigger, culminating in May’s fine against Meta for €1.2 billion. Given that fines, no matter how large, seem insignificant compared to the big technology companies’ revenues, the more important trend is issuing constraints on how they do business. That May fine came with an order to stop sending EU citizens’ data to the US. Meta responded in October by announcing a subscription tier for European Facebook users: €160 a year will buy freedom from ads. Freedom from Facebook remains free.

But Facebook is almost 20 years old; it had years in which to grow without facing serious regulation. By contrast, ChatGPT, which OpenAI launched just over a year ago, has already faced investigation by the US Federal Trade Commission and been banned temporarily by the Italian data protection authority (it was reinstated a month later with conditions). It’s also facing more than a dozen lawsuits claiming copyright infringement; the most recent of these was filed just this week by the New York Times. It has settled one of these suits by forming a partnership with Axel Springer.

It all suggests a lessening tolerance for “ask forgiveness, not permission”. As another example, Clearview AI has spent most of the four years since Kashmir Hill alerted the world to its existence facing regulatory bans and fines, and public disquiet over the rampant spread of live facial recognition continues to grow. Add in the continuing degradation of exTwitter, the increasing number of friends who say they’re dropping out of social media generally, and the revival of US antitrust actions with the FTC’s suit against Amazon, and it feels like change is gathering.

It would be a logical time, for an odd reason: each of the last few decades as seen through published books has had a distinctive focus with respect to information technology. I discovered this recently when, for various reasons, I reorganized my hundreds of books on net.wars-type subjects dating back to the 1980s. How they’re ordered matters: I need to be able to find things quickly when I want them. In 1990, a friend’s suggestion of categorizing by topic seemed logical: copyright, privacy, security, online community, robots, digital rights, policy… The categories quickly broke down and cross-pollinated. In rebuilding the library, what to replace it with?

The exercise, which led to alphabetizing by author’s name within decade of publication, revealed that each of the last few decades has been distinctive enough that it’s remarkably easy to correctly identify a book’s decade without turning to the copyright page to check. The 1980s and 1990s were about exploration and explanation. Hype led us into the 2000s, which were quieter in publishing terms, though marked by bursts of business books that spanned the dot-com boom, bust, and renewal. The 2010s brought social media, content moderation, and big data, and a new set of technologies to hype, such as 3D printing and nanotechnology (about which we hear nothing now). The 2020s, it’s too soon to tell…but safe to say disinformation, AI, and robots are dominating these early years.

The 2020s books to date are trying to understand how to rein in the worst effects of Big Tech: online abuse, cryptocurrency fraud, disinformation, the loss of control as even physical devices turn into manufacturer-controlled subscription services, and, as predicted in 2018 by Christian Wolmar, the ongoing failure of autonomous vehicles to take over the world as projected just ten years ago.

While Teslas are not autonomous, the company’s Silicon Valley ethos has always made them seem more like information technology than cars. Bad idea, as Reuters reports; its investigation found a persistent pattern of mishaps such as part failures and wheels falling off – and an equally persistent pattern of the company blaming the customer, even when the car was brand new. If we don’t want shoddy goods and data invasion with everything to be our future, fighting back is essential. In 2032, I hope looking back shows that story.

The good news going into 2024 is, as the Center for the Public Domain at Duke University, Public Domain Review, and Cory Doctorow write, the bumper crop of works entering the public domain: sound recordings (for the first time in 40 years), DH Lawrence’s Lady Chatterley’s Lover, Agatha Christie’s The Mystery of the Blue Train, Ben Hecht and Charles MacArthur’s play The Front Page, and the first Mickey Mouse cartoons. Happy new year.

Illustrations: Promotional still from the 1928 production of The Front Page, which enters the public domain on January 1, 2024 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Surveillance machines on wheels

After much wrangling and with just a few days of legislative time between the summer holidays and the party conference season, on Tuesday night the British Parliament passed the Online Safety bill, which will become law as soon as it gets royally signed (assuming they can find a pen that doesn’t leak). The government announcement brims with propagandist ecstasy, while the Open Rights Group’s statement offers the reality: Britons’ online lives will be less secure as a result. Which means everyone’s will.

Parliament – and the net.wars archive – dates the current version of this bill to 2022, and the online harms white paper on which it’s based to 2020. But it *feels* like it’s been a much longer slog; I want to say six years.

This is largely because the fight over two key elements – access to encrypted messaging and age verification – *is* that old. Age verification was enshrined in the Digital Economy Act (2017), and we reviewed the contenders to implement it in 2016. If it’s ever really implemented, age verification will make Britain the most frustrating place in the world to be online.

Fights over strong encryption have been going on for 30 years. In that time, no new mathematics has appeared to change the fact that it’s not possible to create a cryptographic hole that only “good guys” can use. Nothing will change about that; technical experts will continue to try to explain to politicians that you can have secure communications or you can have access on demand, but you can’t have both.

***

At the New York Times, Farhad Manjoo writes that while almost every other industry understands that the huge generation of aging Boomers is a business opportunity, outside of health care Silicon Valley is still resolutely focused on under-30s. This, even though the titans themselves age; boy-king Mark Zuckerberg is almost 40. Hey, it’s California; they want to turn back aging, not accept it.

Manjoo struggles to imagine the specific directions products might take, but I like his main point: where’s the fun? What is this idea that after 65 you’re just something to send a robot to check up on? Yes, age often brings impairments, but why not build for them? You would think that given the right affordances, virtual worlds and online games would have a lot to offer people whose lives are becoming more constrained.

It’s true that by the time you realize that ageism pervades our society you’re old enough that no one’s listening to you any more. But even younger people must struggle with many modern IT practices: the pale, grey type that pervades the web, the picklists, the hidden passwords you have to type twice… And captchas, which often display on my desktop too small to see clearly and are resistant to resizing upwards. Bots are better at captchas than humans anyway, so what *is* the point?

We’re basically back where we were 30 years ago, when the new discipline of human-computer interaction fought to convince developers that if the people who struggle to operate their products look stupid the problem is bad design. And all this is coming much more dangerously to cars; touch screens that can’t be operated by feel are Exhibit A.

***

But there is much that’s worse about modern cars. A few weeks ago, the Mozilla Foundation published a report reviewing the privacy of modern cars. Tl;dr: “Cars are the worst product category we have ever reviewed for privacy.”

The problems are universal across the 25 brands Mozilla researchers Jen Caltrider, Misha Rykov, and Zoë MacDonald reviewed: “Modern cars are surveillance-machines on wheels souped-up with sensors, radars, cameras, telematics, and apps that can detect everything we do inside.” Cars can collect all the data that phones and smart home devices can. But unlike phones, space is a non-issue, and unlike smart speakers, video cameras, and thermostats, cars move with you and watch where you go. Drivers, passengers, passing pedestrians…all are fodder for data collection in the new automotive industry, where heated seats and unlocking extra battery range are subscription add-ons, and the car you buy isn’t any more yours than the £6-per-hour Zipcar in the designated space around the corner.

Then there are just some really weird clauses in the companies’ privacy policies. Some collect “genetic data” (here the question that arises is not only “why?” but “how?”). Nissan says it can collect information about owners’ “sexual activity” for use in “direct marketing” or to share with marketing partners. The researchers ask, “What on earth kind of campaign are you planning, Nissan?”

Still unknown: whether the data is encrypted while held on the car; how securely it’s held; and whether the companies will resist law enforcement requests at all. We do know that car companies share and sell the masses of intimate information they collect, especially the cars’ telematics, with insurance companies.

The researchers also note that new features allow unprecedented levels of control. VW’s Car-Net, for example, allows parents – or abusers – to receive a phone alert if the car is driven outside of set hours or in or near certain locations. Ford has filed a patent on a system for punishing drivers who miss car payments.

“I got old at the right time,” a friend said in 2019. You can see his point.

Illustrations: Artist Dominic Wilcox‘s imagined driverless sleeper car of the future, as seen at the Science Museum in 2019.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Five seconds

Careful observers posted to Hacker News this week – and the Washington Post reported – that the X formerly known as Twitter (XFKAT?) appeared to be deliberately introducing a delay in loading links to sites the owner is known to dislike or views as competitors. These would be things like the New York Times and selected other news organizations, and rival social media and publishing services like Facebook, Instagram, Bluesky, and Substack.

The 4.8 seconds users clocked doesn’t sound like much until you remember, as the Post does, that a 2016 Google study found that 53% of mobile users will abandon a website that takes longer than three seconds to load. Not sure whether desktop users are more or less patient, but it’s generally agreed that delay is the enemy.

The mechanism by which XFKAT was able to do this is its built-in link shortener, t.co, through which it routes all the links users post. You can see this for yourself if you right-click on a posted link and copy the results. You can only find the original link by letting the t.co links resolve and copying the real link out of the browser address bar after the page has loaded.
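If you’d rather not load the page at all, the same unwrapping can be done programmatically by following the redirect chain. A minimal sketch, assuming the third-party requests library and a hypothetical shortened link:

```python
# Resolve a shortened link (t.co, bit.ly, and the like) to its final destination
# by following HTTP redirects. Assumes the third-party requests library;
# the example URL is hypothetical.
import requests

def resolve(short_url: str) -> str:
    # allow_redirects=True follows the chain of 301/302 responses;
    # response.url holds the address of the page that finally loaded.
    response = requests.get(short_url, allow_redirects=True, timeout=10)
    return response.url

print(resolve("https://t.co/example"))  # hypothetical shortened link
```

The same trick works for any shortener, which is also a reminder of how much the intermediary in the middle gets to see.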

Whether or not the company was deliberately delaying these connections, the fact is that it *can* – as can Meta’s platforms and many others. This in itself is a problem; essentially it’s a failure of network neutrality. This is the principle that a telecoms company should treat all traffic equally, and it is the basis of the egalitarian nature of the Internet. Regulatory insistence on network neutrality is why you can run a voice over Internet Protocol connection over broadband supplied by a telco or telco-owned ISP even though the services are competitors. Social media platforms are not subject to these rules, but the delaying links story suggests maybe they should be once they reach a certain size.

Link shorteners have faded into the landscape these days, but they were controversial for years after the first such service – TinyURL – was launched in 2002 (per Wikipedia). Critics cited several main issues: privacy, persistence, and obscurity. The latter refers to users’ inability to know where their clicks are taking them; I feel strongly about this myself. The privacy issue is that the link shorteners-in-the-middle are in a position to collect traffic data and exploit it (bad actors could also divert links from their intended destination). The ability to collect that data and chart “impact” is, of course, one reason shorteners were widely adopted by media sites of all types. The persistence issue is that intermediating links in this way creates one or more central points of failure. When the link shortener’s server goes down for any reason – failed Internet connection, technical fault, bankrupt owner company – the URL the shortener encodes becomes unreachable, even if the page itself is available as normal. You can’t go directly to the page, or even locate a cached copy at the Internet Archive, without the original URL.

Nonetheless, shortened links are still widely used, for the same reasons why they were invented. Many URLs are very long and complicated. In print publications, they are visually overwhelming, and unwieldy to copy into a web address bar; they are near-impossible to proofread in footnotes and citations. They’re even worse to read out on broadcast media. Shortened links solve all that. No longer germane is the 140-character limit Twitter had in its early years; because the URL counted toward that maximum, short was crucial. Since then, the character count has gotten bigger, and URLs aren’t included in the count any more.

If you do online research of any kind you have probably long since internalized the routine of loading the linked content and saving the actual URL rather than the shortened version. This turns out to be one of the benefits of moving to Mastodon: the link you get is the link you see.

So to network neutrality. Logically, its equivalent for social media services ought to include the principle that users can post whatever content or links they choose (law and regulation permitting), whether that’s reposted TikTok videos, a list of my IDs on other systems, or a link to a blog advocating that all social media companies be forced to become public utilities. Most have in fact operated that way until now, infected just enough with the early Internet ethos of openness. Changing that unwritten social contract is very bad news even though no one believed XFKAT’s CEO when he insisted he was a champion of free speech and called the now-his site the “town square”.

If that’s what we want social media platforms to be, someone’s going to have to force them, especially if they begin shrinking and their owners start to feel the chill wind of an existential threat. You could even – though no one is, to the best of my knowledge – make the argument that swapping in a site-created shortened URL is a violation of the spirit of data protection legislation. After all, no one posts links on a social media site with the view that their tastes in content should be collected, analyzed, and used to target ads. Librarians have long been stalwarts in resisting pressure to disclose what their patrons read and access. In the move online in general, and to corporate social media in particular, we have utterly lost sight of the principle of the right to our own thoughts.

Illustrations: The New York City public library in 2006.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The safe place

For a long time, there have been fears that technical decisions – new domain names ($), cooption of open standards or software, laws mandating data localization – would splinter the Internet. “Balkanize” was heard a lot.

A panel at the UK Internet Governance Forum a couple of weeks ago focused on this exact topic, and was mostly self-congratulatory. Which is when it occurred to me that the Internet may not *be* fragmented, but it *feels* fragmented. Almost every day I encounter some site I can’t reach: email goes into someone’s spam folder, the site or its content is off-limits because it’s been geofenced to conform with copyright or data protection laws, or the site mysteriously doesn’t load, with no explanation. The most likely explanation for the latter is censorship built into the Internet feed by the ISP or the establishment whose connection I’m using, but they don’t actually *say* that.

The ongoing attrition at Twitter is exacerbating this feeling, as the users I’ve followed for years continue to migrate elsewhere. At the moment, it takes accounts on several other services to keep track of everyone: definite fragmentation.

Here in the UK, this sense of fragmentation may be about to get a lot worse, as the long-heralded Online Safety bill – written and expanded until it’s become a “Frankenstein bill”, as Mark Scott and Annabelle Dickson report at Politico – hurtles toward passage. This week saw fruitless debates on amendments in the House of Lords, and it will presumably be back in the Commons shortly thereafter, where it could be passed into law by this fall.

A number of companies have warned that the bill, particularly if it passes with its provisions undermining end-to-end encryption intact, will drive them out of the country. I’m not sure British politicians are taking them seriously; so often such threats are idle. But in this case, I think they’re real, not least because post-Brexit Britain carries so much less global and commercial weight, a reality some politicians are in denial about. WhatsApp, Signal, and Apple have all said openly that they will not compromise the privacy of their masses of users elsewhere to suit the UK. Wikipedia has warned that including it in the requirement to age-verify its users will force it to withdraw rather than violate its principles about collecting as little information about users as possible. The irony is that the UK government itself runs on WhatsApp.

Wikipedia, as Ian McRae, the director of market intelligence for prospective online safety regulator Ofcom, showed in a presentation at UKIGF, would be just one of the estimated 150,000 sites within the scope of the bill. Ofcom is ramping up to deal with the workload, an effort the agency expects to cost £169 million between now and 2025.

In a legal opinion commissioned by the Open Rights Group, barristers at Matrix Chambers find that clause 9(2) of the bill is unlawful. This, as Thomas Macaulay explains at The Next Web, is the clause that requires platforms to proactively remove illegal or “harmful” user-generated content. In fact: prior restraint. As ORG goes on to say, there is no requirement to tell users why their content has been blocked.

Until now, the impact of most badly-formulated British legislative proposals has been sort of abstract. Data retention, for example: you know that pervasive mass surveillance is a bad thing, but most of us don’t really expect to feel the impact personally. This is different. Some of my non-UK friends will only use Signal to communicate, and I doubt a day goes by that I don’t look something up on Wikipedia. I could use a VPN for that, but if the only way to use Signal is to have a non-UK phone? I can feel those losses already.

And if people think they dislike those ubiquitous cookie banners and consent clickthroughs, wait until they have to age-verify all over the place. Worst case: this bill will be an act of self-harm that one day will be as inexplicable to future generations as Brexit.

The UK is not the only one pursuing this path. Age verification in particular is catching on. The US states of Virginia, Mississippi, Louisiana, Arkansas, Texas, Montana, and Utah have all passed legislation requiring it; Pornhub now blocks users in Mississippi and Virginia. The likelihood is that many more countries will try to copy some or all of its provisions, just as Australia’s law requiring the big social media platforms to negotiate with news publishers is spawning copies in Canada and California.

This is where the real threat of the “splinternet” lies. Think of requiring 150,000 websites to implement age verification and proactively police content. Many of those sites, as the law firm Mishcon de Reya writes, may not even be based in the UK.

This means that any site located outside the UK – and perhaps even some that are based here – will be asking, “Is it worth it?” For a lot of them, it won’t be. Which means that however much the Internet retains its integrity, the British user experience will be the Internet as a sea of holes.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.