Conundrum

It took me six hours of listening to people with differing points of view discuss AI and copyright at a workshop, organized by the Sussex Centre for Law and Technology at the Sussex Humanities Lab (SHL), to come up with a question that seemed to me significant: what is all this talk about who “wins the AI race”? The US won the “space race” in 1969, and then for 50 years nothing happened.

Fretting about the “AI race”, an argument at least one participant used to oppose restrictions on using copyrighted data for training AI models, is buying into several ideas that are convenient for Big Tech.

One: there is a verifiable endpoint everyone’s trying to reach. That isn’t anything like today’s “AI”, which is a pile of math and statistics predicting the most likely answers to prompts. Instead, they mean artificial general intelligence, which would be as much like generative AI as I am like a mushroom.

Two: it’s a worthy goal. But is it? Why don’t we talk about the renewables race, the zero carbon race, or the sustainability race? All of those could be achievable. Why just this well-lobbied fantasy scenario?

Three: we should formulate public policy to eliminate “barriers” that might stop us from winning it. *This* is where we run up against copyright, a subject only a tiny minority used to care about, but that now affects everyone. And, accordingly, everyone has had time to formulate an opinion since the Internet first challenged the historical operation of intellectual property.

The law as it stands is clear: making a copy is the exclusive right of the rightsholder. This is the basis of AI-related lawsuits. For training data to escape that law, it would have to be granted an exemption: ruled fair use (as in the Anthropic and Meta cases), covered by an exception for temporary copies, or shoehorned into existing exceptions such as parody. Even then, copyright law is administered territorially, so the US may call it fair use but the rest of the world doesn’t have to agree. This is why the esteemed legal scholar Pamela Samuelson has said copyright law poses an existential threat to generative AI.

But, as one participant pointed out, although the entertainment industry dominates these discussions, there are many other sectors with different needs. Science, for example, both uses and studies AI, and is built on massive amounts of public funding. Surely that data should be free to access?

I wanted to be at this meeting because what should happen with AI, training data, and copyright is a conundrum. You do not have to work for a technology company to believe that there is value in allowing researchers both within and outwith companies to work on machine learning and build AI tools. When people balk at the impossible scale of securing permission from every copyright holder of every text, image, or sound, they have a point. The only organizations that could afford that are the companies we’re already mad at for being too big, rich, and powerful.

At the same time, why should we allow those big, rich, powerful companies to plunder our cultural domain without compensating anyone and extract even larger fortunes while doing it? To a published author who sees years of work reflected in a chatbot’s split-second answer to a prompt, it’s lost income and readers.

So for months, as Parliament has wrangled over the Data bill, the argument narrowed to copyright. Should there be an exception for data mining? Should technology companies have to get permission from creators and rights holders? Or should use of their work be automatically allowed, unless they opt out? All answers seem equally impossible. Technology companies would have to find every copyright holder of every datum to get permission. Licensing by the billion.

If creators must opt out, does that mean one piece at a time? How will they know when they need to opt out and who they have to notify? At the meeting, that was when someone said that the US and China won’t do this. Britain will fall behind internationally. Does that matter?

And yet, we all seemed to converge on this: copyright is the wrong tool. As one person said, technologies that threaten the entertainment industry always bring demands to tighten or expand copyright. See the last 35 years, in which Internet-fueled copying spawned the Digital Millennium Copyright Act and the EU Copyright Directive, and copyright terms expanded from 28 years, renewable once, to author’s life plus 70.

No one could suggest what the right tool would be. But there are good questions. Such as: how do we grant access to information? With business models breaking, is copyright still the right way to compensate creators? One of us believed strongly in the capabilities of collection societies – but these tend to disproportionately benefit the most popular creators, who will survive anyway.

Another proposed the highly uncontroversial idea of taxing the companies. Or levies on devices such as smartphones. I am dubious on this one: we have been there before.

And again, who gets the money? Very successful artists like Paul McCartney, who has been vocal about this? Or do we have a broader conversation about how to enable people to be artists? (And then, inevitably, who gets to be called an artist.)

I did not find clarity in all this. How to resolve generative AI and copyright remains complex and confusing. But I feel better about not having an answer.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Second sight

“The colors came back,” a friend said. He was talking about the change after he had cataract surgery. My clinician, a retired surgeon, said something similar, that patients come in and exclaim: “I can’t believe how blue the sky is!”

I didn’t have that.

Cataracts develop slowly, so many don’t perceive how bad their vision has become. I knew. Because: my cataracts made it progressively harder for opticians to fully correct my myopia, an effect I first noticed in early 2018. For the next five or six years, my prescription notched up, and I’d be able to see reasonably clearly for about six months after getting new glasses. The next six months I’d curse the lack of clarity. Repeat until late 2023, when they said I was “ready” for surgery. At that point, I was about 20/40 with glasses.

I delayed for a while, and then had my right eye done in March. The left eye is due in a couple of weeks. So I’ve had four months to explore the difference.

It has been fascinating. And very different from John Berger‘s account, or James Thurber’s fuzzy few days without glasses in The Admiral on the Wheel.

I told the clinician I thought that even if I hadn’t been *seeing* colors exactly right I was interpreting them correctly. That turns out to be mostly true. The sky looks blue, or blue enough, and greens, reds, and yellows render fairly accurately.

This makes sense. The clinician says the general belief is that cataracts block blue light. “Isn’t it just a yellow cast over everything?” a friend asked. Not really.

To review: the primary colors of light are red, blue, and green. Red and blue make magenta; blue and green make cyan; green and red make yellow. Color printers print all colors using those three mixed colors, plus black: CMYK. It’s the difference between additive and subtractive color mixing – that is, starting with black (no color) and adding light, versus starting with white (all colors) and subtracting it. This seems weird at first encounter because schools teach the primary colors of mixing paints, which, as an increasing number of people are pointing out, is all wrong for the digital era.
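For the programmers in the audience, the two mixing models reduce to a few lines of arithmetic. This is just my illustrative sketch (the function names are my own, and real color management is far messier):

```python
def rgb_to_cmy(r, g, b):
    """Subtractive primaries are the complements of the additive ones."""
    return 255 - r, 255 - g, 255 - b

def mix_light(*colors):
    """Additive mixing: start from black (0, 0, 0) and add light."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_light(RED, BLUE))     # (255, 0, 255): magenta
print(mix_light(BLUE, GREEN))   # (0, 255, 255): cyan
print(mix_light(GREEN, RED))    # (255, 255, 0): yellow
print(rgb_to_cmy(255, 255, 0))  # (0, 0, 255): yellow needs only yellow ink
```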

In real life, my biggest cataract-related color shift turns bright purple flowers a dead greyish pink. A friend’s bright lavender walls grey down. Given that the remaining cataract has continued to densify, my original assessment holds up: I wasn’t losing much color information. I knew my friend’s walls were lavender without being told.

The biggest difference for me is that opticians can fully correct my eyesight again. So the operation has made the world brighter, whiter, and brought back crisp focus. At a recent conference, I could sit in the back and read the slides for the first time in probably five years. Although: blue highlights on the metal chair frames from overhead spotlights disappeared when I closed my post-op eye. Fun!

But then I watched an episode of the TV show Hacks in which Jean Smart wears a bright gold and black dress (above). When I closed my right eye, it turned…salmon. My real-life sweater of nearly the identical color *does not do this*. It is clearly an artifact of cataract plus screen.

Using an RGB color generator, I can say my right eye sees the dress as close to 255-220-0. My left eye sees it as roughly 255-220-150. Bright orange (~255-153-51) on my laptop screen also notably shifts, to a medium hot pink (~255-153-150). This suggests my cataract shifts green. Why?
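Lining up those triplets channel by channel (my own back-of-the-envelope arithmetic, nothing more):

```python
pairs = {
    "gold dress":    ((255, 220, 0),  (255, 220, 150)),
    "bright orange": ((255, 153, 51), (255, 153, 150)),
}

for name, (post_op, cataract) in pairs.items():
    # Subtract the post-op eye's reading from the cataract eye's, per channel.
    shift = tuple(c - p for p, c in zip(post_op, cataract))
    print(f"{name}: (R, G, B) shift = {shift}")

# gold dress: (R, G, B) shift = (0, 0, 150)
# bright orange: (R, G, B) shift = (0, 0, 99)
```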

Most other things look close to the same. Among the few exceptions: on an episode of Curb Your Enthusiasm, Larry David’s dark olive shirt looks grey with the pre-op eye, and Cheryl Hines’s pale yellow shirt turns almost white. Does this mean that eye is seeing more blue?

Fluorescent lighting also produces interesting artifacts: a bright lime green poster seen through the cataract seemed aqua.

There’s obviously a logical explanation for this; I just can’t quite work it out. Someone who understands the composition of these lighting conditions could doubtless easily explain what’s going on there (and I hope someone will!).

One final story. A couple of years ago, I saw a particularly stunning sunset out my loft window. Went to get the phone, and snapped a shot. I got back a pale, washed-out nothingburger. Went and got the better, more controllable, digital camera and tried again. Still washed out. Well, damn modern cameras and their autocorrection to what they think you should have seen. I knew about this, because in 2020, when Californians tried to take pictures of their wildfire-caused orange sky, they got grey. Bah.

Cut to April 2025. Same window. My left eye sees a really intense pink and orange sunset. My right eye…sees a washed-out nothingburger. *It wasn’t the camera.*

So, by next month I will have a fully sharp, crisp, bright world on both sides instead of a slightly dim fuzzball on one side. I will feel better balanced, and be better able to play tennis and bike. And I won’t go blind. But there’s a price. Because my post-op eye can’t do close-up the same way, I will grieve the loss of the superpower of being able to read the tiniest print unaided for the rest of my life. And I’ll lose the good sunsets.

Illustrations: Deborah Vance (Jean Smart), in Hacks (S03e01, “Just for Laughs”).

Addendum: With the pre-operative (left) eye, the purple and pink-ish flowers in this photo look the same color. The orangey flowers are slightly pinker, so *nearly*, but noticeably not, the same color.

Three groups of flowers: purple, purplish pink, and orange-pink (salmon).
My pre-operative eye sees all these flowers as about the same color. The orangey ones are most noticeably different, but still closer to pink than they really are.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Notes from Old Songs 2025

This is chiefly aimed at anyone who saw me at the Old Songs Folk Festival this past weekend. (If you missed it, better luck next year!)

The folk page on my website is here. There is a link on it to my page on open guitar tunings, which I intend to update with the extra tunings discussed at the workshop. For now, as Andy Cohen explained, Martin Carthy’s tuning was CGCDGA (you will need heavier strings on the top and bottom to avoid buzzing).

At the Friday night concert, Andy Cohen and I sang My Sweet Wyoming Home, by Bill Staines. It is (without Andy) on The Last Trip Home CD.

At the “You’ve Got to Be Kidding” workshop, I sang The Cowboy Fireman, by Harry McClintock; Old Zip Coon, traditional, which I learned from Michael Cooney; Cold Blow and the Rainy Night, learned from Planxty; and The Bionic Consumer, by Bill Steele.

At the ballad workshop, I sang Mary Hamilton, learned from Caroline Paton, who learned it from Hallie Wood; and Queen Amang the Heather, which I learned from a variety of Scottish singers, who learned it from Belle Stewart. Both of those are on The Last Trip Home CD.

At the “This Spoke to Me” workshop, I sang The Last Trip Home, written by Davy Steele; The Spirit of Mother Jones, written by Andy Irvine; and Griselda’s Waltz, written by Bill Steele. The Last Trip Home and Griselda’s Waltz are also on The Last Trip Home CD.

Great to see everyone and thanks for coming!

wg

Revival

There appears to be media consensus: “Bluesky is dead.”

At Commentary, James Meigs calls Bluesky “an expression of the left’s growing hypersensitivity to ideas leftists find offensive”, and says he accepts exTwitter’s “somewhat uglier vibe” in return for “knowing that right-wing views aren’t being deliberately buried”. Then he calls Bluesky “toxic” and a “hermetically sealed social-media bubble”.

At New Media and Marketing, Rich Meyer says Bluesky is in decline and engagement is dropping, and exTwitter is making a comeback.

At Slate, Alex Kirshner and Nitish Pahwa complain that Bluesky feels “empty”, say that its too-serious users are abandoning it because it isn’t fun, and compare it to a “small liberal arts college” and exTwitter to a “large state university”.

At The Spectator, Sean Thomas regrets that “Bluesky is dying” – and claims to have known it would fail from his first visit to the site, “a bad vegan cafe, full of humorless puritans”.

Many of these pieces – Mark Cuban at Fortune, for example, and Megan McArdle at the Washington Post – blame a “lack of diversity of thought”.

As Mike Masnick writes on TechDirt in its defense (Masnick is a Bluesky board member), “It seems a bit odd: when something is supposedly dying or irrelevant, journalists can’t stop writing about it.”

Have they so soon forgotten 2014, when everyone was writing that Twitter was dead?

Commentators may be missing that success for Bluesky looks different: it’s trying to build a protocol-driven ecosystem, not a site. Twitter had one, but destroyed it as its ad-based business model took over. Both Bluesky and Mastodon, which media largely ignores, aim to let users create their own experience and are building tools that give users as much control as possible. It seems to offend some commentators that users there can block people they don’t want to deal with, but that’s a strange objection, since blocking is a tool every social site offers.

All social media have ups and downs, especially when they’re new (I really wonder how many of these commentators experienced exTwitter in its early days or have looked at Truth Social’s user numbers). Settling into a new environment and rebuilding take time – it may look like the old place, but its affordances are different, and old friends are missing. Meanwhile, anecdotally, some seem to be leaving social media entirely, driven away by privacy issues, toxic behavior, distaste for platform power and its owners, or simply distracted by life. Few of us *have* to use social media.

***

In 2002, the UK’s Financial Services Authority was the first to implement an EU directive allowing private organizations to issue their own electronic money without a banking license if they could meet the capital requirements. At the time, the idea seemed kind of cute, especially since there was a plan to waive some of the requirements for smaller businesses. Everyone wanted micropayments; here was a framework of possibility.

And then nothing much happened. The Register’s report (the first link above) said that organizations such as the Post Office, credit card companies, and mobile operators were considering launching emoney offerings. If they did, the results sank without trace. Instead, we’re all using credit/debit cards to pay for stuff online, just as we were 23 years ago. People are reluctant to trust weird, new-fangled forms of money.

Then, in 2008, came cryptocurrencies – money as lottery ticket.

Last week, the Wall Street Journal reported that Amazon, Wal-Mart, and other multinationals are exploring stablecoins as a customer payment option – in other words, issuing their own cryptocurrencies, pegged to the US dollar. As Andrew Kassel explains at Investopedia, the result could be to bypass credit cards and banks, saving billions in fees.

It’s not clear how this would work, but I’m suspicious of the benefits to consumers. Would I have to buy a company’s stablecoin before doing business with it? And maintain a floating balance? At Axios, Brady Dale explores other possibilities. Ultimately, it sounds like a return to the 1970s, before multipurpose credit cards, when people had store cards from the retailers they used frequently, and paid a load of bills every month. Dale seems optimistic that this could be a win for consumers as well as retailers, but I can’t really see it.

In other words, the idea seems less cute now, less fun technological experiment, more rapacious. There’s another, more disturbing, possibility: the return of the old company town. Say you work for Amazon or Wal-Mart, and they offer you a 10% bonus for taking your pay in their stablecoin. You can’t spend it anywhere but their store, but that’s OK, right, because they stock everything you could possibly want? A modern company town doesn’t necessarily have to be geographical.

I’ve long thought that company towns, which allowed companies to effectively own employees, are the desired endgame for the titans. Elon Musk is heading that way with Starbase, Texas, now inhabited primarily by SpaceX employees, as Elizabeth Crisp reports at The Hill.

I don’t know if the employees who last month voted enthusiastically for the final incorporation of Starbase realize how abusive those old company towns were.

Illustrations: The Starbase sign adjoining Texas Highway 4, in 2023 (via Jenny Hautmann at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A thousand small safety acts

“The safest place in the world to be online.”

I think I remember that slogan from Tony Blair’s 1990s government, when it primarily related to ecommerce. It morphed into child safety – for example, in 2010, when the first Digital Economy Act was passed, or 2017, when the Online Safety Act, passed in 2023 and entering into force in March 2025, was but a green paper. Now, Ofcom is charged with making it reality.

As prior net.wars posts attest, the 2017 green paper began with the idea that social media companies could be forced to pay, via a levy, for the harm they cause. The key remaining element of that is a focus on the large, dominant companies. The green paper nodded toward designing proportionately for small businesses and startups. But the large platforms pull the attention: rich, powerful, and huge. The law that’s emerged from these years of debate takes in hundreds of thousands of divergent services.

On Mastodon, I’ve been watching lawyer Neil Brown scrutinize the OSA with a particular eye on its impact on the wide ecosystem of what we might call “the community Internet” – the thousands of web boards, blogs, chat channels, and who-knows-what-else with no business model because they’re not businesses. As Brown keeps finding in his attempts to give these folks tools they can use, they are struggling to understand and comply with the act.

First things first: everyone agrees that online harm is bad. “Of course I want people to be safe online,” Brown says. “I’m lucky, in that I’m a white, middle-aged geek. I would love everyone to have the same enriching online experience that I have. I don’t think the act is all bad.” Nonetheless, he sees many problems with both the act itself and how it’s being implemented. In contacts with organizations critiquing the act, he’s been surprised to find how many unexpectedly agree with him about the problems for small services. However, “Very few agreed on which was the worst bit.”

Brown outlines two classes of problem: the act is “too uncertain” for practical application, and the burden of compliance is “too high for insufficient benefit”.

Regarding the uncertainty, his first question is, “What is a user?” Is someone who reads net.wars a user, or just a reader? Do they become a user if they post a comment? Do they start interacting with the site when they read a comment, make a comment, or only when they reply to another user’s comment? In the fediverse, is someone who reads postings he makes via his private Mastodon instance its user? Is someone who replies from a different instance to that posting a user of his instance?

His instance has two UK users – surely insignificant. Parliament didn’t set a threshold for the “significant number of UK users” that brings a service into scope, so Ofcom says it has no answer to that question. But if you go by percentage, 100% of his user base is in Britain. Does that make Britain his “target market”? Does having a domain name in the UK namespace? What is a target market for the many community groups running infrastructure for free software projects? They just want help with planning, or translation; they’re not trying to sign up users.

Regarding the burden, the act requires service providers to perform a risk assessment for every service they run. A free software project will probably have a dozen or so – a wiki, messaging, a documentation server, and so on. Brown, admittedly not your average online participant, estimates that he himself runs 20 services from his home. Among them is a photo-sharing server, for which the law would have him write contractual terms of service for the only other user – his wife.

“It’s irritating,” he says. “No one is any safer for anything that I’ve done.”

So this is the mismatch. The law and Ofcom imagine a business with paid staff signing up users to profit from them. What Brown encounters is more like a stressed-out woman managing a small community for fun after she puts the kids to bed.

Brown thinks a lot could be done to make the act less onerous for the many sites that are clearly not the problem Parliament was trying to solve. Among them: carve out low-risk services. This isn’t just a question of size, since a tiny terrorist cell or a small ring sharing child sexual abuse material can pose acres of risk. But Brown thinks it shouldn’t be too hard to come up with criteria that rule services out of scope, such as a limited user base coupled with a service “any reasonable person” would consider low risk.

Meanwhile, he keeps an In Memoriam list of the law’s casualties to date. Some have managed to move or find new owners; others are simply gone. Not on the list are non-UK sites that now simply block UK users. Others, as Brown says, just won’t start up. The result is an impoverished web for all of us.

“If you don’t want a web dominated by large, well-lawyered technology companies,” Brown sums up, “don’t create a web that squeezes out small low-risk services.”

Illustrations: Early 1970s cartoon illustrating IT project management.

Wendy M. Grossman is an award-winning journalist. Her Web site has extensive links to her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Negative externalities

A sheriff’s office in Texas searched a giant nationwide database of license plate numbers captured by automatic cameras to look for a woman they suspected of self-managing an abortion. As Rindala Alajazi writes at EFF, that’s 83,000 cameras in 6,809 networks belonging to Flock Safety, many of them in states where abortion is legal or protected as a fundamental right until viability.

We’ve known something like this was coming ever since 2022, when the US Supreme Court overturned Roe v. Wade and returned the power to regulate abortion to the individual US states. The resulting unevenness made it predictable that the strongest opponents to legal abortion would turn their attention to interstate travel.

The Electronic Frontier Foundation has been warning for some time about Flock’s database of camera-captured license plates. Recently, Jason Koebler reported at 404 Media that US Immigration and Customs Enforcement has been using Flock’s database to find prospects for deportation. Since ICE does not itself have a contract with Flock, it’s been getting local law enforcement to perform searches on its behalf. “Local” refers only to the law enforcement personnel; they have access to camera data that’s shared nationally.

The point is that once the data has been collected it’s very hard to stop mission creep. On its website, Flock says its technology is intended to “solve and eliminate crime” and “protect your community”. That might have worked when we all agreed what was a crime.

***

A new MCTD Cambridge report makes a similar point about menstrual data, when sold at scale. Now, I’m from the generation that managed fertility with a paper calendar, but time has moved on, and fertility tracking apps allow a lot more of the self-quantification that can be helpful in many situations. As Stephanie Felsberger writes in introducing the report, menstrual data is highly revealing of all sorts of sensitive information. Privacy International has studied period-tracking apps, and found that they’ve improved but still pose serious privacy risks.

On the other hand, I’m not so sure about the MCTD report’s third recommendation – that government build a public tracker app within the NHS. The UK doesn’t have anything like the kind of divisive rhetoric around abortion that the US does, but the fact remains that legal abortion is a 1967 carve-out from an 1861 law. In the UK, procuring an abortion is criminal *except* during the first 24 weeks, or if the mother’s life is in danger, or if the fetus has a serious abnormality. And even then, sign-off is required from two doctors.

Investigations and prosecutions of women under that 1861 law have been rising, as Shanti Das reported at the Guardian in January. Pressure in the other direction from US-based anti-choice groups such as the Alliance Defending Freedom has also been rising. For years it’s seemed like this was a topic no one really wanted to reopen. Now, health care providers are calling for decriminalization, and, as Hannah Al-Othman reported this week, there are two such proposals currently in front of Parliament.

Also relevant: a month ago, Phoebe Davis reported at the Observer that in January the National Police Chiefs’ Council quietly issued guidance advising officers to search homes for drugs that can cause abortions in cases of stillbirths and to seize and examine devices to check Internet searches, messages, and health apps to “establish a woman’s knowledge and intention in relation to the pregnancy”. There was even advice on how to bypass the requirement for a court order to access women’s medical records.

In this context, it’s not clear to me that a publicly owned app is much safer or more private than a commercial one. What’s needed is open source code that can be thoroughly examined and that keeps all data on the device itself, encrypted, in a segregated storage space over which the user has control. And even then…you know, paper had a lot of benefits.
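To make “on-device, encrypted, user-controlled” concrete, here is a minimal sketch of the storage layer such an app might use. It is purely illustrative: the file locations and record format are my inventions, it relies on the open source Python cryptography package, and a real app would keep the key in the platform’s hardware-backed keystore rather than in a file beside the data.

```python
import json
from pathlib import Path
from cryptography.fernet import Fernet

STORE = Path.home() / ".cycle_tracker"   # segregated, user-owned space (my naming)
STORE.mkdir(exist_ok=True)
KEY_FILE = STORE / "key"                 # stand-in; use the OS keystore in practice

# Load or create the encryption key.
if KEY_FILE.exists():
    key = KEY_FILE.read_bytes()
else:
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)

fernet = Fernet(key)

def save_entry(entry: dict) -> None:
    """Encrypt one record and append it locally; nothing leaves the device."""
    token = fernet.encrypt(json.dumps(entry).encode())
    with open(STORE / "entries.log", "ab") as f:
        f.write(token + b"\n")

def load_entries() -> list[dict]:
    """Decrypt all locally stored records."""
    path = STORE / "entries.log"
    if not path.exists():
        return []
    return [json.loads(fernet.decrypt(line.strip()))
            for line in path.read_bytes().splitlines()]

save_entry({"date": "2025-06-01", "notes": "example"})
print(load_entries())
```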

***

This week the UK Parliament passed the Data (Use and Access) bill, which now just needs a royal signature to become law. At its site, the Open Rights Group summarizes the worst provisions, mostly a list of ways the bill weakens citizens’ rights over their data.

Brexit was sold to the public on the basis of taking back national sovereignty. But, as then-MEP Felix Reda said the morning after the vote, national sovereignty is a fantasy in a globalized world. Decisions about data privacy can’t be made imagining they are only about *us*.

As ORG notes, the bill has led European Digital Rights to write to the European Commission asking for a review of the UK’s adequacy status. This decision, granted in 2020, was due to expire in June 2025, but the Commission granted a six-month extension to allow the bill’s passage to complete. In 2019, when the UK was at peak Brexit chaos and it seemed possible that the Conservative then-government would allow the UK to leave the EU with no deal in place, net.wars noted the risk to data flows. The current Labour government, with its AI and tech policy ambitions, ought to be more aware of the catastrophe losing adequacy would present. And yet.

Illustrations: Map from the Center for Reproductive Rights showing the current state of abortion rights across the US.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast and a regular guest on the TechGrumps podcast. Follow on Mastodon or Bluesky.

Nephology

For an hour yesterday (June 5, 2025), we were treated to the spectacle of the US House Judiciary Committee, both Republicans and Democrats, listening – really listening, it seemed – to four experts defending strong encryption. The four: technical expert Susan Landau and lawyers Caroline Wilson-Palow, Richard Salgado, and Gregory Nojeim.

The occasion was a hearing on the operation of the Clarifying Lawful Overseas Use of Data Act (2018), better known as the CLOUD Act. It was framed as collecting testimony on “foreign influence on Americans’ data”. More precisely, the inciting incident was a February 2025 Washington Post article revealing that the UK’s Home Office had issued Apple with a secret demand that it provide backdoor law enforcement access to user data stored using the Advanced Data Protection encryption feature it offers for iCloud. This type of demand, issued under S253 of the Investigatory Powers Act (2016), is known as a “technical capability notice”, and disclosing its existence is a crime.

The four were clear, unambiguous, and concise, incorporating the main points made repeatedly over the last 35 years. Backdoors, they all agreed, imperil everyone’s security; there is no such thing as a hole only “good guys” can use. Landau invoked Salt Typhoon and, without ever saying “I warned you at the time”, reminded lawmakers that the holes in the telecommunications infrastructure that they mandated in 1994 became a cybersecurity nightmare in 2024. All four agreed that with so much data being generated by all of us every day, encryption is a matter of national security as well as privacy. Referencing the FBI’s frequent claim that its investigations are going dark because of encryption, Nojeim dissented: “This is the golden age of surveillance.”

The lawyers jointly warned that other countries such as Canada and Australia have similar provisions in national legislation that they could similarly invoke. They made sensible suggestions for updating the CLOUD Act to set higher standards for nations signing up to data sharing: set criteria for laws and practices that they must meet; set criteria for what orders can and cannot do; and specify additional elements countries must include. The Act could be amended to include protecting encryption, on which it is currently silent.

The lawmakers reserved particular outrage for the UK’s audacity in demanding that Apple provide that backdoor access for *all* users worldwide. In other words, *Americans*.

Within the UK, a lot has happened since that February article. Privacy advocates and other civil liberties campaigners spoke up in defense of encryption. Apple soon withdrew ADP in the UK. In early March, the UK government and security services removed advice to use Apple encryption from their websites – a responsible move, but indicative of the risks Apple was being told to impose on its users. A closed-to-the-public hearing was scheduled for March 14. Shortly before it, Privacy International, Liberty, and two individual claimants filed a complaint with the Investigatory Powers Tribunal asking for the hearing to be held in public and disputing the lawfulness, necessity, and secrecy of TCNs in general. Separately, Apple appealed against the TCN.

On April 7, the IPT released a public judgment summarizing the more detailed ruling it provided only to the UK government and Apple. Short version: it rejected the government’s claim that disclosing the basic details of the case will harm the public interest. Both this case and Apple’s appeal continue.

As far as the US is concerned, however, that’s all background noise. The UK’s claim to be able to compel the company to provide backdoor access worldwide seems to have taken Congress by surprise, but a day like this has been on its way ever since 2014, when the UK included extraterritorial power in the Data Retention and Investigatory Powers Act (2014). At the time, no one could imagine how they would enforce this novel claim, but it was clearly something other governments were going to want, too.

This Judiciary Committee hearing was therefore a festival of ironies. For one thing, the US’s own current administration is hatching plans to merge government departments’ carefully separated databases into one giant profiling machine for US citizens. Second, the US has always regarded foreigners as less deserving of human rights than its own citizens; the notion that another country similarly privileges itself went down hard.

More germane, subsidiaries of US companies remain subject to the PATRIOT Act, under which, as the late Caspar Bowden pointed out long ago, the US claims the right to compel them to hand over foreign users’ data. The CLOUD Act itself was passed in response to Microsoft’s refusal to violate Irish data protection law by fulfilling a New York district judge’s warrant for data relating to an Irish user. US intelligence access to European users’ data under the PATRIOT Act has been the big sticking point that activist lawyer Max Schrems has used to scuttle a succession of US-EU data sharing arrangements under GDPR. Another may follow soon: in January, the incoming Trump administration fired most of the Privacy and Civil Liberties Oversight board tasked to protect Europeans’ rights under the latest such deal.

But, no mind. Feast, for a moment, on the thought of US lawmakers hearing, and possibly willing to believe, that encryption is a necessity that needs protection.

Illustrations: Gregory Nojeim, Richard Salgado, Caroline Wilson-Palow, and Susan Landau facing the Judiciary Committee on June 5, 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Sovereign

On May 19, a group of technologists, researchers, economists, and scientists published an open letter calling on British prime minister Keir Starmer to prioritize the development of “sovereign advanced AI capabilities through British startups and industry”. I am one of the many signatories. Britain’s best shot at the kind of private AI research lab under discussion was DeepMind, sold to Google in 2014; the country has nothing now that’s domestically owned.

Those with long memories know that LEO was the first computer used for a business application – running Lyons tea rooms. In the 1980s, Britain led personal computing.

But the bigger point is less about AI specifically and more about information technology generally. At a panel at Computers, Privacy, and Data Protection in 2022, the former MEP Jan Philipp Albrecht, who was the special rapporteur for the General Data Protection Regulation, outlined his work building up cloud providers and local hardware as the Minister for Energy, Agriculture, the Environment, Nature and Digitalization of Schleswig-Holstein. As he explained, the public sector loses a great deal when it takes the seemingly easier path of buying proprietary software and services. Among the lost opportunities: building capacity and sovereignty. While his organization used services from all over the world, it set its own standards, one of which was that everything must be open source.

As the events of recent years are making clear, proprietary software fails if you can’t trust the country it’s made in, since you can’t wholly audit what it does. Even more important, once a company is bedded in, it can be very hard to excise it if you want to change supplier. That “customer lock-in” is, of course, a long-running business strategy, and it doesn’t only apply to IT. If we’re going to spend large sums of money on IT, there’s some logic to investing it in building up local capacity; one of the original goals in setting up the Government Digital Service was shifting to smaller, local suppliers instead of automatically turning to the largest and most expensive international ones.

The letter calls relying on US technology companies and services a “national security risk”. Elsewhere, I have argued that we must find ways to build trusted systems out of untrusted components, but the problem here is more complex because of the sensitivity of government data. Both the US and China have the right to command access to data stored by their companies, and the US in particular does not grant foreigners even the few privacy rights it grants its citizens.

It’s also long past time for countries to stop thinking in terms of “winning the AI race”. AI is an umbrella term that has no single meaning. Instead, it would be better to think in terms of there being many applications of AI, and trying to build things that matter.

***

As predicted here two years ago, AI models are starting to collapse, Steven J. Vaughan-Nichols writes at The Register.

The basic idea is that as the web becomes polluted with synthetically-generated data, the quality of the data used to train the large language models degrades, so the models themselves become less useful. Even without that, the AI-with-everything approach many search engines are taking is poisoning their usefulness. Model collapse just makes it worse.
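The dynamic is easy to caricature. In this toy sketch (mine, not from the Register piece), “the web” is a probability distribution, “training” is fitting it, and each generation of model over-produces its most likely outputs:

```python
import random
import statistics

mu, sigma = 0.0, 1.0  # generation 0: a model fitted to "human" data

for gen in range(1, 7):
    # Generate synthetic training data from the current model...
    synthetic = [random.gauss(mu, sigma) for _ in range(5000)]
    # ...but, like a generative model, favor likely outputs: drop the tails.
    kept = [x for x in synthetic if abs(x - mu) < 1.5 * sigma]
    # Fit the next generation's model to the synthetic data.
    mu, sigma = statistics.fmean(kept), statistics.stdev(kept)
    print(f"generation {gen}: sigma = {sigma:.3f}")
```

Five or six rounds of that and the spread of the data has visibly drained away, a crude stand-in for the loss of diversity that model collapse describes.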

We would point out to everyone frantically adding “AI” to their services that the historical precedents are not on their side. In the late 1990s, every site felt it had to be a portal, so they all had search, and weather, and news headlines, and all sorts of crap that made it hard to find the search results. The result? Google disrupted all that with a clean, white page with no clutter (those were the days). Users all switched. Yahoo is the most obvious survivor from that period, and I think it’s because it does have some things – notably financial data – that it does extremely well.

It would be more satisfying to be smug about this, but the big issue is that companies keep spraying toxic pollution over the services we all need to be able to use. How bad does it have to get before they stop?

***

At Privacy Law Scholars this week, in a discussion of modern corporate oligarchs and their fantasies of global domination, an attendee asked if any of us had read the terms of service for Starlink. She wanted to draw our attention to the following passage, under “Governing Law”:

For Services provided to, on, or in orbit around the planet Earth or the Moon, this Agreement and any disputes between us arising out of or related to this Agreement, including disputes regarding arbitrability (“Disputes”) will be governed by and construed in accordance with the laws of the State of Texas in the United States. For Services provided on Mars, or in transit to Mars via Starship or other spacecraft, the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities. Accordingly, Disputes will be settled through self-governing principles, established in good faith, at the time of Martian settlement.

Reminder: Starlink has contracts worth billions of dollars to provide Internet infrastructure in more than 100 countries.

So who’s signing this?

Illustrations: The Martian (Ray Walston) in the 1963-1966 TV series My Favorite Martian.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to compete to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “The Brussels Effect”, derived from The California Effect, which sees compliance with regulation voluntarily shift towards the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, that means two things: that companies will comply with the most stringent rules to ensure they are universally compliant, and that they want and need to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well-documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

Much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016, Bayer bought Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power based on existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: The Promise and Peril of CRISPR

The Promise and Peril of CRISPR
Edited by Neal Baer
Johns Hopkins University Press
ISBN: 978-1-4214493-02

It’s an interesting question: why are there so many articles in which eminent scientists fear an artificial general superintelligence (which is pure fantasy for the foreseeable future)…and so few that are alarmed by human gene editing tools, which are already arriving? The pre-birth genetic selection in the 1997 movie Gattaca is closer to reality than an AGI that decides to kill us all and turn us into paperclips.

In The Promise and Peril of CRISPR, Neal Baer collects a series of essays considering the ethical dilemmas posed by a technology that could be used to eliminate whole classes of disease and disabilities. The promise is important: gene editing offers the possibility of curing chronic, painful, debilitating congenital conditions. But for everything, there may be a price. A recent episode of HBO Max’s TV series The Pitt showed the pain that accompanies sickle cell anemia. But that same condition confers protection against malaria, which was an evolutionary advantage in some parts of the world. There may be many more such tradeoffs whose benefits are unknown to us.

Baer started with a medical degree, but quickly found work as a TV writer. He is best known for his work on the first seven years of ER and seasons two through 12 of Law and Order: Special Victims Unit. In his parallel career as an academic pediatrician, he writes extensively and works with many health care-related organizations.

Most books on new technologies like CRISPR (for clustered regularly interspaced short palindromic repeats) are either all hype or all panic. In pulling together the collection of essays that make up The Promise and Peril of CRISPR, Baer has brought in voices rarely heard in discussions of new technologies. Ethan Weiss tells the story of his daughter, who was born with albinism, which has more difficult consequences than simple lack of pigmentation. Had the technology been available, he writes, they might have opted to correct the faulty gene that causes it; lacking that, they discovered new richness in life that they would never wish to give up. In another essay, Florence Ashley explores the potential impact from several directions on trans people, who might benefit from being able to alter their bodies through genetics rather than frequent medical interventions such as hormones. And in a third, Krystal Tsosie considers the impact on indigenous peoples, warning against allowing corporate ownership of DNA.

Other essays consider the technology’s potential for conditions such as cystic fibrosis (Sandra Sufian) and deafness (Carol Padden and Jacqueline Humphries), and international human rights. One use Baer omits, despite its status as intermittent media fodder since techniques for gene editing were first developed, is performance enhancement in sports. There is so far no imaginable way to test athletes for it. And anyone who’s watched junior sports knows there are definitely parents crazy enough to adopt any technology that will improve their kids’ abilities. Baer was smart to skip this; it will be a long time before CRISPR is cheap enough and advanced enough to be accessible for that sort of thing.

In one essay, molecular biologist Ellen D. Jorgensen discusses a class she co-designed to facilitate teaching CRISPR to anyone who cared to learn. At the time, the media were focused on its dangers, and she believed that teaching it would help alleviate public fear. Most uses, she writes, are benign. Based on previous experience with scientific advances, its effects will depend on who wields it and for what purpose.