Magic math balls

So many ironies, so little time. According to the Financial Times (and syndicated at Ars Technica), the US government, which itself has traditionally demanded law enforcement access to encrypted messages and data, is pushing the UK to drop its demand that Apple weaken its encryption. Normally, you want to say, Look here, countries are entitled to have their own laws whether the US likes it or not. But this is not a law we like!

This all began in February, when the Washington Post reported that the UK’s Home Office had issued Apple with a Technical Capability Notice. Issued under the Investigatory Powers Act (2016) and supposed to be kept secret, the TCN demanded that Apple undermine the end-to-end encryption used for iCloud’s Advanced Data Protection feature. Much protest ensued, followed by two legal cases in front of the Investigatory Powers Tribunal, one brought by Apple, the other by Privacy International and Liberty. WhatsApp has joined Apple’s legal challenge.

Meanwhile, Apple withdrew ADP in the UK. Some people argued this didn’t really matter, as few used it, which I’d call a failure of user experience design rather than an indication that people didn’t care about it. More of us saw it as setting a dangerous precedent for both encryption and the use of secret notices undermining cybersecurity.

The secrecy of TCNs is clearly wrong and creates a moral hazard: governments may prefer to keep vulnerabilities secret so they can exploit them for surveillance. Hopefully, the Tribunal will eventually agree and force a change in the law. The Foundation for Information Policy Research (obDisclosure: I’m a FIPR board member) has published a statement explaining the issues.

According to the Financial Times, the US government is applying a sufficiently potent threat of tariffs to lead the UK government to mull how to back down. Even without that particular threat, it’s not clear how much the UK can resist. As Angus Hanton documented last year in the book Vassal State, the US has many well-established ways of exerting its influence here. And the vectors are growing; Keir Starmer’s Labour government seems intent on embedding US technology and companies into the heart of government infrastructure despite the obvious and increasing risks of doing so. When I read Hanton’s book earlier this year, I thought remaining in the EU might have provided some protection, but Caroline Donnelly warns at Computer Weekly that they, too, are becoming dangerously dependent on US technology, specifically Microsoft.

It’s tempting to blame everything on the present administration, but the reality is that the US has long used trade policy and treaties to push other countries into adopting laws regardless of their citizens’ preferences.

***

As if things couldn’t get any more surreal, this week the Trump administration *also* issued an executive order banning “woke AI” in the federal government. AI models are henceforth supposed to be “politically neutral”. So, as Kevin Roose writes at the New York Times, the culture wars are coming for AI.

The US president is accusing chatbots of “Marxist lunacy”, where the rest of the world calls them inaccurate, biased toward repeating and expanding historical prejudices, and inconsistent. We hear plenty about chatbots adopting Nazi tropes; I haven’t heard of one promoting workers’ and migrants’ rights.

If we know one thing about AI models it’s that they’re full of crap all the way down. The big problem is that people are deploying them anyway. At the Canary, Steve Topple reports that the UK’s Department for Work and Pensions admits in a newly-published report that its algorithm for assessing whether benefit claimants might commit fraud is ageist and racist. A helpful executive order would set must-meet standards for *accuracy*. But we do not live in those times.

The Guardian reports that two more Trump EOs expedite building new data centers, promote exports of American AI models, expand the use of AI in the federal government, and intend to solidify US dominance in the field. Oh, and Trump would really like it if people would stop calling it “artificial” and find a new name. Seven years ago, “aspirational intelligence” seemed like a good idea. But that was back when we heard a lot about incorporating ethics. So… “magic math ball”?

These days, development seems to proceed ethics-free. DWP’s report, for example, advocates retraining its flawed algorithm but says continuing to operate it is “reasonable and proportionate”. In 2021, for European Digital Rights (EDRi), Agathe Balayn and Seda Gürses found, “Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.” In other words, no matter what you think is “neutral”, training data, model, and algorithms are only as “neutral” as their wider context allows them to be.

Meanwhile, nothing to curb the escalating waste. At 404 Media, Emanuel Maiberg finds that Spotify is publishing AI-generated songs from dead artists without anyone’s permission. On Monday, MSNBC’s Rachel Maddow told viewers that there’s so much “AI slop” about her that they’ve posted Is That Really Rachel? to catalog and debunk them.

As Ed Zitron writes, the opportunity costs are enormous.

In the UK, the US, and many other places, data centers are threatening the water supply.

But sure, let’s make more of that.

Illustrations: Magic 8 ball toy (via frankieleon at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her website has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Cautionary tales

I’ve been online for nearly 34 years, and I’m thinking of becoming a child. Or at least, a child to big user-to-user social media services, which next week will start asking for proof of adulthood. On July 25, the new age verification requirements under the Online Safety Act come into effect in the UK. The regulator, Ofcom, has published a guide.

Plenty of companies aim to join this new market. Some are familiar: credit scorers Experian and TransUnion. Others are new: Yoti, which we saw demonstrated back in 2016, and the Sam Altman- and Andreessen Horowitz-backed six-year-old startup World, which recently did a promotional tour for the UK launch of its Orb identification system. Summary: many happy privacy words, but still dubious.

Reddit picked Persona; Dearbail Jordan at the BBC says Redditors will need to upload either a selfie for age estimation or a government-issued ID. Reddit says it will not see this data, only storing each user’s verification status along with the birth date they’ve (optionally) provided.

Bluesky has chosen Kids’ Web Services from Epic Games. The announcement says KWS accepts multiple options: payment cards, ID scans, and face scans. Users who decline to supply this information will be denied access to adult content and direct messaging. How much do I care about either? Would I rather just be a child to two-year-old Bluesky?

On older sites my adulthood ought to be obvious: I joined Twitter/X in 2008 and Reddit in 2015. Do the math, guys! I suppose there is a chance I could have created the account, forgotten it, and then revived it for a child (the “older brother problem”), but I’m not sure these third-party verifiers solve that either.

Everyone wants to protect children. But it doesn’t make sense to do it by creating a system that exposes everyone, including children, to new privacy risks. In its report on how to fix the OSA, the Open Rights Group argues that interoperability and portability should be first principles, and that users should be able to choose providers and methods. Today, the social media companies don’t see age verification data; in five years will they be buying up those providers? These first steps matter, as they are setting the template for what is to come.

This is the opening of a floodgate. On June 27 the US Supreme Court ruled in Free Speech Coalition v. Paxton to uphold a law requiring pornographic websites to verify users’ ages through government-issued ID. At TechDirt, Mike Masnick described the ruling as taking a chainsaw to the First Amendment.

It’s easy to predict that there will be scandals surrounding the data age verifiers collect, and others where technological failures let children access the wrong sort of content. We’ll hear less about the frustrations of people who are blocked by age verification from essential information. Meanwhile, child safety folks will continue pushing for new levels of control.

The big question is this: how will we know if it’s working? What does “success” look like?

***

At Platformer, Casey Newton covers Substack’s announcement that it has closed a $100 million series C funding round, valuing the company at $1.1 billion. The eight-year-old company gets to say it’s a unicorn.

Newton tries to understand how Substack is worth that. He predicts – logically – that its only choice to justify its venture capitalists’ investment will be rampant enshittification. These guys don’t put in that kind of money without expecting a full-bore return, which is why Newton is dubious about the founders’ promise to invest most of that newly-raised capital in creators. Recall the stages Cory Doctorow laid out: first they amass as many users as possible; then they abuse those users to amass as many business customers (advertisers) as possible; then they squeeze everyone.

Substack, which announced four months ago that it – or, more correctly, its creators – has more than 5 million paid subscriptions, is different in that its multi-sided market structure is more like Uber or Amazon Marketplace than like a social media site or traditional publisher. It has users (readers and listeners), creators (like Uber’s drivers or Amazon’s sellers), and customers (advertisers). Viewed that way, it’s easy to see Substack’s most likely path: raise prices (users and advertisers), raise thresholds and commissions (creators), and, like Amazon, force sellers (creators) into using fee-based additional services in order to stay afloat. Plus, it must crush the competition. See similar math from Anil Dash.

Less ponderable is the headwind of Substack’s controversial hospitality to extremists, noted in 2023 by Jonathan Katz at The Atlantic. Some creators – like Newton – have opted to leave for competitor Ghost, which is both open source and cheaper. Many friends refuse to pay Substack even when they want to support creators whose work they admire. At the time, Stephen Bush responded at the Financial Times that Substack should admit that it’s not a publisher but a “handy bit of infrastructure for sending newsletters”. Is that worth $1.1 billion?

Like earlier Silicon Valley companies, Substack is planning to reverse its previous disdain for advertising, as Benjamin Mullin and Jessica Testa report at the New York Times. The company is apparently also looking forward to embracing social networking.

So, no really new ideas, then?

Illustrations: Unicorn (by Pearson Scott Foresman via Wikimedia).


Conundrum

It took me six hours of listening to people with differing points of view discuss AI and copyright at a workshop, organized by the Sussex Centre for Law and Technology at the Sussex Humanities Lab (SHL), to come up with a question that seemed to me significant: what is all this talk about who “wins the AI race”? The US won the “space race” in 1969, and then for 50 years nothing happened.

Fretting about the “AI race”, an argument at least one participant used to oppose restrictions on using copyrighted data for training AI models, is buying into several ideas that are convenient for Big Tech.

One: there is a verifiable endpoint everyone’s trying to reach. That isn’t anything like today’s “AI”, which is a pile of math and statistics predicting the most likely answers to prompts. Instead, they mean artificial general intelligence, which would be as much like generative AI as I am like a mushroom.

Two: it’s a worthy goal. But is it? Why don’t we talk about the renewables race, the zero carbon race, or the sustainability race? All of those could be achievable. Why just this well-lobbied fantasy scenario?

Three: we should formulate public policy to eliminate “barriers” that might stop us from winning it. *This* is where we run up against copyright, a subject only a tiny minority used to care about, but that now affects everyone. And, accordingly, everyone has had time to formulate an opinion since the Internet first challenged the historical operation of intellectual property.

The law as it stands is clear: making a copy is the exclusive right of the rightsholder. This is the basis of AI-related lawsuits. For training data to escape that law, it would have to be granted an exemption: ruled fair use (as in the Anthropic and Meta cases), covered by an exception for temporary copies, or shoehorned into existing exceptions such as parody. Even then, copyright law is administered territorially, so the US may call it fair use but the rest of the world doesn’t have to agree. This is why the esteemed legal scholar Pamela Samuelson has said copyright law poses an existential threat to generative AI.

But, as one participant pointed out, although the entertainment industry dominates these discussions, there are many other sectors with different needs. Science, for example, both uses and studies AI, and is built on massive amounts of public funding. Surely that data should be free to access?

I wanted to be at this meeting because what should happen with AI, training data, and copyright is a conundrum. You do not have to work for a technology company to believe that there is value in allowing researchers both within and outwith companies to work on machine learning and build AI tools. When people balk at the impossible scale of securing permission from every copyright holder of every text, image, or sound, they have a point. The only organizations that could afford that are the companies we’re already mad at for being too big, rich, and powerful.

At the same time, why should we allow those big, rich, powerful companies to plunder our cultural domain without compensating anyone and extract even larger fortunes while doing it? To a published author who sees years of work reflected in a chatbot’s split-second answer to a prompt, it’s lost income and readers.

So for months, as Parliament has wrangled over the Data bill, the argument narrowed to copyright. Should there be an exception for data mining? Should technology companies have to get permission from creators and rights holders? Or should use of their work be automatically allowed, unless they opt out? All answers seem equally impossible. Technology companies would have to find every copyright holder of every datum to get permission. Licensing by the billion.

If creators must opt out, does that mean one piece at a time? How will they know when they need to opt out and who they have to notify? At the meeting, that was when someone said that the US and China won’t do this. Britain will fall behind internationally. Does that matter?

And yet, we all seemed to converge on this: copyright is the wrong tool. As one person said, technologies that threaten the entertainment industry always bring demands to tighten or expand copyright. See the last 35 years, in which Internet-fueled copying spawned the Digital Millennium Copyright Act and the EU Copyright Directive, and copyright terms expanded from 28 years, renewable once, to author’s life plus 70.

No one could suggest what the right tool would be. But there are good questions. Such as: how do we grant access to information? With business models breaking, is copyright still the right way to compensate creators? One of us believed strongly in the capabilities of collection societies – but these tend to disproportionately benefit the most popular creators, who will survive anyway.

Another proposed the highly uncontroversial idea of taxing the companies. Or levies on devices such as smartphones. I am dubious about this one: we have been there before.

And again, who gets the money? Very successful artists like Paul McCartney, who has been vocal about this? Or do we have a broader conversation about how to enable people to be artists? (And then, inevitably, who gets to be called an artist.)

I did not find clarity in all this. How to resolve generative AI and copyright remains complex and confusing. But I feel better about not having an answer.

Illustrations: Drunk parrot in a Putney garden (by Simon Bisson; used by permission).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Second sight

“The colors came back,” a friend said. He was talking about the change after he had cataract surgery. My clinician, a retired surgeon, said something similar, that patients come in and exclaim: “I can’t believe how blue the sky is!”

I didn’t have that.

Cataracts develop slowly, so many don’t perceive how bad their vision has become. I knew. Because: my cataracts made it progressively harder for opticians to fully correct my myopia, an effect I first noticed in early 2018. For the next five or six years, my prescription notched up, and I’d be able to see reasonably clearly for about six months after getting new glasses. The next six months I’d curse the lack of clarity. Repeat until late 2023, when they said I was “ready” for surgery. At that point, I was about 20/40 with glasses.

I delayed for a while, and then had my right eye done in March. The left eye is due in a couple of weeks. So I’ve had four months to explore the difference.

It has been fascinating. And very different from John Berger’s account, or James Thurber’s fuzzy few days without glasses in The Admiral on the Wheel.

I told the clinician I thought that even if I hadn’t been *seeing* colors exactly right I was interpreting them correctly. That turns out to be mostly true. The sky looks blue, or blue enough, and greens, reds, and yellows render fairly accurately.

This makes sense. The clinician says cataracts are generally believed to block blue light. “Isn’t it just a yellow cast over everything?” a friend asked. Not really.

To review: the primary colors of light are red, blue, and green. Red and blue make magenta; blue and green make cyan; green and red make yellow. Color printers print all colors using those three mixed colors, plus black: CMYK. It’s the difference between additive and subtractive color mixing – that is, starting with black (no color) and adding light, versus starting with white (all colors) and subtracting it. This seems weird at first encounter because schools teach the primary colors of mixing paints, which, as an increasing number of people are pointing out, is all wrong for the digital era.
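The additive versus subtractive relationship is easy to verify with a few lines of Python (a toy sketch: real color handling involves gamma correction and color spaces, which are ignored here):

```python
# Additive (light) primaries, as (R, G, B) tuples, 0-255 per channel.
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_light(a, b):
    """Additive mixing: combine light, capping each channel at 255."""
    return tuple(min(x + y, 255) for x, y in zip(a, b))

def subtract_from_white(color):
    """Subtractive complement: what's left when this light is removed from white."""
    return tuple(255 - c for c in color)

# Mixing pairs of light primaries gives the subtractive (printer's) primaries:
assert add_light(RED, BLUE) == (255, 0, 255)    # magenta
assert add_light(BLUE, GREEN) == (0, 255, 255)  # cyan
assert add_light(GREEN, RED) == (255, 255, 0)   # yellow

# And each subtractive primary is white minus one additive primary:
assert subtract_from_white(RED) == (0, 255, 255)    # cyan  = white - red
assert subtract_from_white(GREEN) == (255, 0, 255)  # magenta = white - green
```

The asserts all pass: the two mixing systems are mirror images of each other, which is exactly why paint-mixing intuition fails on screens.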

In real life, my biggest cataract-related color shift turns bright purple flowers a dead greyish pink. A friend’s bright lavender walls grey down. Given that the remaining cataract has continued to densify, my original assessment holds up: I wasn’t losing much color information. I knew my friend’s walls were lavender without being told.

The biggest difference for me is that opticians can fully correct my eyesight again. So the operation has made the world brighter, whiter, and brought back crisp focus. At a recent conference, I could sit in the back and read the slides for the first time in probably five years. Although: blue highlights on the metal chair frames from overhead spotlights disappeared when I closed my post-op eye. Fun!

But then I watched an episode of the TV show Hacks in which Jean Smart wears a bright gold and black dress (above). When I closed my right eye, it turned… salmon. My real-life sweater of nearly the identical color *does not do this*. It is clearly an artifact of cataract plus screen.

Using an RGB color generator, I can say my right eye sees the dress as close to 255-220-0. My left eye sees it as roughly 255-220-150. Bright orange (~255-153-51) on my laptop screen also notably shifts, to a medium hot pink (~255-153-150). In both cases the red and green channels hold steady while the blue channel rises, suggesting the shift is in the blue channel. Why?

Most other things look close to the same. Among the few exceptions: on an episode of Curb Your Enthusiasm, Larry David’s dark olive shirt looks grey with the pre-op eye, and Cheryl Hines’s pale yellow shirt turns almost white. Does this mean that eye is seeing more blue?

Fluorescent lighting also produces interesting artifacts: a bright lime green poster seen through the cataract seemed aqua.

There’s obviously a logical explanation for this; I just can’t quite work it out. Someone who understands the composition of these lighting conditions could doubtless easily explain what’s going on there (and I hope someone will!).

One final story. A couple of years ago, I saw a particularly stunning sunset out my loft window. Went to get the phone, and snapped a shot. I got back a pale, washed-out nothingburger. Went and got the better, more controllable, digital camera and tried again. Still washed out. Well, damn modern cameras and their autocorrection to what they think you should have seen. I knew about this, because in 2020, when Californians tried to take pictures of their wildfire-caused orange sky, they got grey. Bah.

Cut to April 2025. Same window. My left eye sees a really intense pink and orange sunset. My right eye…sees a washed-out nothingburger. *It wasn’t the camera.*

So, by next month I will have a fully sharp, crisp, bright world on both sides instead of a slightly dim fuzzball on one side. I will feel better balanced, and be better able to play tennis and bike. And I won’t go blind. But there’s a price. Because my post-op eye can’t do close-up the same way, I will grieve the loss of the superpower of being able to read the tiniest print unaided for the rest of my life. And I’ll lose the good sunsets.

Illustrations: Deborah Vance (Jean Smart), in Hacks (S03e01, “Just for Laughs”).

Addendum: With the pre-operative (left) eye, the purple and pink-ish flowers in this photo look the same color. The orangey flowers are slightly pinker, so *nearly*, but noticeably not, the same color.

Three groups of flowers: purple, purplish pink, and orange-pink (salmon).
My pre-operative eye sees all these flowers as about the same color. The orangey ones are most noticeably different, but still closer to pink than they really are.


Notes from Old Songs 2025

This is chiefly aimed at anyone who saw me at the Old Songs Folk Festival this past weekend. (If you missed it, better luck next year!)

The folk page on my website is here. There is a link on it to my page on open guitar tunings, which I intend to update with the extra tunings discussed at the workshop. For now, as Andy Cohen explained, Martin Carthy’s tuning was CGCDGA (you will need heavier strings on the top and bottom to avoid buzzing).

At the Friday night concert, Andy Cohen and I sang My Sweet Wyoming Home, by Bill Staines. It is (without Andy) on The Last Trip Home CD.

At the “You’ve Got to Be Kidding” workshop, I sang The Cowboy Fireman, by Harry McClintock; Old Zip Coon, traditional, which I learned from Michael Cooney; Cold, Blow, and the Rainy Night, learned from Planxty; and The Bionic Consumer, by Bill Steele.

At the ballad workshop, I sang Mary Hamilton, learned from Caroline Paton, who learned it from Hallie Wood; and Queen Amang the Heather, which I learned from a variety of Scottish singers, who learned it from Belle Stewart. Both of those are on The Last Trip Home CD.

At the “This Spoke to Me” workshop, I sang The Last Trip Home, written by Davy Steele; The Spirit of Mother Jones, written by Andy Irvine; and Griselda’s Waltz, written by Bill Steele. The Last Trip Home and Griselda’s Waltz are also on The Last Trip Home CD.

Great to see everyone and thanks for coming!

wg