Non-playing characters

It’s the most repetitive musical time of the year. Stores have been torturing their staff with an endlessly looping soundtrack of the same songs – in some cases since August. Even friends are playing golden Christmas oldies from the 1930s to 1950s.

Once upon a time – within my lifetime, in fact – stores and restaurants were silent. Into that silence came Muzak. I may be exaggerating: Wikipedia tells me the company dates to 1934. But it feels true.

The trend through all those years has been toward turning music into a commodity and pushing musicians into the poorly paid background by rerecording “for hire” to avoid paying royalties, among other tactics.

That process has now reached its nadir with the revelation by Liz Pelly at Harper’s Magazine that Spotify has taken to filling its playlists with “fake” music – that is, music created at scale by production companies and assigned to “ghost artists” who don’t really exist. For users looking for playlists of background music, it’s good enough; for Spotify it’s far more lucrative than streaming well-known artists who must be paid royalties (even at greatly reduced rates from the old days of radio).

Pelly describes the reasoning behind the company’s “Perfect Fit Content” program this way: “Why pay full-price royalties if users were only half listening?” This is music as lava lamp.

And you thought AI was going to be the problem. But no, the problem is not the technology, it’s the business model. At The New Yorker, Hua Hsu ruminates on Pelly’s forthcoming book, Mood Machine, in terms of opportunity costs: what is the music we’re not hearing as artists desperate to make a living divert their efforts to conform to today’s data-driven landscape? I was particularly struck by Hsu’s data point that Spotify has stopped paying royalties on tracks that are streamed fewer than 1,000 times in a year. From those who have little, everything is taken.

The kind of music I play – traditional and traditional-influenced contemporary – is the opposite of all this. Except for a brief period in the 1960s (“the folk scare”), folk musicians made our own way. We put out our own albums long before it became fashionable, and sold from the stage because we had to. If the trend continues, most other musicians will either become like us or be non-playing characters in an industry that couldn’t exist without them.

***

The current Labour government is legislating the next stage of reforming the House of Lords: the remaining 92 hereditary peers are to be ousted. This plan is a mere twig compared to Keir Starmer’s stated intention in 2020 and 2022 to abolish it entirely. At the Guardian, Simon Jenkins is dissatisfied: remove the hereditaries, sure, but, “There is no mention of bishops and donors, let alone Downing Street’s clothing suppliers and former secretaries. For its hordes of retired politicians, the place will remain a luxurious club that makes the Garrick [club] look like a greasy spoon.”

Jenkins’ main question is the right one: what do you replace the Lords with? It is widely known among the sort of activists who testify in Parliament that you get deeper and more thoughtful questions in the Lords than you ever do in the Commons. Even if you disagree with members like Big Issue founder John Bird and children’s rights campaigner and filmmaker Beeban Kidron, or even the hereditary Earl of Erroll, who worked in the IT industry and has been a supporter of digital rights for years, it’s clear they’re offering value. Yet I’d be surprised to see them stand for election, and as a result it’s not clear that a second wholly elected chamber would be an upgrade.

With change afoot, it’s worth calling out the December 18 Lords Grand Committee debate on the data bill. I tuned in late, just in time to hear Kidron and Timothy Clement-Jones dig into AI and UK copyright law. This is the Labour plan to create an exception to copyright law so AI companies can scrape data at will to train their models. As Robert Booth writes at the Guardian, there has been, unsurprisingly, widespread opposition from the creative sector. Among other naysayers, Kidron compared the government’s suggested system to asking shopkeepers to “opt out of shoplifters”.

So they’re in this ancient setting, wearing modern clothes, using the – let’s call it – *vintage* elocutionary styling of the House of Lords…and talking intelligently and calmly about the iniquity of vendors locking schools into expensive contracts for software they don’t need, and AI companies’ growing disregard for robots.txt. Awesome. Let’s keep that, somehow.

***

In our 20 years of friendship I never knew that John “JI” Ioannidis, who died last month, had invented technology billions of people use every day. As a graduate student at Columbia, where he received his PhD in 1993, in work technical experts have called “transformative”, Ioannidis solved the difficult problem of forwarding Internet data to devices moving around from network to network: Mobile IP, in other words. He also worked on IPSec, trust management, and prevention of denial of service attacks.
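The core idea behind Mobile IP can be sketched in a few lines: a "home agent" on the device’s home network keeps a binding from its permanent home address to its current "care-of address," and tunnels traffic onward as the device roams. This toy sketch is purely illustrative (it is nothing like the actual wire format or registration protocol Ioannidis worked on); all names here are invented for the example.

```python
# Toy illustration of Mobile IP's central idea: the home agent keeps a
# binding from a device's permanent "home address" to its current
# "care-of address" and redirects traffic accordingly.

bindings = {}  # home address -> current care-of address

def register(home_addr, care_of_addr):
    """Called when the mobile device attaches to a new network."""
    bindings[home_addr] = care_of_addr

def forward(packet):
    """Home agent redirects packets addressed to a registered home address."""
    dest = packet["dst"]
    if dest in bindings:
        # In real Mobile IP this is an encapsulated tunnel, not a field.
        return {**packet, "tunnel_to": bindings[dest]}
    return packet  # not a mobile node; deliver normally

register("10.0.0.5", "192.168.7.20")  # device roams to a new network
pkt = forward({"src": "1.2.3.4", "dst": "10.0.0.5"})
```

The point of the design, and the hard part Ioannidis solved, is that correspondents keep using the stable home address while the device moves.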

“He was a genius,” says one of his colleagues, and “severely undercredited”. He is survived by his brother and sister, and an infinite number of friends who went for dim sum with him. RIP.

Illustrations: Cartoon by veteran computer programmer Jef Poskanzer. Used by permission.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Government identification as a service

This week, the clock started ticking on the UK’s Online Safety Act. Ofcom, the regulator charged with enforcing it, published its codes of practice and guidance, which come into force on March 17, 2025. At that point, websites that fall into scope – in Ofcom’s 2023 estimate 150,000 of them – must comply with requirements to conduct risk assessments, preemptively block child sexual abuse material, register a responsible person (who faces legal and financial liability), and much more.

Almost immediately, the first casualty made itself known: Dee Kitchen announced the closure of her site, which supports hundreds of interest-based forums. Under Ofcom’s risk assessment guidance (PDF), she concluded, the personal liability would be overwhelming even if the forums produced enough in donations to cover the costs of compliance.

Russ Garrett has a summary for small sites. UK-linked blogs – even those with barely any readers – could certainly fit the definition per Ofcom’s checker tool, if users can comment on each other’s posts. Common sense says that’s ridiculous in many cases…but as Kitchen says, all it takes to ruin a blogger’s life is a malicious complainant wielding the OSA as their weapon.

Kitchen will certainly not be alone in concluding the requirements are prohibitively risky for web forums and bulletin boards that are run by volunteers and have minimal funding. Yet they are the Internet’s healthy social ecology, without the algorithms and business models that do most to create the harms the Act is meant to address. Promising Trouble and Power to Change are collaborating on a community of practice, and have asked Ofcom for a briefing on compliance for volunteers and small sites.

Garrett’s summary also points out that Ofcom’s rules leave it wide open for sites to censor *more* than is required, and many will do exactly that to minimize their risk. A side effect, as Garrett writes, will be to further centralize the Net, as moving communities to larger providers such as Discord will shift the liability onto *them*. This is what happens when rules controlling speech are written from the single lens of preventing harm rather than starting from a base of human rights.

More guidance to come from Ofcom next month. We haven’t even started on implementing age verification yet.

***

On Monday, I learned a new term I wish I hadn’t: “government identity as a service”. GIAAS?

The speaker was human rights campaigner Edward Hasbrouck, in a talk on identification on Dave Farber’s and Dan Gillmor’s weekly CCRC/IP-Asia Zoom call.

Most people trace the accelerating rise of demands for identification in countries like the US and UK to 9/11. Based on that, there are now people old enough to drink in a US state who are not aware it was ever possible to simply walk up and fly, get a hotel room, or enter an office. As Hasbrouck writes in a US election day posting, the rise in government demands for ID has been powered by the simultaneous rise of corporate tracking for commercial purposes. He calls it a “malign convergence of interest”.

It has long been obvious that anything companies collect can be subpoenaed by governments. Hasbrouck’s point, however, is that identification enables control as well as surveillance; it brings watchlists, blocklists, and automated bars to freedom of action – it makes us decision subjects as Gavin Freeguard said at the recent Foundation for Information Policy Research event.

Hasbrouck pinpoints three components that each present a vulnerability to control: identification, logging, decision making. As an example, consider the UK’s in-progress eVisa system, in which the government confirms an individual’s visa status online in real time with no option for physical documentation. This gives the government enormous power to stop individuals from doing vital but mundane things like rent a home, board an aircraft, or get a job. At its heart is identification – and a law that delegates border enforcement to myriad civil intermediaries and normalizes these checks.

Many in the UK were outraged by proposals to give the Department of Work and Pensions the power to examine people’s bank accounts. In the US, Hasbrouck points to a recent report from the House Judiciary Committee on the Weaponization of the Federal Government that documents the Treasury Department’s Financial Crimes Enforcement Network’s collaboration with the FBI to push banks to submit reports of suspicious activity while it trawled for possible suspects after the January 6 insurrection. Yes, the rioters should be caught and punished; but any weapon turned against people we don’t like can also be turned against us. Did anyone vote to let the FBI conduct financial surveillance by the million?

Now imagine that companies outsource ID checks to the government and offload the risk of running their own. That is how the no-fly list works. That’s how airlines operate *now*. GIAAS.

Then add the passive identification that systems like facial recognition are spreading. You can no longer reliably know whether you have been identified and logged, who gets that information, or what hidden decision they may make based on it. Few of us are sure of our rights in any situation, and few of us even ask why. In his slides (PDF), Hasbrouck offers a list of ways to fight back. He has hope.

Illustrations: Edward Hasbrouck at CPDP in 2017.


Loose ends

Privacy technologies typically fail for one of two reasons: 1) they’re too complicated and/or expensive to find widespread adoption among users; 2) sites and services ignore, undermine, or bypass them in order to preserve their business model. In the first category are numerous encryption-related attempts to secure communications; they failed repeatedly in the marketplace, usually because the resulting products were too technically difficult for most users, and so never found mass adoption. In the end, encrypted messaging didn’t really take off until WhatsApp built it into its service.

This week saw a category two failure: Mozilla announced it is removing the Do Not Track option from Firefox’s privacy settings. DNT is simple enough to implement if you can stand to check and change settings, but it falls on the wrong side of modern business models and, other than in California, the US has no supporting legislation to make it enforceable. Granted, Firefox is a minority browser now, but the moment feels significant for this 13-year-old technology.

As Kevin Purdy explains at Ars Technica, DNT began as an FTC proposal, based on work by Christopher Soghoian and Sid Stamm, that aimed to create a mechanism for the web similar to the “Do Not Call” list for telephone networks.

The world in which DNT seemed a hopeful possibility seems almost quaint now: then, one could still imagine that websites might voluntarily respect the signal web browsers sent indicating users’ preferences. Do Not Call, by contrast, was established by US federal legislation. Despite various efforts, the US failed to pass legislation underpinning DNT, and it never became a web standard. The closest it has come to the latter is Section 2.12 of the W3C’s Ethical Web Principles, which says, “People must be able to change web pages according to their needs.” Can I say I *need* to not be tracked?
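The mechanism itself was trivially simple: a browser with DNT enabled added the header `DNT: 1` to every request, and a cooperating site was supposed to skip tracking in response. Here is a minimal sketch of that honor-system check; the helper names are invented for illustration, and the whole point of the column is that nothing forced servers to run code like this.

```python
def respects_dnt(headers):
    """Return True if the request carries the Do Not Track signal.

    Browsers with DNT enabled sent the header "DNT: 1". Honoring it
    was always voluntary on the server side - the weakness that
    ultimately killed the mechanism.
    """
    return headers.get("DNT") == "1"

def choose_tracking(headers):
    # A compliant site would skip analytics and ad cookies when DNT is set.
    return "no-tracking" if respects_dnt(headers) else "tracking"

print(choose_tracking({"DNT": "1"}))  # no-tracking
print(choose_tracking({}))            # tracking
```

Everything after the header arrives is up to the site, which is why DNT only ever worked on trust.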

Even at the time it seemed doubtful that web companies would comply. But it also suffered from unfortunate timing. DNT arrived just as the twin onslaught of smartphones and social media was changing the ethos that built the open web. Since then, as Cory Doctorow wrote earlier this year, the incentives have aligned to push web browsers to become faithless user agents, and conventions mean less and less.

Ultimately, DNT only ever worked insofar as users could trust websites to honor their preference. As it’s become clear they can’t, ad blockers have proliferated, depriving sites of ad revenue they need to survive. Had DNT been successful, perhaps we’d have all been better off.

***

Also on the way out this week are Cruise’s San Francisco robotaxis. My last visit to San Francisco, about a year ago, was the first time I saw these in person. Most of the ones I saw were empty Waymos, perhaps in transit to a passenger, perhaps just pointlessly clogging the streets. Around then, a Cruise robotaxi ran over a pedestrian who’d been hit by another car and then dragged her 20 feet. San Francisco promptly suspended Cruise’s license. Technology critic Paris Marx thought the incident would likely be Cruise’s “death knell”. And so it’s proving. The announcement from GM, which acquired Cruise in 2016 for $1 billion, leaves just Waymo standing in the US self-driving taxi business, with Tesla saying it will enter the market late next year.

I always associate robotaxis with Vernor Vinge‘s 2006 novel Rainbows End. In it, Vinge imagined a future in which robotaxis arrived within minutes of being hailed and replaced both public transport and private car ownership. By 2012 or so, his fictional imagining had become real-life projection, and many were predicting that our streets would imminently be filled with self-driving cars, taxis or not. In 2017, the conversation was all about what ethics to program into them and reclaiming urban space. Now, that imagined future seems to be receding, as skeptics predicted it would.

***

American journalism has long operated under the presumption that the stories it produces should be “neutral”. Now, at the LA Times, CEO Patrick Soon-Shiong thinks he can enforce this neutrality by running an AI-based “bias meter” over the paper’s stories. If you remember, in the late stages of the US presidential election, Soon-Shiong blocked the paper from endorsing Kamala Harris. Reports say that the bias meter, due out next month, is meant to identify any bias the story’s source has and then deliver “both sides” of that story.

This is absurd. Few news stories have just two competing sides. A biased source can’t be countered by rewriting the story unless you include more sources and points of view, which means additional research. Most important, AI can’t think.

But readers can. And so what this story says is that Soon-Shiong doesn’t trust either the journalists who work for him or the paper’s readers to draw the conclusions he wants. If he knew more about journalism, he’d know that readers generally don’t adopt opinions just because someone tells them to. The far greater power, I recall reading years ago, lies in determining what readers *think about* by deciding what topics are important enough to cover. There’s bias there, too, but Soon-Shiong’s meter won’t show it.

Illustrations: Dominic Wilcox’s concept driverless sleeper car, 2014.


Playing monopoly

If you were going to carve up today’s technology giants to create a more competitive landscape, how would you do it? This time the game’s for real. In August, US District Judge Amit Mehta ruled that, “Google is a monopolist and has acted as one to maintain its monopoly.” A few weeks ago, the Department of Justice filed preliminary proposals (PDF) for remedies. These may change before the parties reassemble in court next April.

Antitrust law traditionally aimed to ensure competition in order to create both a healthy business ecosystem and better serve consumers. “Free” – that is, pay-with-data – online services have resisted antitrust analysis, which for decades has judged success by lowered consumer prices.

It’s always tempting to think of breaking monopolists up into business units. For example, a key moment in Meta’s march to huge was its purchase of Instagram (2012) and WhatsApp (2014), turning baby competitors into giant subsidiaries. In the EU, that permission was based on a promise, which Meta later broke, not to merge the three companies’ databases. Separating them back out again to create three giant privacy-invading behemoths in place of one is more like the sorcerer’s apprentice than a win.

In the late 1990s case against Microsoft, which ended in settlement, many speculated about breaking it up into Baby Bills. The key question: create clones or divide up the Windows and office software?

In 2013, at ComputerWorld, Gregg Keizer asked experts to imagine the post-Microsoft-breakup world. Maybe the office software company would have ported its products onto the iPad. Maybe the clones would eventually have diverged, and one would have dominated search. Keizer’s experts generally agreed, though, that the antitrust suit itself had its effects, slowing the company’s forward progress by making it fear provoking further suits, like IBM before it.

In Google’s case, the key turning point was likely the 2007-2008 acquisition of online advertising pioneer DoubleClick. Google was then ten years old and had been a public company for almost four years. At its IPO Wall Street pundits were dismissive, saying it had no customer lock-in and no business model.

Reading Google’s 2008 annual report is an exercise in nostalgia. Amid an explanation of contextual advertising, Google says it has never spent much on marketing because the quality of its products generated word of mouth momentum worldwide. This was all true – then.

At the time, privacy advocates opposed the DoubleClick merger. Both FTC and EU regulators raised concerns, but let it go ahead to become the heart of the advertising business Susan Wojcicki and Sheryl Sandberg built for Google. Despite growing revenues from its cloud services business, most of Google’s revenues still come from advertising.

Since then, Mehta ruled, Google cemented its dominance by paying companies like Apple, Samsung, and Verizon to make its search engine the default on the devices they make and/or sell. Further, Google’s dominance – 90% of search – allows it to charge premium rates for search ads, which in turn enhances its financial advantage. OK, one of those complaining competitors is Microsoft, but others are relative minnows like 15-year-old DuckDuckGo, which competes on privacy, buys TV ads, and hasn’t cracked 1% of the search market. Even Microsoft’s Bing, at number two, has less than 4%. Google can insist that it’s just that good, but complaints that its search results are degrading are everywhere.

Three aspects of the DoJ’s proposals seized the most attention: first, forcing Google to divest itself of the Chrome browser; second, if that’s not enough, forcing it to divest the Android mobile operating system; and third, a block on paying other companies to make Google search the default. The last risks crippling Mozilla and Firefox, and would dent Apple’s revenues, but not really harm Google. Saving $26.3 billion (the 2021 figure) can’t be *all* bad.

At The Verge, Lauren Feiner summarizes the DoJ’s proposals. At the Guardian, Dan Milmo notes that the DoJ also wants Google to be barred from buying or investing in search rivals, query-based AI, or adtech – no more DoubleClicks.

At Google’s blog, chief legal officer Kent Walker calls the proposals “a radical interventionist agenda”. He adds that it would chill Google’s investment in AI, as if this were a bad thing, when – hello! – a goal is ensuring a competitive market in future technologies. (It could even be a good thing generally.)

Finally, Walker claims divesting Chrome and/or Android would endanger users’ security and privacy and frets that it would expose Americans’ personal search queries to “unknown foreign and domestic companies”. Adapting a line from the 1980 movie Hopscotch, “You mean, Google’s methods of tracking are more humane than the others?” While relaying DuckDuckGo’s senior vice-president’s similar reaction, Ars Technica’s Ashley Belanger dubs the proposals “Google’s nightmare”.

At Techdirt, Mike Masnick favors DuckDuckGo’s idea of forcing Google to provide access to its search results via an API so competitors can build services on top, as his company does with Bing. Masnick wants users to become custodians and exploiters of their own search histories. Finally, at Pluralistic, Cory Doctorow likes spinning out – not selling – Chrome. End adtech surveillance, he writes, don’t democratize it.

It’s too early to know what the DoJ will finally recommend. If nothing is done, however, Google will be too rich to fear future lawsuits.

Illustrations: Mickey Mouse as the sorcerer’s apprentice in Fantasia (1940).
