Banning TikTok

Two days from now, TikTok may go dark in the US. Nine months ago, in April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act, which bans TikTok unless its Chinese owner, ByteDance, divests it by January 19, 2025.

Last Friday, January 10, the US Supreme Court heard three hours of arguments in consolidated challenges filed by TikTok and a group of TikTok users: TikTok, Inc. v. Garland and Firebaugh v. Garland. Too late?

As a precaution, Kinling Lo and Viola Zhou report at Rest of World, at least some of TikTok’s 170 million American users are building community arks elsewhere – the platform Xiaohongshu (“RedNote”), for one. This is not the smartest choice; it, too, is Chinese, and could become subject to the same national security concerns, like the other Chinese apps Makena Kelly reports at Wired are scooping up new US users. Ashley Belanger reports at Ars Technica that rumors say the Chinese are thinking of segregating these American newcomers.

“The Internet interprets censorship as damage, and routes around it,” EFF founder and activist John Gilmore told Time Magazine in 1993. He meant Usenet, which could get messages past individual server bans, but it’s really more a statement about Internet *users*, who will rebel against bans. That for sure has not changed despite the more concentrated control of the app ecosystem. People will access each other by any means necessary. Even *voice* calls.

PAFACA bans apps from four “foreign adversaries to the United States” – China, Russia, North Korea, and Iran. That being the case, Xiaohongshu/RedNote is not a safe haven. The law just hasn’t noticed this hitherto unknown platform yet.

The law’s passage in April 2024 was followed in early May by TikTok’s legal challenge. Because of the imminent sell-by deadline, the case was fast-tracked, landing in the US Court of Appeals for the District of Columbia Circuit in early December. That court upheld the law and rejected both TikTok’s constitutional challenge and its request for an injunction staying enforcement until the constitutional claims could be fully reviewed by the Supreme Court. TikTok appealed that decision, and so last week here we were. This case is separate from Free Speech Coalition v. Paxton, which SCOTUS heard *this* week, and which challenges Texas’s 2023 age verification law (H.B. 1181); that one could have even further-reaching Internet effects.

Here it gets silly. Incoming president Donald Trump, who originally initiated the ban but was blocked by the courts on constitutional grounds, filed an amicus brief arguing that any ban should be delayed until after he’s taken office on Monday because he can negotiate a deal. NBC News reports that the outgoing Biden administration is *also* trying to stop the ban and, per Sky News, won’t enforce it if it takes effect.

Previously, both guys wanted a ban, but I guess now they’ve noticed that, as Mike Masnick says at Techdirt, it makes them look out of touch to nearly half the US population. In other words, they moved from “Oh my God! The kids are using *TikTok*!” to “Oh, my God! The kids are *using* TikTok!”

The court transcript shows that TikTok’s lawyers made three main arguments. One: TikTok is now incorporated in the US, and the law is “a burden on TikTok’s speech”. Two: PAFACA is content-based, in that it selects types of content to which it applies (user-generated) and ignores others (reviews). Three: the US government has “no valid interest in preventing foreign propaganda”. Therefore, the government could find less restrictive alternatives, such as banning the company from sharing sensitive data. In answer to questions, TikTok’s lawyers claimed that the US’s history of banning foreign ownership of broadcast media is not relevant because it was a response to spectrum scarcity. The government’s lawyers countered with national security: the Chinese government could manipulate TikTok’s content and use the data it collects for espionage.

Again: the Chinese can *buy* piles of US data just like anyone else. TikTok does what Silicon Valley does. Pass data privacy laws!

Experts try to read the court. Amy Howe at SCOTUSblog says the justices seemed divided, but overall likely to issue a swift decision. At This Week in Google and Techdirt, Cathy Gellis says the proceedings have left her “cautiously optimistic” that the court will not undermine the First Amendment, a feeling seemingly echoed by some of the panel of experts who liveblogged the proceedings.

The US government appears to have tied itself up in knots: SCOTUS may uphold a Congressionally legislated ban that neither the old nor the new administration now wants, that half the population resents, and that won’t solve the US’s pervasive privacy problems. Lost on most Americans is the irony that the rest of the world has complained for years that under the PATRIOT Act foreign subsidiaries of US companies are required to send data to US intelligence. This is why Max Schrems keeps winning cases under GDPR.

So, to wrap up: the ban doesn’t solve the problem it purports to solve, and it’s not the least restrictive possibility. On the other hand, national security? The only winner may be, as Jason Koebler writes at 404Media, Mark Zuckerberg.

Illustrations: Logo of Douyin, ByteDance’s Chinese version of TikTok.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The lost Internet

As we open 2025 it would be traditional for an Old Internet Curmudgeon to rhapsodize about the good, old days of the 1990s, when the web was open, snark flourished at sites like suck.com, no one owned social media (that is, Usenet and Internet Relay Chat), and even the spam was relatively harmless.

But that’s not the period I miss right now. By “lost” I mean the late 2000s, when we shifted from an Internet of largely unreliable opinions to an Internet full of fact-based sites you could trust. This was the period during which Wikipedia (created 2001) grew up, and Open Street Map (founded 2004) was born, joining earlier sites like the Internet Archive (founded 1996) and Snopes (1994). In that time, Google produced useful results, blogs flourished (before Twitter killed them), and if you asked on Twitter for advice on where to find a post box near a point in Liverpool you’d get correct answers straight to your mobile phone.

Today, so far: I can’t get a weather app to stop showing the location I was at last week and show the location I’m at this week. Basically, the app is punishing me for not turning on location tracking. The TV remote at my friend’s house doesn’t fully work and she doesn’t know why or how to fix it; she works around it with a second remote whose failings are complementary. No calendar app works as well as the software I had 1995-2001 (it synced! without using a cloud server and third-party account!). At the supermarket, the computer checkout system locked up. It all adds up to a constant white noise of frustration.

We still have Wikipedia, Open Street Map, Snopes, and the Internet Archive. But this morning a Mastodon user posted that their ten-year-old says you can’t trust Google any more: “It just returns ‘a bunch of madeup stuff’.” When ten-year-olds know your knowledge product sucks…

If generative AI were a psychic we’d call what it does cold reading.

At his blog, Ed Zitron has published a magnificent, if lengthy, rant on the state of technology. “The rot economy”, he calls it, and says we’re all victims of constant low-level trauma. Most of his complaints will be familiar: the technologies we use are constantly shifting and mostly for the worse. My favorite line: “We’re not expected to work out ‘the new way to use a toilet’ every few months because somebody decided we were finishing too quickly.”

Pause to remember nostalgically 2018, when a friend observed that technology wasn’t exciting any more, and 2019, when many more people thought the Internet was no longer “fun”. Those were happy days. Now we are being overwhelmed with stuff we actively don’t want in our lives. Even hacked Christmas lights sound miserable for the neighbors.

***

I have spent some of these holidays editing a critique of Ofcom’s regulatory plans under the Online Safety Act (we all have our own ideas about holidays), and one thing seems clear: the splintering Internet is only going to get worse.

Yesterday, firing up Chrome because something didn’t work in Firefox, I saw a fleeting popup to the effect that because I may not be over 18 there are search results Google won’t show me. I don’t think age verification is in force in the Commonwealth of Pennsylvania – US states keep passing bills, but they keep hitting legal challenges.

Age verification has been “imminent” in the UK for so long – it was originally included in the Digital Economy Act 2017 – that it seems hard to believe it may actually become a reality. But: sites within the Act’s scope will have to complete an “illegal content risk assessment” by March 16. So the fleeting popup felt like a visitation from the Ghost of Christmas Future.

One reason age verification was dropped back then – aside from the distractions of Brexit – was that the mechanisms for implementing it were all badly flawed – privacy-invasive, ineffective, or both. I’m not sure they’ve improved much. In 2022, France’s data protection watchdog checked them out: “CNIL finds that such current systems are circumventable and intrusive, and calls for the implementation of more privacy-friendly models.”

I doubt Ofcom can square this circle, but the costs of trying will include security, privacy, freedom of expression, and constant technological friction. Bah, humbug.

***

Still, one thing is promising: the rise of small, independent media outlets who are doing high-quality work. Joining established efforts like nine-year-old The Ferret, ten-year-old Bristol Cable, and five-year-old Rest of World are year-and-a-half-old 404 Media and newcomer London Centric. 404 Media, formed by four journalists formerly at Vice’s Motherboard, has been consistently making a splash since its founding; this week Jason Koebler reminds us that Elon Musk’s proactive willingness to unlock the blown-up Cybertruck in Las Vegas and provide comprehensive data on where it had been, including video from charging stations, without warrant or court order, could apply to any Tesla customer at any time. Meanwhile, in its first three months London Centric’s founding journalist, Jim Waterson, has published pieces on the ongoing internal mess at Transport for London resulting from the August cyberattack and on bicycle theft in the capital. Finally, if you’re looking for high-quality American political news, veteran journalist Dan Gillmor curates it for you every day in his Cornerstone of Democracy newsletter.

The corporate business model of journalism is inarguably in trouble, but journalism continues.

Happy new year.

Illustrations: The Marx Brothers in their 1929 film, The Cocoanuts, newly released into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Non-playing characters

It’s the most repetitive musical time of the year. Stores have been torturing their staff with an endlessly looping soundtrack of the same songs – in some cases since August. Even friends are playing golden Christmas oldies from the 1930s to 1950s.

Once upon a time – within my lifetime, in fact – stores and restaurants were silent. Into that silence came Muzak. I may be exaggerating: Wikipedia tells me the company dates to 1934. But it feels true.

The trend through all those years has been toward turning music into a commodity and pushing musicians into the poorly paid background by rerecording “for hire” to avoid paying royalties, among other tactics.

That process has now reached its nadir with the revelation by Liz Pelly at Harper’s Magazine that Spotify has taken to filling its playlists with “fake” music – that is, music created at scale by production companies and assigned to “ghost artists” who don’t really exist. For users looking for playlists of background music, it’s good enough; for Spotify it’s far more lucrative than streaming well-known artists who must be paid royalties (even at greatly reduced rates from the old days of radio).

Pelly describes the reasoning behind the company’s “Perfect Fit Content” program this way: “Why pay full-price royalties if users were only half listening?” This is music as lava lamp.

And you thought AI was going to be the problem. But no, the problem is not the technology, it’s the business model. At The New Yorker, Hua Hsu ruminates on Pelly’s forthcoming book, Mood Machine, in terms of opportunity costs: what music are we not hearing as artists desperate to make a living divert their efforts to conform to today’s data-driven landscape? I was particularly struck by Hsu’s data point that Spotify has stopped paying royalties on tracks that are streamed fewer than 1,000 times in a year. From those who have little, everything is taken.

The kind of music I play – traditional and traditional-influenced contemporary – is the opposite of all this. Except for a brief period in the 1960s (“the folk scare”), folk musicians made our own way. We put out our own albums long before it became fashionable, and sold from the stage because we had to. If the trend continues, most other musicians will either become like us or be non-playing characters in an industry that couldn’t exist without them.

***

The current Labour government is legislating the next stage of reforming the House of Lords: the remaining 92 hereditary peers are to be ousted. This plan is a mere twig compared to Keir Starmer’s stated intention in 2020 and 2022 to abolish it entirely. At the Guardian, Simon Jenkins is dissatisfied: remove the hereditaries, sure, but, “There is no mention of bishops and donors, let alone Downing Street’s clothing suppliers and former secretaries. For its hordes of retired politicians, the place will remain a luxurious club that makes the Garrick [club] look like a greasy spoon.”

Jenkins’ main question is the right one: what do you replace the Lords with? It is widely known among the sort of activists who testify in Parliament that you get deeper and more thoughtful questions in the Lords than you ever do in the Commons. Even if you disagree with members like Big Issue founder John Bird and children’s rights campaigner and filmmaker Beeban Kidron, or even the hereditary Earl of Erroll, who worked in the IT industry and has been a supporter of digital rights for years, it’s clear they’re offering value. Yet I’d be surprised to see them stand for election, and as a result it’s not clear that a second wholly elected chamber would be an upgrade.

With change afoot, it’s worth calling out the December 18 Lords Grand Committee debate on the data bill. I tuned in late, just in time to hear Kidron and Timothy Clement-Jones dig into AI and UK copyright law. This is the Labour plan to create an exception to copyright law so AI companies can scrape data at will to train their models. As Robert Booth writes at the Guardian, there has been, unsurprisingly, widespread opposition from the creative sector. Among other naysayers, Kidron compared the government’s suggested system to asking shopkeepers to “opt out of shoplifters”.

So they’re in this ancient setting, wearing modern clothes, using the – let’s call it – *vintage* elocutionary styling of the House of Lords…and talking intelligently and calmly about the iniquity of vendors locking schools into expensive contracts for software they don’t need, and AI companies’ growing disregard for robots.txt. Awesome. Let’s keep that, somehow.

***

In our 20 years of friendship I never knew that John “JI” Ioannidis, who died last month, had invented technology billions of people use every day. As a graduate student at Columbia, where he received his PhD in 1993, in work technical experts have called “transformative”, Ioannidis solved the difficult problem of forwarding Internet data to devices moving around from network to network: Mobile IP, in other words. He also worked on IPSec, trust management, and prevention of denial of service attacks.

“He was a genius,” says one of his colleagues, and “severely undercredited”. He is survived by his brother and sister, and an infinite number of friends who went for dim sum with him. RIP.

Illustrations: Cartoon by veteran computer programmer Jef Poskanzer. Used by permission.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Blue

The influx onto Bluesky noted here last week continues apace: the site has added a million users a day for more than a week, gradually slowing down from a peak of 12 new users a second, per the live counter.

These are not lurkers. Suddenly, the site feels like Twitter circa 2009/2010, when your algorithm-free feed was filled with interesting people sharing ideas, there were no ads, and abuse was in its infancy. People missing in action for the last year or two are popping up; others I’ve wished would move off exTwitter so I could stop following them there have suddenly joined. Mastodon is also seeing an uptick, and (I hear) Threads continues to add users without, for me, adding interest to match…. I doubt this diaspora is all “liberals”, as some media have it – or if they are, it won’t be long before politicians and celebrities note the action is elsewhere and rush to stay relevant.

It takes a long time for a social medium to die if it isn’t killed by a corporation. Even after this week’s bonanza, Bluesky’s entire user base fits inside 5% of exTwitter, which still has around 500 million users as of September, about half of them active daily. What matters most are *posters*, who are about 10% or less of any social site’s user base. When they leave, engagement plummets, as shown in a 2017 paper in Nature.
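
For scale, here is a quick back-of-the-envelope sketch using the rough figures above (all of them public estimates, not measurements):

```python
# Back-of-the-envelope arithmetic with the rough figures quoted above.
# These are public estimates, not measurements.
extwitter_users = 500_000_000          # exTwitter, as of September
daily_active = extwitter_users // 2    # about half active daily
posters = extwitter_users // 10        # posters: ~10% of any site's user base

five_percent = extwitter_users * 5 // 100
print(f"5% of exTwitter:  {five_percent:,} users")   # 25,000,000
print(f"Daily actives:    {daily_active:,}")         # 250,000,000
print(f"Posters, at most: {posters:,}")              # 50,000,000
```

If even Bluesky’s bonanza week leaves it inside that 5% line, the thing to watch is the much smaller poster population, not the headline user counts.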

An example in action: at Statnews, Katie Palmer reports that the science and medical community is adopting Bluesky.

I have to admit to some frustration over this: why not Mastodon? As retro-fun as this week on Bluesky has been, the problem noted here a few weeks ago of Bluesky’s venture capital funding remains. Yes, the company is incorporated as a public benefit company – but venture capitalists want exit strategies and return on investment. That tension looms.

Mastodon is a loose collection of servers that all run the same software, which in turn is written to the open protocol ActivityPub. Gergely Orosz has published deep-dive looks at Bluesky’s development and culture; the goal was to write a new open protocol, AT, that would allow Bluesky, similarly, to federate with others. There is already a third-party bit of software, Bridgy, that provides interoperability among Bluesky, any system based on ActivityPub (“the Fediverse”, of which Mastodon is a subset), and the open web (such as blogs). For the moment, though, Bluesky remains the only site running its AT protocol, so the more users Bluesky adds, the more it feels like a platform rather than a protocol. And platforms can change according to the whims of their owners – which is exactly what those leaving exTwitter are escaping. So: why not Mastodon, which doesn’t have that problem?
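
To make “open protocol” concrete: anyone can read a Mastodon server’s public timeline over its documented REST API, no API key or developer agreement required. A minimal sketch in Python (mastodon.social is just an example server):

```python
# Minimal sketch: read a Mastodon server's public timeline via its REST API.
# Public data needs no API key; mastodon.social is just an example server.
import json
import urllib.request

url = "https://mastodon.social/api/v1/timelines/public?limit=3"
with urllib.request.urlopen(url) as response:
    statuses = json.load(response)

for status in statuses:
    # Each status carries the author's handle and a link to the post.
    print(status["account"]["acct"], "->", status["url"])
```

Nothing equivalent is guaranteed on a closed platform, where the same data flows only at the owner’s pleasure.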

In an exchange on Bluesky, Palmer said that those who mentioned it said they found Mastodon “too difficult to figure out”.

It can’t be the thing itself; typing and sending varies little from site to site. The problem has to be the initial uncertainty about choosing a server, which for most people is far too much heavy lifting. What you really want is for institutions to set up their own servers so you can simply sign up at one you already trust. This is what the BBC and the German government have done, and it has a significant advantage: posting from an address on that server automatically verifies the poster as an authentic staffer. NPR simply found a server and opened an account, like I did when I joined Mastodon in 2019.

All that said, how Mastodon administrators will cope with increasing usage and resulting costs also remains an open question as discussed here last year.

So: some advice as you settle into your new online home:

– Plan for the site’s eventual demise. “On the Internet your home will always leave you” (I have lost the source of this quote). Every site, no matter how big and fast-growing it is now, or how much you all love it…assume that at some point in the future it will either die of outmoded business model (AOL forums); get bought and closed down (Television without Pity, CompuServe, Geocities); become intolerable because of cultural change (exTwitter); or be abandoned because the owner loses interest (countless blogs and comment boards). Plan for that day. Collect alternative means of contacting the people you come to know and value. Build multiple connections.

– Watch the data you’re giving the site. No one in 2007, when I joined Twitter, imagined their thousands of tweets would become fodder for a large language model to benefit one of the world’s richest multi-billionaires.

– If you are (re)building an online community for an organization, own that community. Use social media, by all means, but use it to encourage people to visit the organization’s website, or join its fully-controlled mailing list or web board. Otherwise, one day, when things change, you will have to start over from scratch, and may not even know who your members are or how to reach them.

– Don’t worry too much about the “filter bubble”, as John Elledge writes. Studies generally agree social media users encounter more, and more varied, sources of news than others. As he says, only journalists have to read widely among people whose views they find intolerable (see also the late, great Molly Ivins).

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Follow the business models

In the kind of market economists fantasize about, rational consumers could communicate their preferences for “smart” or “dumb” objects by exercising purchasing power. Instead, everything from TVs and vacuum cleaners to cars is sprouting Internet connections and rampant data collection.

I would love to believe we will grow out of this phase as the risks of this approach continue to become clearer, but I doubt it, because business models will increasingly insist on the post-sale money, which never existed in the analog market. Subscriptions to specialized features and embedded ads seem likely to take over everything. Essentially, software can change the business model governing any object’s manufacture into Gillette’s famous gambit: sell the razors cheap, and make the real money selling razor blades. See also in particular printer cartridges. It’s going to be everywhere, and we’re all going to hate it.

***

My consciousness of the old ways is heightened at the moment because I spent last weekend participating in a couple of folk music concerts around my old home town, Ithaca, NY. Everyone played acoustic instruments and sang old songs to celebrate 58 years of the longest-running folk music radio show in North America. Some of us hadn’t really met for nearly 50 years. We all look older, but everyone sounded great.

A couple of friends there operate a “rock shop” outside their house. There’s no website, there’s no mobile app, just a table and some stone wall with bits of rock and other findings for people to take away if they like. It began as an attempt to give away their own small collection, but it seems the clearing space aspect hasn’t worked. Instead, people keep bringing them rocks to give away – in one case, a tray of carefully laid-out arrowheads. I made off with a perfect, peach-colored conch shell. As I left, they were taking down the rock shop to make way for fantastical Halloween decorations to entertain the neighborhood kids.

Except for a brief period in the 1960s, playing folk music has never been lucrative. However, it’s even harder now: teens buy CDs to ensure they can keep their favorite music, and older people buy CDs because they still play their old collections. But you can’t even *give* a 45-year-old a CD, because they have no way to play it. At the concert, Mike Agranoff highlighted musicians’ need for support in an ecosystem that now pays them just $0.014 (his number) for streaming a track.
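
To put that per-stream rate in perspective, a little arithmetic using Agranoff’s $0.014 figure (actual rates vary by service and contract):

```python
# How many streams must a musician rack up at $0.014 per stream?
# Sketch based on the per-stream figure quoted above; real rates vary.
per_stream = 0.014

for target in (15, 100, 1000):  # roughly: a CD sale, a gig fee, a month's rent
    streams = target / per_stream
    print(f"${target}: about {streams:,.0f} streams")
# $15: ~1,071 streams; $100: ~7,143; $1000: ~71,429
```

Selling one CD from the stage is worth about a thousand streams.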

***

With both Halloween and the US election scarily imminent, the government the UK elected in July finally got down to its legislative program this week.

Data protection reform is back in the form of the Data Use and Access Bill, Lindsay Clark reports at The Register, saying the bill is intended to improve efficiency in the NHS, the police force, and businesses. It will involve making changes to the UK’s implementation of the EU’s General Data Protection Regulation. Care is needed to avoid putting the UK’s adequacy decision at risk. At the Open Rights Group, Mariano della Santi warns that the bill weakens citizens’ protection against automated decision making. At medConfidential, Sam Smith details the lack of safeguards for patient data.

At Computer Weekly, Bill Goodwin and Sebastian Klovig Skelton outline the main provisions and hopes: improve patient care, free up police time to spend more protecting the public, save money.

‘Twas ever thus. Every computer system is always commissioned to save money and improve efficiency – they say this one will save 140,000 hours a year of NHS staff time! Every new computer system also always brings unexpected costs in time and money and messy stages of implementation and adaptation during which everything becomes *less* efficient. There are always hidden costs – in this case, likely the difficulties of curating data and remediating historical bias. An easy prediction: these will be non-trivial.

***

Also pending is the draft United Nations Convention Against Cybercrime; the goal is to get it through the General Assembly by the end of this year.

Human Rights Watch writes that 29 civil society organizations have written to the EU and member states asking them to vote against the treaty’s adoption and consider alternative approaches that would safeguard human rights. The EFF is encouraging all states to vote no.

Internet historians will recall that there is already a convention on cybercrime, sometimes called the Budapest Convention. Drawn up in 2001 by the Council of Europe and in force since 2004, it was signed by 70 countries and ratified by 68. The new treaty, drafted by a much broader range of countries, including Russia and China, is meant to be consistent with that older agreement. The hope, however, is that it will achieve the global acceptance its predecessor did not, in part because of that broader range of drafters.

However, opponents are concerned that the treaty is vague, failing to limit its application to crimes that can only be committed via a computer, and lacks safeguards. It’s understandable that law enforcement agencies, faced with the kinds of complex attacks on computer systems we see today, want their path to international cooperation eased. But, as EFF writes, that eased cooperation should not extend to “serious crimes” whose definition and punishment are left up to individual countries.

Illustrations: Halloween display seen near Mechanicsburg, PA.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Web We Weave

The Web We Weave
By Jeff Jarvis
Basic Books
ISBN: 9781541604124

Sometime in the very early 1990s, someone came up to me at a conference and told me I should read the work of Robert McChesney. When I followed the instruction, I found a history of how radio and TV started as educational media and wound up commercially controlled. Ever since, this is the lens through which I’ve watched the Internet develop: how do we keep the Internet from following that same path? If all you look at is the last 30 years of web development, you might think we can’t.

A similar mission animates retired CUNY professor Jeff Jarvis in his latest book, The Web We Weave. In it, among other things, he advocates reanimating the open web by reviving the blogs many abandoned when Twitter came along and embracing other forms of citizen media. Phenomena such as disinformation, misinformation, and other harms attributed to social media, he writes, have precursor moral panics: novels, comic books, radio, TV, all were once new media whose evils older generations fretted about. (For my parents, it was comic books, which they completely banned while ignoring the hours of TV I watched.) With that past in mind, much of today’s online harms regulation leaves him skeptical.

As a media professor, Jarvis is interested in the broad sweep of history, setting social media into the context that began with the invention of the printing press. That has its benefits when it comes to later chapters where he’s making policy recommendations on what to regulate and how. Jarvis is emphatically a free-speech advocate.

Among his recommendations are those such advocates typically support: users should be empowered, educated, and taught to take responsibility, and we should develop business models that support good speech. Regulation, he writes, should include the following elements: transparency, accountability, disclosure, redress, and behavior rather than content.

On the other hand, Jarvis is emphatically not a technical or design expert, and therefore has little to say about the impact on user behavior of technical design decisions. Some things we know are constants. For example, the willingness of (fully identified) online communicators to attack each other was noted as long ago as the 1980s, when Sara Kiesler studied the first corporate mailing lists.

Others, however, are not. Those developing Mastodon, for example, deliberately chose not to implement the ability to quote and comment on a post because they believed that feature fostered abuse and pile-ons. Similarly, Lawrence Lessig pointed out in 1999 in Code and Other Laws of Cyberspace (PDF) that you couldn’t foment a revolution using AOL chatrooms because they had a limit of 23 simultaneous users.

Understanding the impact of technical decisions requires experience, experimentation, and, above all, time. If you doubt this, read Mike Masnick’s series at Techdirt on Elon Musk’s takeover and destruction of Twitter. His changes to the verification system alone have undermined the ability to understand who’s posting and decide how trustworthy their information is.

Jarvis goes on to suggest we should rediscover human scale and mutual obligation, both crucial as the covid pandemic progressed. The money will always favor mass scale. But we don’t have to go that way.

A hole is a hole

We told you so.

By “we” I mean thousands of privacy advocates, human rights activists, technical experts, and information security journalists.

By “so”, I mean: we all said repeatedly over decades that there is no such thing as a magic hole that only “good guys” can use. If you build a supposedly secure system but put in a hole to give the “authorities” access to communications, that hole can and will be exploited by “bad guys” you didn’t want spying on you.

The particular hole Chinese hackers used to spy on the US is the Communications Assistance for Law Enforcement Act (1994). CALEA mandates that telecommunications providers design their equipment so that they can wiretap any customer if law enforcement presents a warrant. At Techcrunch, Zack Whittaker recaps much of the history, tracing technology giants’ new emphasis on end-to-end encryption to the 2013 Snowden revelations of the government’s spying on US citizens.

The mid-1990s were a time of profound change for telecommunications: the Internet was arriving, exchanges were converting from analog to digital, and deregulation was providing new competition for legacy telcos. In those pre-broadband years, hundreds of ISPs offered dial-up Internet access. Law enforcement could no longer just call up a single central office to place a wiretap. When CALEA was introduced, critics were clear and prolific; for an in-depth history see Susan Landau and Whit Diffie’s book, Privacy on the Line (originally published 1998, second edition 2007). The net.wars archive includes a compilation of years of related arguments, and at Techdirt, Mike Masnick reviews the decades of law enforcement insistence that they need access to encrypted text. “Lawful access” is the latest term of art.

In the immediate post-9/11 shock, some of those who insisted on the 1990s version of today’s “lawful access” – key escrow – took the opportunity to tell their opponents (us) that the attacks proved we’d been wrong. One such was the just-departed Jack Straw, the home secretary from 1997 to (June) 2001, who blamed BBC Radio Four and “…large parts of the industry, backed by some people who I think will now recognise they were very naive in retrospect”. That comment sparked the first net.wars column. We could now say, “Right back atcha.”

Whatever you call an encryption backdoor, building a hole into communications security was, is, and will always be a dangerous idea, as the Dutch government recently told the EU. Now, we have hard evidence.

***

The time is long gone when people used to be snobbish about Internet addresses (see net.wars-the-book, chapter three). Most of us are therefore unlikely to have thought much about the geekishly popular “.io”. It could be a new-fangled generic top-level domain – but it’s not. We have been reading linguistic meaning into what is in fact a country code. Which is all fine and good, except that the country it belongs to is the Chagos Islands, also known as the British Indian Ocean Territory, which I had never heard of until the British government announced recently that it will hand the islands back to Mauritius (instead of asking the Chagos Islanders what they want…). Gareth Edwards made the connection: when that transfer happens, .io will cease to exist (h/t Charles Arthur’s The Overspill).

Edwards goes on to discuss the messy history of orphaned country code domains such as Yugoslavia’s and the Soviet Union’s. As a result, ICANN, the naming authority, now has strict rules that mandate termination in such cases. This time, there’s a lot at stake: .io is a favorite among gamers, crypto companies, and many others, some of them substantial businesses. Perhaps a solution – such as setting .io up anew as a gTLD with its domains intact – will be created. But meantime, it’s worth noting that the widely used .tv (Tuvalu), .fm (Federated States of Micronesia), and .ai (Anguilla) are *also* country code domains.
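
The dependency is easy to flag mechanically: ICANN reserves all two-letter top-level domains for country codes, so a simple length check shows which fashionable domains rest on some country’s continued existence. A quick sketch (the domains are illustrative):

```python
# ICANN reserves every two-letter top-level domain for country codes,
# so a length check flags domains tied to a country's fate.
domains = ["example.io", "twit.tv", "last.fm", "example.ai", "example.com"]

for domain in domains:
    tld = domain.rsplit(".", 1)[-1]
    kind = "country code (ccTLD)" if len(tld) == 2 else "generic (gTLD)"
    print(f"{domain}: .{tld} is a {kind}")
```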

***

The story of what’s going on with Automattic, the owner of the blogging platform WordPress.com, and WP Engine, which provides hosting and other services for businesses using WordPress, is hella confusing. It’s also worrying: WordPress, which is open source content management software overseen by the WordPress Foundation, powers a little over 40% of the Internet’s top ten million websites and more than 60% of sites overall (including this one).

At Heise Online, Kornelius Kindermann offers one of the clearer explanations: Automattic, whose CEO, Matthew Mullenweg, is also a director of the WordPress Foundation and a co-creator of the software, wants WP Engine, which has been taken over by the investment company Silver Lake, to pay “trademark royalties” of 8% to the WordPress Foundation to support the software. WP Engine doesn’t wanna. Kindermann estimates the sum involved at $35 million. After the news of all that broke, 159 employees announced they were leaving Automattic.

The more important point is that, like the users of the encrypted services governments want to compromise, the owners of .io domains, or, ultimately, the Chagos Islanders themselves, WP Engine’s customers, some of them businesses worth millions, are hostages to uncertainty surrounding the decisions of others. Open source software is supposed to give users greater control. But as always, complexity brings experts and financial opportunities, and once there’s money everyone wants some of it.

Illustrations: View of the Chagos Archipelago taken during ISS Expedition 60 (NASA, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Beware the duck

Once upon a time, “convergence” was a buzzword. That was back in the days when audio was on stereo systems, television was on a TV, and “communications” happened on phones that weren’t computers. The word has disappeared back into its former usage pattern, but it could easily be revived to describe what’s happening to content as humans dive into using generative tools.

Put another way: roughly this time last year, the annual technology/law/pop culture conference Gikii was awash in (generative) AI. That bubble is deflating, but in the experiments that nonetheless continue, a new topic more worthy of attention is emerging: artificial content. It’s striking because what happens at this gathering, which mines all types of popular culture for cues to serious ideas, is often a good guide to what’s coming next in futurelaw.

That no one dared guess which of Zachary Cooper‘s pair of near-identical audio clips was AI-generated and which human-performed (one had more static?) was only a starting point. Cooper’s main point: “If you can’t tell which clip is real, then you can’t decide which one gets copyright.” Right, because only human creations are eligible (although fake bands can still scam Spotify).

Cooper’s brief, wild tour of the “generative music underground” included using AI tools to create songs whose content is at odds with their genre, whole generated albums built by a human producer making thousands of tiny choices, and the new genre “gencore”, which exploits the qualities of generated sound (Cher and Autotune on steroids). Voice cloning, instrument cloning, audio production plugins, “throw in a bass and some drums”….

Ultimately, Cooper said, “The use of generative AI reveals nothing about the creative relationship to work; it destabilizes the international market by having different authorship thresholds; and there’s no means of auditing any of it.” Instead of uselessly trying to enforce different rights predicated on the use or non-use of a specific set of technologies, he said, we should tackle directly the challenges new modes of production pose to copyright. Precursor: the battles over sampling.

Soon afterwards, Michael Veale was showing us Civitai, an Idaho-based site offering open source generative AI tools, including fine-tuned models. “Civitai exists to democratize AI media creation,” the site explains. “Everything has a valid legal purpose,” Veale said, but the way capabilities can be retrained and chained together to create composites makes it hard to tell which tools, if any, should be taken down, even for creators (see also the puzzlement as Redditors try to work this out). Even environmental regulation can’t help, as one attendee suggested: unlike large language models, these smaller, fine-tuned models (as Jon Crowcroft and I surmised last year would be the future) are efficient; they can run on a phone.

Even without adding artificial content there is always an inherent conflict when digital meets an analog spectrum. This is why, Andy Phippen said, the threshold of 18 for buying alcohol and cigarettes turns into a real threshold of 25 at retail checkouts. Both software and humans fail at determining over- or under-18, and retailers fear liability. Online age verification as promoted in the Online Safety Act will not work.

If these blurred lines strain the limits of current legal approaches, others expose gaps in the law. Andrea Matwyshyn, for example, has been studying parallels I’ve also noticed between early 20th century company towns and today’s tech behemoths’ anti-union, surveillance-happy working practices. As a result, she believes that regulatory authorities need to start closely considering the impact of data aggregation when companies merge, and looking for company town-like dynamics.

Andelka Phillips parodied the overreach of app contracts by imagining the EULA attached to “ThoughtReader app”. A sample clause: “ThoughtReader may turn on its service at any time. By accepting this agreement, you are deemed to accept all monitoring of your thoughts.” Well, OK, then. (I also had a go at this here, 19 years ago.)

Emily Roach toured the history of fan fiction and the law to end up at Archive of Our Own, a “fan-created, fan-run, nonprofit, noncommercial archive for transformative fanworks, like fanfiction, fanart, fan videos, and podfic”, the idea being to ensure that the work fans pour their hearts into has a permanent home where it can’t be arbitrarily deleted by corporate owners. The rules are strict: not so much as a “buy me a coffee” tip link that could lead to a court-acceptable claim of commercial use.

History, the science fiction writer Charles Stross has said, is the science fiction writer’s secret weapon. Also at Gikii: Miranda Mowbray unearthed the 18th century “Digesting Duck” automaton built by Jacques de Vaucanson. It was a marvel that appeared to ingest grain and defecate waste and that in its day inspired much speculation about the boundaries between real and mechanical life. Like the amazing ancient Greek automata before it, it was, of course, a purely mechanical fake – it stored the grain in a small compartment and released pellets from a different compartment – but today’s humans confused into thinking that sentences mean sentience could relate.

Illustrations: One onlooker’s rendering of his (incorrect) idea of the interior of Jacques de Vaucanson’s Digesting Duck (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Sectioned

Social media seems to be having a late-1990s moment, raising flashbacks to the origins of platform liability and the passage of Section 230 of the Communications Decency Act (1996). It’s worth making clear at the outset: most of the people talking about S230 seem to have little understanding of what it is and does. It allows sites to moderate content without becoming liable for it. It is what enables all those trust and safety teams to implement sites’ restrictions on acceptable use. When someone wants to take an axe to it because there is vile content circulating, they have not understood this.

So, in one case this week a US appeals court is allowing a lawsuit to proceed that seeks to hold TikTok liable for users’ postings of the “blackout challenge”, the idea being to get an adrenaline rush by reviving from near-asphyxiation. Bloomberg reports that at least 20 children have died trying to accomplish this, at least 15 of them age 12 or younger (TikTok, like all social media, is supposed to be off-limits to under-13s). The people suing are the parents of one of those 20, a ten-year-old girl who died attempting the challenge.

The other case is that of Pavel Durov, CEO of the messaging service Telegram, who has been arrested in France as part of a criminal investigation. He has been formally charged with complicity in managing an online platform “in order to enable an illegal transaction in organized group” and with refusing to cooperate with law enforcement authorities; he has been ordered not to leave France, with bail set at €5 million (is that enough to prevent the flight of a billionaire with four passports?).

While there have been many platform liability cases, there are relatively few examples of platform owners and operators being charged. The first was in 1997, back when “online” still had a hyphen; the German general manager of CompuServe, Felix Somm, was arrested in Bavaria on charges of “trafficking in pornography”. That is, German users of Columbus, Ohio-based CompuServe could access pornography and illegal material on the Internet through the service’s gateway. In 1998, Somm was convicted and given a two-year suspended sentence. In 1999 his conviction was overturned on appeal, partly, the judge wrote, because there was no technology at the time that would have enabled CompuServe to block the material.

The only other example I’m aware of came just this week, when an Arizona judge sentenced Michael Lacey, co-founder of the classified ads site Backpage.com, to five years in prison and fined him $3 million for money laundering. He still faces further charges for prostitution facilitation and money laundering; allegedly he profited from a scheme to promote prostitution on his site. Two other previously convicted Backpages executives were also sentenced this week to ten years in prison.

In Durov’s case, the key point appears to be his refusal to follow industry practice with respect to reporting child sexual abuse material or cooperate with properly executed legal requests for information. You don’t have to be a criminal to want the social medium of your choice to protect your privacy from unwarranted government snooping – but equally, you don’t have to be innocent to be concerned if billionaire CEOs of large technology companies consider themselves above the law. (See also Elon Musk, whose X platform may be tossed out of Brazil right now.)

Some reports on the Durov case have focused on encryption, but the bigger issue appears to be failure to register to use encryption, as Signal has. More important, although Telegram is often talked about as encrypted, it’s really more like other social media, where groups are publicly visible, and only direct one-on-one messages are encrypted. But even then, they’re only encrypted if users opt in. Given that users notoriously tend to stick with default settings, the percentage of users who turn that encryption on is probably tiny. So it’s not clear yet whether France is seeking to hold Durov responsible for the user-generated content on his platform (which S230 would protect in the US), or accusing him of being part of criminal activity relating to his platform (which it wouldn’t).

Returning to the TikTok case, in allowing the lawsuit to go ahead, the appeals court judgment says that S230 has “evolved away from its original intent”, and argues that because TikTok’s algorithm served up the challenge on the child’s “For You” page, the service can be held responsible. At Techdirt, Mike Masnick blasts this reasoning, saying that it overturns numerous other court rulings upholding S230, and uses the same reasoning as the 1995 decision in Stratton Oakmont v. Prodigy. That was the case that led directly to the passage of S230, introduced by then-Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR), who are still alive to answer questions about their intent. Rather than evolving away, we’ve evolved back full circle.

The rise of monopolistic Big Tech has tended to obscure the more important point about S230. As Cory Doctorow writes for EFF, killing S230 would kill the small federated communities (like Mastodon and Discord servers) and web boards that offer alternatives to increasing Big Tech’s power. While S230 doesn’t apply outside the US (some Americans have difficulty understanding that other countries have different laws), its ethos is pervasive and the companies it’s enabled are everywhere. In the end, it’s like democracy: the alternatives are worse.

Illustrations: Drunken parrot in Putney (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Gather ye lawsuits while ye may

Most of us howled with laughter this week when the news broke that Elon Musk is suing companies for refusing to advertise on his exTwitter platform. To be precise, Musk is suing the World Federation of Advertisers, Unilever, Mars, CVS, and Ørsted in a Texas court.

How could Musk, who styles himself a “free speech absolutist”, possibly force companies to advertise on his site? This is pure First Amendment stuff: both the right to free speech (or to remain silent) and freedom of assembly. It adds to the nuttiness of it all that last November Musk was telling advertisers to “go fuck yourselves” if they threatened him with a boycott. Now he’s mad because they responded in kind.

Does the richest man in the world even need advertisers to finance his toy?

At Techdirt, Mike Masnick catalogues the “so much stupid here”.

The WFA initiative that offends Musk is the Global Alliance for Responsible Media, which develops guidelines for content moderation – things like a standard definition for “hate speech” to help sites operate consistent and transparent policies and reassure advertisers that their logos don’t appear next to horrors like the livestreamed shooting in Christchurch, New Zealand. GARM’s site says: membership is voluntary, following its guidelines is voluntary, it does not provide a rating service, and it is apolitical.

Pre-Musk, Twitter was a member. After Musk took over, he pulled exTwitter out of it – but rejoined a month ago. Now, Musk claims that refusing to advertise on his site might be a criminal matter under RICO. So he’s suing himself? Blink.

Enter US Republicans, who are convinced that content moderation exists only to punish conservative speech. On July 10, the House Judiciary Committee, under the leadership of Jim Jordan (R-OH), released an interim report on its ongoing investigation of GARM.

The report says GARM appears to “have anti-democratic views of fundamental American freedoms” and likens its work to restraint of trade. Among specific examples, it says GARM recommended that its members stop advertising on exTwitter, threatened Spotify when podcaster Joe Rogan told his massive audience that young, healthy people don’t need to be vaccinated against covid, and considered blocking news sites such as Fox News, Breitbart, and The Daily Wire. In addition, the report says, GARM advised its members to use fact-checking services like NewsGuard and the Global Disinformation Index “which disproportionately label right-of-center news sites as so-called misinformation”. Therefore, the report concludes, GARM’s work is “likely illegal under the antitrust laws”.

I don’t know what a court would have made of that argument – for one thing, GARM can’t force anyone to follow its guidelines. But now we’ll never know. Two days after Musk filed suit, the WFA announced it’s shuttering GARM immediately because it can’t afford to defend the lawsuit and keep operating even though it believes it’s complied with competition rules. Such is the role of bullies in our public life.

I suppose Musk can hope that advertisers decide it’s cheaper to buy space on his site than to fight the lawsuit?

But it’s not really a laughing matter. GARM is just one of a number of initiatives that’s come under attack as we head into the final three months of campaigning before the US presidential election. In June, Renee DiResta, author of the new book Invisible Rulers, announced that her contract as the research manager of the Stanford Internet Observatory was not being renewed. Founding director Alex Stamos was already gone. Stanford has said the Observatory will continue under new leadership, but no details have been published. The Washington Post says conspiracy theorists have called DiResta and Stamos part of a government-private censorship consortium.

Meanwhile, one of the Observatory’s projects, a joint effort with the University of Washington called the Election Integrity Partnership, has announced, in response to various lawsuits and attacks, that it will not work on the 2024 or future elections. At the same time, Meta is shutting down CrowdTangle next week, removing a research tool that journalists and academics use to study content on Facebook and Instagram. While CrowdTangle will be replaced with Meta Content Library, access will be limited to academics and non-profits, and those who’ve seen it say it’s missing useful data that was available through CrowdTangle.

The concern isn’t the future of any single initiative; it’s the pattern of these things winking out. As work like DiResta’s has shown, the flow of funds financing online political speech (including advertising) is dangerously opaque. We need access and transparency for those who study it, and in real time, not years after the event.

In this, as so much else, the US continues to clash with the EU, which accused the US in December of breaching its rules with respect to disinformation, transparency, and extreme content. Last month, it formally charged Musk’s site with violating the Digital Services Act, for which Musk could be liable for a fine of up to 6% of exTwitter’s global revenue. Among the EU’s complaints is the lack of a searchable and reliable advertisement repository – again, an important element of the transparency we need. The site’s handling of disinformation and calls to violence during the current UK riots may be added to the investigation.

Musk will be suing *us*, next.

Illustrations: A cartoon caricature of Christina Rossetti by her brother Dante Gabriel Rossetti 1862, showing her having a tantrum after reading The Times’ review of her poetry (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.