Blue

The exodus onto Bluesky noted here last week continues apace: the site’s added a million users a day for more than a week, gradually slowing down from 12 new users a second, per the live counter.

These are not lurkers. Suddenly, the site feels like Twitter circa 2009/2010, when your algorithm-free feed was filled with interesting people sharing ideas, there were no ads, and abuse was in its infancy. People missing in action for the last year or two are popping up; others I’ve wished would move off exTwitter so I could stop following them there have suddenly joined. Mastodon is also seeing an uptick, and (I hear) Threads continues to add users without, for me, adding interest to match… I doubt this diaspora is all “liberals”, as some media have it – or if they are, it won’t be long before politicians and celebrities note the action is elsewhere and rush to stay relevant.

It takes a long time for a social medium to die if it isn’t killed by a corporation. Even after this week’s bonanza, Bluesky’s entire user base fits inside 5% of exTwitter, which still has around 500 million users as of September, about half of them active daily. What matters most are *posters*, who are about 10% or less of any social site’s user base. When they leave, engagement plummets, as shown in a 2017 paper in Nature.

An example in action: at Statnews, Katie Palmer reports that the science and medical community is adopting Bluesky.

I have to admit to some frustration over this: why not Mastodon? As retro-fun as this week on Bluesky has been, the problem noted here a few weeks ago of Bluesky’s venture capital funding remains. Yes, the company is incorporated as a public benefit company – but venture capitalists want exit strategies and return on investment. That tension looms.

Mastodon is a loose collection of servers that all run the same software, which in turn is written to the open protocol ActivityPub. Gergely Orosz has published deep-dive looks at Bluesky’s development and culture; the goal was to write a new open protocol, AT, that would allow Bluesky, similarly, to federate with others. There is already a third-party bit of software, Bridgy, that provides interoperability among Bluesky, any system based on ActivityPub (“the Fediverse”, of which Mastodon is a subset), and the open web (such as blogs). For the moment, though, Bluesky remains the only site running its AT protocol, so the more users Bluesky adds, the more it feels like a platform rather than a protocol. And platforms can change according to the whims of their owners – which is exactly what those leaving exTwitter are escaping. So: why not Mastodon, which doesn’t have that problem?

In an exchange on Bluesky, Palmer said that those who mentioned it said they found Mastodon “too difficult to figure out”.

It can’t be the thing itself; typing and sending varies little. The problem has to be the initial uncertainty about choosing a server. What you really want is for institutions to set up their own servers, so that you simply sign up there. For most institutions, though, that’s far too much heavy lifting. Still, this is what the BBC and the German government have done, and it has a significant advantage: posting from an address on that server automatically verifies the poster as an authentic staffer. NPR simply found a server and opened an account, like I did when I joined Mastodon in 2019.

All that said, how Mastodon administrators will cope with increasing usage and resulting costs also remains an open question as discussed here last year.

So: some advice as you settle into your new online home:

– Plan for the site’s eventual demise. “On the Internet your home will always leave you” (I have lost the source of this quote). Every site, no matter how big and fast-growing it is now, or how much you all love it – assume that at some point in the future it will either die of an outmoded business model (AOL forums); get bought and closed down (Television without Pity, CompuServe, Geocities); become intolerable because of cultural change (exTwitter); or be abandoned because the owner loses interest (countless blogs and comment boards). Plan for that day. Collect alternative means of contacting the people you come to know and value. Build multiple connections.

– Watch the data you’re giving the site. No one in 2007, when I joined Twitter, imagined their thousands of tweets would become fodder for a large language model to benefit one of the world’s richest multi-billionaires.

– If you are (re)building an online community for an organization, own that community. Use social media, by all means, but use it to encourage people to visit the organization’s website, or join its fully-controlled mailing list or web board. Otherwise, one day, when things change, you will have to start over from scratch, and may not even know who your members are or how to reach them.

– Don’t worry too much about the “filter bubble”, as John Elledge writes. Studies generally agree social media users encounter more, and more varied, sources of news than others. As he says, only journalists have to read widely among people whose views they find intolerable (see also the late, great Molly Ivins).

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

What’s next

“It’s like your manifesto promises,” Bernard Woolley (Derek Fowlds) tells eponymous minister Jim Hacker (Paul Eddington) in Antony Jay’s and Jonathan Lynn’s Yes, Minister. “People *understand*.” In other words, people know your election promises aren’t real.

The current US president-elect is impulsive and chaotic, and there will be resistance. So it’s reasonable to assume that at least some of his pre-election rhetoric will remain words and not deeds. There is, however, no telling which parts. And: the chaos is the point.

At Ars Technica, Ashley Belanger considers the likely impact of the threatened 60% tariffs on Chinese goods and 20% from everywhere else: laptop prices could double, games consoles could go up 40%, and smartphones 26%. Friends want to stockpile coffee, tea, and chocolate.

Also at Ars Technica, Benj Edwards predicts that the new administration will quickly reverse Joe Biden’s executive order regulating AI development.

At his BIG Substack, Matt Stoller predicts a wave of mergers following three years of restrictions. At TechDirt, Karl Bode agrees, with special emphasis on media companies and an order of enshittification on the side. At Hollywood Reporter, similarly, Alex Weprin reports that large broadcast station owners are eagerly eying up local stations, and David Zaslav, CEO of merger monster Warner Brothers Discovery, tells Georg Szalai that more consolidation would provide “real positive impact”. (As if.)

Many predict that current Federal Communications Commissioner Brendan Carr will be promoted to FCC chair. Carr set out his agenda in his chapter of Project 2025, as the Benton Institute for Broadband and Society reports. His policies, Jon Brodkin writes at Ars Technica, include reforming Section 230 of the Communications Decency Act and dropping consumer protection initiatives. John Hendel warned in October at Politico that the new FCC chair could also channel millions of dollars to Elon Musk for his Starlink satellite Internet service, a possibility the FCC turned down in 2023.

Also on Carr’s list is punishing critical news organizations. Donald Trump’s lawyers began before the election with a series of complaints, as Lachlan Cartwright writes at Columbia Journalism Review. The targets: CBS News for 60 Minutes, the New York Times, Penguin Random House, Saturday Night Live, the Washington Post, and the Daily Beast.

Those of us outside the US will be relying on the EU to stand up to parts of this through the AI Act, Digital Markets Act, Digital Services Act, and GDPR. Enforcement will be crucial. The incoming US administration may well resist that enforcement. The UK will have to pick a side.

***

It’s now two years since Elon Musk was forced to honor his whim of buying Twitter, and much of what he and others said would happen…hasn’t. Many predicted system collapse or a major hack. Instead, despite mass departures for other sites, the hollowed-out site has survived technically while degrading in every other way that matters.

Other than rebranding to “X”, Musk has failed to deliver many of the things he was eagerly talking about when he took over. A helpful site chronicles these: a payments system, a content moderation council, a billion more users. X was going to be the “everything app”. Nope.

This week, the aftermath of the US election and new terms of service making user data fodder for AI training have sparked a new flood of departures. This time round there’s consensus: they’re going to Bluesky.

It’s less clear what’s happening with the advertisers who supply the platform’s revenues, which the now-private company no longer has to disclose. Since Musk’s takeover, reports have consistently said advertisers are leaving. Now, the Financial Times reports (unpaywalled, Ars Technica) they are plotting their return, seeking to curry favor given Musk’s influence within the new US administration – and perhaps escaping the lawsuit he filed against them in August. Even so, it will take a lot to rebuild. The platform’s valuation is currently estimated at $10 billion, down from the $44 billion Musk paid.

This slash-and-burn approach is the one Musk wants to take to the Department of Government Efficiency (DOGE, as in Dogecoin; groan). Musk’s list of desired qualities for DOGE volunteers – no pay, long hours, “super” high IQ – recalls Dominic Cummings in January 2020, when he was Boris Johnson’s most-favored adviser and sought super-talented weirdos to remake the UK government. Cummings was gone by November.

***

It says something about the madness of the week that the sanest development appears to be that The Onion has bought Infowars, the conspiracy theory media operation through which Alex Jones promoted, alongside vitamins and supplements, many conspiracy theories, most notoriously the utterly false claim that the Sandy Hook school shootings were a hoax. The sale was part of a bankruptcy auction held to raise the funds Jones owes to the families of the slaughtered Sandy Hook children after losing to them in court in a $1.4 billion defamation case. Per the New York Times, the purchase was sanctioned by the Sandy Hook families. The Onion will relaunch the site in its own style with funding from Everytown for Gun Safety. There may not be a god, but there is an onion.

Illustrations: The front page of The Onion, showing the news about its InfoWars purchase.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The master switch

In his 2010 book, The Master Switch, Columbia law professor Tim Wu quotes the television news pioneer Fred W. Friendly, who wrote in a 1970 article for Saturday Review that before any question of the First Amendment and free speech comes the question of “who has exclusive control of the master switch”. In his 1967 memoir, Due to Circumstances Beyond Our Control, Friendly tells numerous stories that illustrate the point, beginning with his resignation as president of CBS News after the network insisted on showing a rerun of I Love Lucy rather than carrying live the first Senate hearings on US involvement in Vietnam.

This is the switch that Amazon founder Jeff Bezos flipped this week when he blocked the editorial board of the Washington Post, which he owns, from endorsing Kamala Harris and Tim Walz in the US presidential election. At that point, every fear people had in 2013, when Bezos paid $250 million to save the struggling 76-time Pulitzer Prize-winning paper famed for breaking Watergate, came true. Bezos, like William Randolph Hearst, Rupert Murdoch, and others before him, exerted his ownership control. (See also the late, great film critic Roger Ebert on the day Rupert Murdoch took over the Chicago Sun-Times.)

If you think of the Washington Post as just a business, as opposed to a public service institution, you can see why Bezos preferred to hedge his bets. But, as former Post journalist Dan Froomkin wrote in February 2023, ten years post-sale the newspaper had reverted to its immediately pre-Bezos state, laying off staff and losing money. At the time, Froomkin warned that Bezos’ newly-installed “lickspittle” publisher, editor, and editorial editor lacked vision, and suggested Bezos turn it into a non-profit, give it an endowment, and leave it alone.

By October 2023, Froomkin was arguing that the Post had blown it by failing to cover the decade’s most important story, the threat to the US’s democratic system posed by “the increasingly demented and authoritarian Republican Party”. As of yesterday, more than 250,000 subscribers had canceled, literally decimating its subscriber base – though, as Jason Koebler writes at 404 Media, the lost revenue is barely a rounding error in Bezos’ wealth.

Almost simultaneously, a similar story was playing out 3,000 miles across the country at the LA Times. There, owner Patrick Soon-Shiong overrode the paper’s editorial board’s intention to endorse Harris/Walz. Several board members have since resigned, along with editorials editor Mariel Garza.

At Columbia Journalism Review, Jeff Jarvis uses Timothy Snyder’s term “anticipatory obedience” to describe these situations.

On his Mea Culpa podcast, former Trump legal fixer Michael Cohen has frequently issued a hard-to-believe warning that if Trump is elected he will assemble the country’s billionaires and take full control of their assets, Putin-style. As unAmerican as that sounds, Cohen has been improbably right before; in 2019 Congressional testimony he famously predicted that Trump would never allow a peaceful transition of power. If Trump wins and proves Cohen correct, anticipatory obedience won’t save Bezos or any other billionaire.

The Internet was supposed to provide an escape from this sort of control (in the 1990s, pundits feared The Drudge Report!). Into this context, several bits of social media news also dropped. Bluesky announced $15 million in venture capital funding and a user base of 13 million. Reddit announced its first-ever profit, apparently solely due to the deals the 19-year-old service signed giving Google and OpenAI access to user postings and the right to use AI to translate users’ posts into multiple languages. Finally, the owner of the Mastodon server botsin.space, which allows users to run bots on Mastodon, is shutting it down, ending new account signups and shifting to read-only by December. The owner blames unsustainably increasing costs as the user base and postings continue to grow.

Even though Bluesky is incorporated as a public benefit LLC, the acceptance of venture capital gives pause: venture capital always looks for a lucrative exit rather than value for users. Reddit served tens of millions of users for 19 years without ever making any money; it’s only profitable now because AI developers want its data.

Bluesky’s board includes the notable free speech advocate Techdirt’s Mike Masnick, who this week blasted the Washington Post’s decision in scathing terms. Masnick’s paper proposing to promote free speech by developing protocols rather than platforms serves as a sort of founding document for Bluesky. Platforms centralize user data and share it back out again; protocols are standards anyone can use to write compliant software that enables new connections. Think proprietary (Apple) versus open (Linux, email, the web).

The point is this: platforms either start with or create billionaire owners; protocols allow participation by both large and small owners. That still leaves the long-term problem of how to make such services sustainable. Koebler writes of the hard work of going independent, but notes that the combination of new technology and the elimination of layers of management and corporate executives makes it vastly cheaper than before. Bluesky so far has no advertising, but plans to offer higher-level features by subscription, still implying a centralized structure. Mastodon instances survive on user donations and volunteer administrators. Its developers should target making it much easier and more efficient to run their instances: democratize the master switch.

Illustrations: Charles Foster Kane (Orson Welles) in his newsroom in the 1941 film Citizen Kane, (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Follow the business models

In a market that enabled the rational actions of economists’ fantasies, consumers would be able to communicate their preferences for “smart” or “dumb” objects by exercising purchasing power. Instead, everything from TVs and vacuum cleaners to cars is sprouting Internet connections and rampant data collection.

I would love to believe we will grow out of this phase as the risks of this approach continue to become clearer, but I doubt it, because business models will increasingly insist on the post-sale money, which never existed in the analog market. Subscriptions to specialized features and embedded ads seem likely to take over everything. Essentially, software can change the business model governing any object’s manufacture into Gillette’s famous gambit: sell the razors cheap, and make the real money selling razor blades. See in particular printer cartridges. It’s going to be everywhere, and we’re all going to hate it.

***

My consciousness of the old ways is heightened at the moment because I spent last weekend participating in a couple of folk music concerts around my old home town, Ithaca, NY. Everyone played acoustic instruments and sang old songs to celebrate 58 years of the longest-running folk music radio show in North America. Some of us hadn’t really met for nearly 50 years. We all look older, but everyone sounded great.

A couple of friends there operate a “rock shop” outside their house. There’s no website, there’s no mobile app, just a table and some stone wall with bits of rock and other findings for people to take away if they like. It began as an attempt to give away their own small collection, but it seems the clearing space aspect hasn’t worked. Instead, people keep bringing them rocks to give away – in one case, a tray of carefully laid-out arrowheads. I made off with a perfect, peach-colored conch shell. As I left, they were taking down the rock shop to make way for fantastical Halloween decorations to entertain the neighborhood kids.

Except for a brief period in the 1960s, playing folk music has never been lucrative. However, it’s even harder now: teens buy CDs to ensure they can keep their favorite music, and older people buy CDs because they still play their old collections. But you can’t even *give* a 45-year-old a CD, because they have no way to play it. At the concert, Mike Agranoff highlighted musicians’ need for support in an ecosystem that now pays them just $0.014 (his number) for streaming a track.

***

With both Halloween and the US election scarily imminent, the government the UK elected in July finally got down to its legislative program this week.

Data protection reform is back in the form of the Data Use and Access Bill, Lindsay Clark reports at The Register, saying the bill is intended to improve efficiency in the NHS, the police force, and businesses. It will involve making changes to the UK’s implementation of the EU’s General Data Protection Regulation. Care is needed to avoid putting the UK’s adequacy decision at risk. At the Open Rights Group, Mariano della Santi warns that the bill weakens citizens’ protection against automated decision making. At medConfidential, Sam Smith details the lack of safeguards for patient data.

At Computer Weekly, Bill Goodwin and Sebastian Klovig Skelton outline the main provisions and hopes: improve patient care, free up police time to spend more protecting the public, save money.

‘Twas ever thus. Every computer system is always commissioned to save money and improve efficiency – they say this one will save 140,000 hours a year of NHS staff time! Every new computer system also always brings unexpected costs in time and money, and messy stages of implementation and adaptation during which everything becomes *less* efficient. There are always hidden costs – in this case, likely the difficulties of curating data and remediating historical bias. An easy prediction: these will be non-trivial.

***

Also pending is the draft United Nations Convention Against Cybercrime; the goal is to get it through the General Assembly by the end of this year.

Human Rights Watch writes that 29 civil society organizations have written to the EU and member states asking them to vote against the treaty’s adoption and consider alternative approaches that would safeguard human rights. The EFF is encouraging all states to vote no.

Internet historians will recall that there is already a convention on cybercrime, sometimes called the Budapest Convention. Drawn up in 2001 by the Council of Europe and in force since 2004, it has been signed by 70 countries and ratified by 68. The new treaty, drafted by a much broader range of countries, including Russia and China, is meant to be consistent with that older agreement. However, the hope is that it will achieve the global acceptance its predecessor did not, in part because of that broader participation in drafting it.

However, opponents are concerned that the treaty is vague, fails to limit its application to crimes that can only be committed via a computer, and lacks safeguards. It’s understandable that law enforcement, faced with the kinds of complex attacks on computer systems we see today, want their path to international cooperation eased. But, as EFF writes, that eased cooperation should not extend to “serious crimes” whose definition and punishment are left up to individual countries.

Illustrations: Halloween display seen near Mechanicsburg, PA.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Web We Weave

The Web We Weave
By Jeff Jarvis
Basic Books
ISBN: 9781541604124

Sometime in the very early 1990s, someone came up to me at a conference and told me I should read the work of Robert McChesney. When I followed the instruction, I found a history of how radio and TV started as educational media and wound up commercially controlled. Ever since, this is the lens through which I’ve watched the Internet develop: how do we keep the Internet from following that same path? If all you look at is the last 30 years of web development, you might think we can’t.

A similar mission animates retired CUNY professor Jeff Jarvis in his latest book, The Web We Weave. In it, among other things, he advocates reanimating the open web by reviving the blogs many abandoned when Twitter came along and embracing other forms of citizen media. Phenomena such as disinformation, misinformation, and other harms attributed to social media, he writes, have precursor moral panics: novels, comic books, radio, TV, all were once new media whose evils older generations fretted about. (For my parents, it was comic books, which they completely banned while ignoring the hours of TV I watched.) With that past in mind, much of today’s online harms regulation leaves him skeptical.

As a media professor, Jarvis is interested in the broad sweep of history, setting social media into the context that began with the invention of the printing press. That has its benefits when it comes to later chapters where he’s making policy recommendations on what to regulate and how. Jarvis is emphatically a free-speech advocate.

Among his recommendations are those such advocates typically support: users should be empowered, educated, and taught to take responsibility, and we should develop business models that support good speech. Regulation, he writes, should include the following elements: transparency, accountability, disclosure, redress, and behavior rather than content.

On the other hand, Jarvis is emphatically not a technical or design expert, and therefore has little to say about the impact on user behavior of technical design decisions. Some things we know are constants. For example, the willingness of (fully identified) online communicators to attack each other was noted as long ago as the 1980s, when Sara Kiesler studied the first corporate mailing lists.

Others, however, are not. Those developing Mastodon, for example, deliberately chose not to implement the ability to quote and comment on a post because they believed that feature fostered abuse and pile-ons. Similarly, Lawrence Lessig pointed out in 1999 in Code and Other Laws of Cyberspace (PDF) that you couldn’t foment a revolution using AOL chatrooms because they had a limit of 23 simultaneous users.

Understanding the impact of technical decisions requires experience, experimentation, and, above all, time. If you doubt this, read Mike Masnick’s series at Techdirt on Elon Musk’s takeover and destruction of Twitter. His changes to the verification system alone have undermined the ability to understand who’s posting and decide how trustworthy their information is.

Jarvis goes on to suggest we should rediscover human scale and mutual obligation, both crucial as the covid pandemic progressed. The money will always favor mass scale. But we don’t have to go that way.

Sectioned

Social media seems to be having a late-1990s moment, raising flashbacks to the origins of platform liability and the passage of Section 230 of the Communications Decency Act (1996). It’s worth making clear at the outset: most of the people talking about S230 seem to have little understanding of what it is and does. It allows sites to moderate content without becoming liable for it. It is what enables all those trust and safety teams to implement sites’ restrictions on acceptable use. When someone wants to take an axe to it because there is vile content circulating, they have not understood this.

So, in one case this week a US appeals court is allowing a lawsuit to proceed that seeks to hold TikTok liable for users’ postings of the “blackout challenge”, the idea being to get an adrenaline rush by reviving from near-asphyxiation. Bloomberg reports that at least 20 children have died trying to accomplish this, at least 15 of them age 12 or younger (TikTok, like all social media, is supposed to be off-limits to under-13s). The people suing are the parents of one of those 20, a ten-year-old girl who died attempting the challenge.

The other case is that of Pavel Durov, CEO of the messaging service Telegram, who has been arrested in France as part of a criminal investigation. He has been formally charged with complicity in managing an online platform “in order to enable an illegal transaction in organized group” and with refusing to cooperate with law enforcement authorities; he has been ordered not to leave France, with bail set at €5 million (is that enough to prevent the flight of a billionaire with four passports?).

While there have been many platform liability cases, there are relatively few examples of platform owners and operators being charged. The first was in 1997, back when “online” still had a hyphen; the German general manager of CompuServe, Felix Somm, was arrested in Bavaria on charges of “trafficking in pornography”. That is, German users of Columbus, Ohio-based CompuServe could access pornography and illegal material on the Internet through the service’s gateway. In 1998, Somm was convicted and given a two-year suspended sentence. In 1999 his conviction was overturned on appeal, partly, the judge wrote, because there was no technology at the time that would have enabled CompuServe to block the material.

The only other example I’m aware of came just this week, when an Arizona judge sentenced Michael Lacey, co-founder of the classified ads site Backpage.com, to five years in prison and fined him $3 million for money laundering. He still faces further charges for prostitution facilitation and money laundering; allegedly he profited from a scheme to promote prostitution on his site. Two other previously convicted Backpage executives were also sentenced this week to ten years in prison.

In Durov’s case, the key point appears to be his refusal to follow industry practice with respect to reporting child sexual abuse material or to cooperate with properly executed legal requests for information. You don’t have to be a criminal to want the social medium of your choice to protect your privacy from unwarranted government snooping – but equally, you don’t have to be innocent to be concerned if billionaire CEOs of large technology companies consider themselves above the law. (See also Elon Musk, whose X platform may be tossed out of Brazil right now.)

Some reports on the Durov case have focused on encryption, but the bigger issue appears to be his failure to register to use encryption, as Signal has. More important, although Telegram is often talked about as encrypted, it’s really more like other social media: groups are publicly visible, and only direct one-on-one messages are encrypted – and even then, only if users opt in. Given that users notoriously tend to stick with default settings, the percentage of users who turn that encryption on is probably tiny. So it’s not clear yet whether France is seeking to hold Durov responsible for the user-generated content on his platform (which S230 would protect in the US) or accusing him of being part of criminal activity relating to his platform (which it wouldn’t).

Returning to the Arizona case, in allowing the lawsuit to go ahead, the appeals court judgment says that S230 has “evolved away from its original intent”, and argues that because TikTok’s algorithm served up the challenge on the child’s “For You” page, the service can be held responsible. At TechDirt, Mike Masnick blasts this reasoning, saying that it overturns numerous other court rulings upholding S230, and uses the same reasoning as the 1995 decision in Stratton Oakmont v. Prodigy. That was the case that led directly to the passage of S230, introduced by then-Congressman Christopher Cox (R-CA) and Senator Ron Wyden (D-OR), who are still alive to answer questions about their intent. Rather than evolving away, we’ve evolved back full circle.

The rise of monopolistic Big Tech has tended to obscure the more important point about S230. As Cory Doctorow writes for EFF, killing S230 would kill the small federated communities (like Mastodon and Discord servers) and web boards that offer alternatives to increasing Big Tech’s power. While S230 doesn’t apply outside the US (some Americans have difficulty understanding that other countries have different laws), its ethos is pervasive, and the companies it has enabled are everywhere. In the end, it’s like democracy: the alternatives are worse.

Illustrations: Drunken parrot in Putney (by Simon Bisson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The fear factor

Be careful what you allow the authorities to do to people you despise, because one day those same tools will be turned against you.

In the last few weeks, the shocking stabbing of three young girls at a dance class in Southport became the spark to ignite riots across the UK by people who apparently believed social media theories that the 17-year-old boy responsible was Muslim, a migrant, or a terrorist. With the boy a week from his 18th birthday, the courts ruled police could release his name in order to make clear he was not Muslim and born in Wales. It failed to stop the riots.

Police and the courts have acted quickly; almost 800 people have been arrested, 350 have been charged, and hundreds are in custody. In a moving development, on a night when more than 100 riots were predicted, tens of thousands of ordinary citizens thronged city streets and formed protective human chains around refugee centers in order to block the extremists. The riots have quieted down, but police are still busy arresting newly-identified suspects. And the inevitable question is being asked: what do we do next to keep the streets safe and calm?

London mayor Sadiq Khan quickly called for a review of the Online Safety Act, saying he doesn’t believe it’s fit for purpose. Cabinet minister Nick Thomas-Symonds (Labour-Torfaen) has suggested the month-old government could change the law.

Meanwhile, prime minister Keir Starmer favours a wider rollout of live facial recognition to track thugs and prevent them from traveling to places where they plan to cause social unrest, copying systems the police use to prevent football hooligans from even boarding trains to matches. This proposal is startling because before standing for Parliament Starmer was a human rights lawyer. One could reasonably expect him to know that facial recognition systems have a notorious history of inaccuracy due to biases baked into their algorithms via training data, and that in the UK there is no regulatory framework to provide oversight. Silkie Carlo, the director of Big Brother Watch, immediately called the proposal “alarming” and “ineffective”, warning that it turns people into “walking ID cards”.

As the former head of Liberty, Shami Chakrabarti used to say when ID cards were last proposed, moves like these fundamentally change the relationship between the citizen and the state. Such a profound change deserves more thought than a reflex fear reaction in a crisis. As Ciaran Thapar argues at the Guardian, today’s violence has many causes, beginning with the decay of public services for youth and mental health, and it’s those causes that need to be addressed. Thapar invokes his memories of how his community overcame the “open, violent racism” of the 1980s Thatcher years in making his recommendations.

Much of the discussion of the riots has blamed social media for propagating hate speech and disinformation, along with calls for rethinking the Online Safety Act. This is also frustrating. First of all, the OSA, which was passed in 2023, isn’t even fully implemented yet. When last seen, Ofcom, the regulator designated to enforce it, was in the throes of recruiting people by the dozen, working out what sites will be in scope (about 150,000, they said), and developing guidelines. Until we see the shape of the regulation in practice, it’s too early to say the act needs expansion.

Second, hate speech and incitement to violence are already illegal under other UK laws. Just this week, a woman was jailed for 15 months for a comment to a Facebook group with 5,100 members that advocated violence against mosques and the people inside them. The OSA was not needed to prosecute her.

And third, while Elon Musk and Mark Zuckerberg definitely deserve to have anger thrown their way, focusing solely on the ills of social media makes no sense given the decades that right-wing newspapers have spent sowing division and hatred. Even before Musk, Twitter often acted as a democratization of the kind of angry, hate-filled coverage long seen in the Daily Mail (and others). These are the wedges that created the divisions that malicious actors can now exploit by disseminating disinformation, a process systematically explained by Renee DiResta in her new book, Invisible Rulers.

The FBI’s investigation of the January 6, 2021 insurrection at the US Capitol provides a good example of how modern investigations can exploit new technologies. Law enforcement applied facial recognition to CCTV footage and massive databases, and studied social media feeds, location data and cellphone tracking, and other data. As Charlie Warzel and Stuart A. Thompson wrote at the New York Times in 2021, even though most of us agree with the goal of catching and punishing insurrectionists and rioters, the data “remains vulnerable to use and abuse” against protests of other types – such as this year’s pro-Palestinian encampments.

The same argument applies in the UK. Few want violence in the streets. But the unilateral imposition of live facial recognition, among other tracking technologies, can’t be allowed. There must be limits and safeguards. ID cards issued in wartime could be withdrawn when peace came; surveillance technologies, once put in place, tend to become permanent.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell once lived.


Gather ye lawsuits while ye may

Most of us howled with laughter this week when the news broke that Elon Musk is suing companies for refusing to advertise on his exTwitter platform. To be precise, Musk is suing the World Federation of Advertisers, Unilever, Mars, CVS, and Ørsted in a Texas court.

How could Musk, who styles himself a “free speech absolutist”, possibly force companies to advertise on his site? This is pure First Amendment stuff: both the right to free speech (or to remain silent) and freedom of assembly. It adds to the nuttiness of it all that last November Musk was telling advertisers to “go fuck yourselves” if they threatened him with a boycott. Now he’s mad because they responded in kind.

Does the richest man in the world even need advertisers to finance his toy?

At Techdirt, Mike Masnick catalogues the “so much stupid here”.

The WFA initiative that offends Musk is the Global Alliance for Responsible Media, which develops guidelines for content moderation – things like a standard definition for “hate speech” to help sites operate consistent and transparent policies and reassure advertisers that their logos don’t appear next to horrors like the livestreamed shooting in Christchurch, New Zealand. GARM’s site says: membership is voluntary, following its guidelines is voluntary, it does not provide a rating service, and it is apolitical.

Pre-Musk, Twitter was a member. After Musk took over, he pulled exTwitter out of it – but rejoined a month ago. Now, Musk claims that refusing to advertise on his site might be a criminal matter under RICO. So he’s suing himself? Blink.

Enter US Republicans, who are convinced that content moderation exists only to punish conservative speech. On July 10, the House Judiciary Committee, under the leadership of Jim Jordan (R-OH), released an interim report on its ongoing investigation of GARM.

The report says GARM appears to “have anti-democratic views of fundamental American freedoms” and likens its work to restraint of trade. Among specific examples, it says GARM recommended that its members stop advertising on exTwitter, threatened Spotify when podcaster Joe Rogan told his massive audience that young, healthy people don’t need to be vaccinated against covid, and considered blocking news sites such as Fox News, Breitbart, and The Daily Wire. In addition, the report says, GARM advised its members to use fact-checking services like NewsGuard and the Global Disinformation Index “which disproportionately label right-of-center news sites as so-called misinformation”. Therefore, the report concludes, GARM’s work is “likely illegal under the antitrust laws”.

I don’t know what a court would have made of that argument – for one thing, GARM can’t force anyone to follow its guidelines. But now we’ll never know. Two days after Musk filed suit, the WFA announced it’s shuttering GARM immediately because it can’t afford to defend the lawsuit and keep operating even though it believes it’s complied with competition rules. Such is the role of bullies in our public life.

I suppose Musk can hope that advertisers decide it’s cheaper to buy space on his site than to fight the lawsuit?

But it’s not really a laughing matter. GARM is just one of a number of initiatives that have come under attack as we head into the final three months of campaigning before the US presidential election. In June, Renee DiResta, author of the new book Invisible Rulers, announced that her contract as the research manager of the Stanford Internet Observatory was not being renewed. Founding director Alex Stamos was already gone. Stanford has said the Observatory will continue under new leadership, but no details have been published. The Washington Post says conspiracy theorists have called DiResta and Stamos part of a government-private censorship consortium.

Meanwhile, one of the Observatory’s projects, a joint effort with the University of Washington called the Election Integrity Partnership, has announced, in response to various lawsuits and attacks, that it will not work on the 2024 or future elections. At the same time, Meta is shutting down CrowdTangle next week, removing a research tool that journalists and academics use to study content on Facebook and Instagram. While CrowdTangle will be replaced with Meta Content Library, access will be limited to academics and non-profits, and those who’ve seen it say it’s missing useful data that was available through CrowdTangle.

The concern isn’t the future of any single initiative; it’s the pattern of these things winking out. As work like DiResta’s has shown, the flow of funds financing online political speech (including advertising) is dangerously opaque. We need access and transparency for those who study it, and in real time, not years after the event.

In this, as in so much else, the US continues to clash with the EU, which in December accused Musk’s site of breaching its rules with respect to disinformation, transparency, and extreme content. Last month, it formally charged the site with violating the Digital Services Act, for which Musk could be liable for a fine of up to 6% of exTwitter’s global revenue. Among the EU’s complaints is the lack of a searchable and reliable advertisement repository – again, an important element of the transparency we need. The site’s handling of disinformation and calls to violence during the current UK riots may be added to the investigation.

Musk will be suing *us*, next.

Illustrations: A cartoon caricature of Christina Rossetti by her brother Dante Gabriel Rossetti 1862, showing her having a tantrum after reading The Times’ review of her poetry (via Wikimedia).


Selective enforcement

This week, as a rider to the 21st Century Peace Through Strength Act, which provides funding for defense in Ukraine, Israel, and Taiwan, the US Congress passed provisions for banning the distribution of TikTok if owner ByteDance has not divested it within 270 days. President Joe Biden signed it into law on Wednesday, and, as Mike Masnick says at Techdirt, ByteDance’s lawsuit is expected imminently, largely on First Amendment grounds. The ACLU agrees. Similar arguments won when ByteDance challenged a 2023 Montana law.

For context: Pew Research says TikTok is the fifth-most popular social media service in the US. An estimated 150 million Americans – and 62% of 18-29-year-olds – use it.

The ban may not be a slam-dunk to fail in court. US law, including the constitution, imposes many restrictions on foreign influence, from requiring registration for those acting as agents to requiring presidents to have been born US citizens. Until 2017, foreigners were barred from owning US broadcast networks.

So it seems to this non-lawyer as though a lot hinges on how the court defines TikTok and what precedents apply. This is the kind of debate that goes back to the dawn of the Internet: is a privately-owned service built of user-generated content more like a town square, a broadcaster, a publisher, or a local pub? “Broadcast”, whether over the air or via cable, implies being assigned a channel on a limited resource; this clearly doesn’t apply to apps and services carried over the presumably-infinite Internet. Publishing implies editorial control, which social media lacks. A local pub might be closest: privately owned, it’s where people go to connect with each other. “Congress may make no law…abridging the freedom of speech”…but does that cover denying access to one “place” where speech takes place when there are many other options?

TikTok is already banned in Pakistan, Nepal, and Afghanistan, and also India, where it is one of 500 apps that have been banned since 2020. ByteDance will argue that the ban hurts US creators who use TikTok to build businesses. But as NPR reports, in India YouTube and Instagram rolled out short video features to fill the gap for hyperlocal content that the loss of TikTok opened up, and four years on creators have adapted to other outlets.

It will be more interesting if ByteDance claims the company itself has free speech rights. In a country where commercial companies and other organizations are deemed to have “free speech” rights entitling them to donate as much money as they want to political causes (as per the Supreme Court’s ruling in Citizens United v. Federal Election Commission), that might make a reasonable argument.

On the other hand, there is no question that this legislation is full of double standards. If another country sought to ban any of the US-based social media, American outrage would be deafening. If the issue is protecting the privacy of Americans against rampant data collection, then, as Free Press argues, pass a privacy law that will protect Americans from *every* service, not just this one. The claim that the ban is to protect national security is weakened by the fact that the Chinese government, like apparently everyone else, can buy data on US citizens even if it’s blocked from collecting it directly from ByteDance.

Similarly, if the issue is the belief that social media inevitably causes harm to teenagers, as author and NYU professor Jonathan Haidt insists in his new book, then again, why only pick on TikTok? Experts who have really studied this terrain, such as Danah Boyd, insist that Haidt is oversimplifying and pushing parents to deny their children access to technologies whose influence is largely positive. I’m inclined to agree; between growing economic hardship, expanding wars, and increasing climate disasters, young people have more important things to be anxious about than social media. In any case, where’s the evidence that TikTok is a bigger source of harm than any other social medium?

Among digital rights activists, the most purely emotional argument against the TikTok ban revolves around the original idea of the Internet as an open network. Banning access to a service in one country (especially the country that did the most to promote the Internet as a vector for free speech and democratic values) is, in this view, a dangerous step toward the government control John Perry Barlow famously rejected in 1996. And yet, to increasing indifference, no-go signs are all over the Internet. *Six* years after GDPR came into force, Europeans are still blocked from many US media sites that can’t be bothered to comply with it. Many other media links don’t work because of copyright restrictions, and on and on.

The final double standard is this: a big element in the TikTok ban is the fear that the Chinese government, via its control over companies hosted there, will have access to intimate personal information about Americans. Yet for more than 20 years this has been the reality for non-Americans using US technology services outside the US: their data is subject to NSA surveillance. This, and the lack of redress for non-Americans, is what Max Schrems’ legal cases have been about. Do as we say, not as we do?

Illustrations: TikTok CEO Shou Zi Chew, at the European Commission in 2024 (by Lukasz Kobus at Wikimedia).


The toast bubble

To The Big Bang Theory (“The Russian Rocket Reaction”, S5e05):

Howard: Someone has to go up with the telescope as a payload specialist, and guess who that someone is!
Sheldon: Muhammed Li.
Howard: Who’s Muhammed Li?
Sheldon: Muhammed is the most common first name in the world, Li the most common surname, and as I didn’t know the answer I thought that gave me a mathematical edge.

Experts tell me that exchange doesn’t perfectly explain how generative AI works; it’s too simplistic. Generative AI – or a Sheldon made more nuanced by his writers – takes into account contextual information to calculate the probable next word. So it wouldn’t pick from all the first names and surnames in the world. It might, however, pick from the names of all the payload specialists or some other group it correlated, or confect one.

More than a year on, I still can’t find a use for anything as unreliable and inscrutable as generative “AI”. At Exponential View, Azeem Azhar has written about the “answer engine” Perplexity.ai. While it’s helpful that Perplexity provides references for its answers, it was producing misinformation by the third question I asked it, and offered no improvement when challenged. Wikipedia spent many years being accused of unreliability, too, but at least there you can read the talk page and understand how the editors arrived at the text they present.

On The Daily Show this week, Jon Stewart ranted about AI and interviewed FTC chair Lina Khan. Well-chosen video clips showed AI company heads’ true colors, telling the public AI is an assistant for humans while telling money people and each other that AI will enable greater productivity with fewer workers and help eliminate “the people tax”.

More interesting, however, was Khan’s note that the FTC is investigating the investments and partnerships in AI to understand if they’re giving current technology giants undue influence in the marketplace. If, in her example, all the competitors in a market outsource their pricing decisions to the same algorithm, they may be guilty of price fixing even if they’re not actively colluding. And these markets are consolidating at an ever-earlier stage. Instagram and WhatsApp had millions of users by the time Facebook thought it prudent to buy them rather than let them become dangerous competitors. AI is pre-consolidating: the usual suspects have been buying up AI startups and models at pace.

“More profound than fire or electricity,” Google CEO Sundar Pichai tells a camera at one point, speaking about AI. The last time I heard this level of hyperbole it was about the Internet in the 1990s, shortly before the bust. A friend’s answer to this sort of thing has never varied: “I’d rather have indoor plumbing.”

***

Last week the Federal District Court in Manhattan sentenced FTX CEO Sam Bankman-Fried to 25 years in prison for stealing $8 billion. In the end, you didn’t have to understand anything complicated about cryptocurrencies; it was just good old embezzlement.

And then the price of bitcoin went *up*. At the Guardian, Molly White explains that this is because cryptoevangelists are pushing the idea that the sector can reach its full potential, now that Bankman-Fried and other bad apples have been purged. But, as she says, nothing has really changed. No new use case has come along to make cryptocurrencies more useful, more valuable, or more trustworthy.

Both cryptocurrencies and generative AI are bubbles. The difference is that the AI bubble will likely leave behind it some technologies and knowledge that are genuinely useful; it will be like the Internet, which boomed and busted before settling in to change the world. Cryptocurrencies are more like the Dutch tulips. Unfortunately, in the meantime both these bubbles are consuming energy at an insane rate. How many wildfires is bitcoin worth?

***

I’ve seen a report suggesting that the last known professional words of the late Ross Anderson may have been, “Do they take us for fools?”

He was referring to the plans, debated in the House of Commons on March 25, to amend the Investigatory Powers Act to allow the government to pre-approve (or disapprove) new security features technology firms want to introduce. The government is of course saying it’s all perfectly innocent, intended to keep the country safe. But recent clashes in the decades-old conflict over strong encryption have seen the technology companies roll out features like end-to-end encryption (Meta) and decide not to implement others, like client-side scanning (Apple). The latest in a long line of UK governments that want access to encrypted text was hardly going to take that quietly. So here we are, debating this yet again. Yet the laws of mathematics still haven’t changed: there is no such thing as a security hole that only “good guys” can use.

***

Returning to AI, it appears that costs may lead Google to charge for access to its AI-enhanced search, as Alex Hern reports at the Guardian. Hern thinks this is good news for its AI-focused startup competitors, which already charge for top-tier tools and who are at risk of being undercut by Google. I think it’s good for users by making it easy to avoid the AI “enhancement”. Of course, DuckDuckGo already does this without all the tracking and monopoly mishegoss.

Illustrations: Jon Stewart uninspired by Mark Zuckerberg’s demonstration of AI making toast.
