Blue

The exodus onto Bluesky noted here last week continues apace: the site’s added a million users a day for more than a week, gradually slowing from 12 new users a second, per the live counter.

These are not lurkers. Suddenly, the site feels like Twitter circa 2009/2010, when your algorithm-free feed was filled with interesting people sharing ideas, there were no ads, and abuse was in its infancy. People missing in action for the last year or two are popping up; others I’ve wished would move off exTwitter so I could stop following them there have suddenly joined. Mastodon is also seeing an uptick, and (I hear) Threads continues to add users without, for me, adding interest to match…. I doubt this diaspora is all “liberals”, as some media have it – or if they are, it won’t be long before politicians and celebrities note the action is elsewhere and rush to stay relevant.

It takes a long time for a social medium to die if it isn’t killed by a corporation. Even after this week’s bonanza, Bluesky’s entire user base fits inside 5% of exTwitter, which still has around 500 million users as of September, about half of them active daily. What matters most are *posters*, who are about 10% or less of any social site’s user base. When they leave, engagement plummets, as shown in a 2017 paper in Nature.

An example in action: at Statnews, Katie Palmer reports that the science and medical community is adopting Bluesky.

I have to admit to some frustration over this: why not Mastodon? As retro-fun as this week on Bluesky has been, the problem noted here a few weeks ago of Bluesky’s venture capital funding remains. Yes, the company is incorporated as a public benefit company – but venture capitalists want exit strategies and return on investment. That tension looms.

Mastodon is a loose collection of servers that all run the same software, which in turn is written to the open protocol ActivityPub. Gergely Orosz has published deep-dive looks at Bluesky’s development and culture; the goal was to write a new open protocol, AT, that would allow Bluesky, similarly, to federate with others. There is already a third-party bit of software, Bridgy, that provides interoperability among Bluesky, any system based on ActivityPub (“the Fediverse”, of which Mastodon is a subset), and the open web (such as blogs). For the moment, though, Bluesky remains the only site running its AT protocol, so the more users Bluesky adds, the more it feels like a platform rather than a protocol. And platforms can change according to the whims of their owners – which is exactly what those leaving exTwitter are escaping. So: why not Mastodon, which doesn’t have that problem?

In an exchange on Bluesky, Palmer said that those who mentioned it said they found Mastodon “too difficult to figure out”.

It can’t be the thing itself; typing and sending varies little. The problem has to be the initial uncertainty about choosing a server. What you really want is for institutions to set up their own, and then you sign up there. For most people that’s far too much heavy lifting. Still, this is what the BBC and the German government have done, and it has a significant advantage in that posting from an address on that server automatically verifies the poster as an authentic staffer. NPR simply found a server and opened an account, like I did when I joined Mastodon in 2019.

All that said, how Mastodon administrators will cope with increasing usage and resulting costs also remains an open question as discussed here last year.

So: some advice as you settle into your new online home:

– Plan for the site’s eventual demise. “On the Internet your home will always leave you” (I have lost the source of this quote). Every site, no matter how big and fast-growing it is now, or how much you all love it…assume that at some point in the future it will either die of an outmoded business model (AOL forums); get bought and closed down (Television without Pity, CompuServe, Geocities); become intolerable because of cultural change (exTwitter); or be abandoned because the owner loses interest (countless blogs and comment boards). Plan for that day. Collect alternative means of contacting the people you come to know and value. Build multiple connections.

– Watch the data you’re giving the site. No one in 2007, when I joined Twitter, imagined their thousands of tweets would become fodder for a large language model to benefit one of the world’s richest multi-billionaires.

– If you are (re)building an online community for an organization, own that community. Use social media, by all means, but use it to encourage people to visit the organization’s website, or join its fully-controlled mailing list or web board. Otherwise, one day, when things change, you will have to start over from scratch, and may not even know who your members are or how to reach them.

– Don’t worry too much about the “filter bubble”, as John Elledge writes. Studies generally agree social media users encounter more, and more varied, sources of news than others. As he says, only journalists have to read widely among people whose views they find intolerable (see also the late, great Molly Ivins).

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

What’s next

“It’s like your manifesto promises,” Bernard Woolley (Derek Fowlds) tells eponymous minister Jim Hacker (Paul Eddington) in Antony Jay and Jonathan Lynn’s Yes, Minister. “People *understand*.” In other words, people know your election promises aren’t real.

The current US president-elect is impulsive and chaotic, and there will be resistance. So it’s reasonable to assume that at least some of his pre-election rhetoric will remain words and not deeds. There is, however, no telling which parts. And: the chaos is the point.

At Ars Technica, Ashley Belanger considers the likely impact of the threatened 60% tariffs on Chinese goods and 20% from everywhere else: laptops could double, games consoles go up 40%, and smartphones rise 26%. Friends want to stockpile coffee, tea, and chocolate.

Also at Ars Technica, Benj Edwards predicts that the new administration will quickly reverse Joe Biden’s executive order regulating AI development.

At his BIG Substack, Matt Stoller predicts a wave of mergers following three years of restrictions. At TechDirt, Karl Bode agrees, with special emphasis on media companies and an order of enshittification on the side. At Hollywood Reporter, similarly, Alex Weprin reports that large broadcast station owners are eagerly eying up local stations, and David Zaslav, CEO of merger monster Warner Brothers Discovery, tells Georg Szalai that more consolidation would provide “real positive impact”. (As if.)

Many predict that current Federal Communications Commissioner Brendan Carr will be promoted to FCC chair. Carr set out his agenda in his chapter of Project 2025, as the Benton Institute for Broadband and Society reports. His policies, Jon Brodkin writes at Ars Technica, include reforming Section 230 of the Communications Decency Act and dropping consumer protection initiatives. John Hendel warned in October at Politico that the new FCC chair could also channel millions of dollars to Elon Musk for his Starlink satellite Internet service, a possibility the FCC turned down in 2023.

Also on Carr’s list is punishing critical news organizations. Donald Trump’s lawyers began before the election with a series of complaints, as Lachlan Cartwright writes at Columbia Journalism Review. The targets: CBS News for 60 Minutes, the New York Times, Penguin Random House, Saturday Night Live, the Washington Post, and the Daily Beast.

Those of us outside the US will be relying on the EU to stand up to parts of this through the AI Act, Digital Markets Act, Digital Services Act, and GDPR. Enforcement will be crucial. The US administration may resist these efforts. The UK will have to pick a side.

***

It’s now two years since Elon Musk was forced to honor his whim of buying Twitter, and much of what he and others said would happen…hasn’t. Many predicted system collapse or a major hack. Instead, despite mass departures for other sites, the hollowed-out site has survived technically while degrading in every other way that matters.

Other than rebranding to “X”, Musk has failed to deliver many of the things he was eagerly talking about when he took over. A helpful site chronicles these: a payments system, a content moderation council, a billion more users. X was going to be the “everything app”. Nope.

This week, the aftermath of the US election and new terms of service making user data fodder for AI training have sparked a new flood of departures. This time round there’s consensus: they’re going to Bluesky.

It’s less clear what’s happening with the advertisers who supply the platform’s revenues, which the now-private company no longer has to disclose. Since Musk’s takeover, reports have consistently said advertisers are leaving. Now, the Financial Times reports (unpaywalled, Ars Technica) they are plotting their return, seeking to curry favor given Musk’s influence within the new US administration – and perhaps escaping the lawsuit he filed against them in August. Even so, it will take a lot to rebuild. The platform’s valuation is currently estimated at $10 billion, down from the $44 billion Musk paid.

This slash-and-burn approach is the one Musk wants to take to the Department of Government Efficiency (DOGE, as in Dogecoin; groan). Musk’s list of desired qualities for DOGE volunteers – no pay, long hours, “super” high IQ – recalls Dominic Cummings in January 2020, when he was Boris Johnson’s most-favored adviser and sought super-talented weirdos to remake the UK government. Cummings was gone by November.

***

It says something about the madness of the week that the sanest development appears to be that The Onion has bought Infowars, the conspiracy theory media operation Alex Jones used to promote, alongside vitamins, supplements, and many other conspiracy theories, the utterly false claim that the Sandy Hook school shootings were a hoax. The sale was part of a bankruptcy auction held to raise funds Jones owes to the families of the slaughtered Sandy Hook children after losing to them in court in a $1.4 billion defamation case. Per the New York Times, the purchase was sanctioned by the Sandy Hook families. The Onion will relaunch the site in its own style with funding from Everytown for Gun Safety. There may not be a god, but there is an onion.

Illustrations: The front page of The Onion, showing the news about its InfoWars purchase.


The master switch

In his 2010 book, The Master Switch, Columbia law professor Tim Wu quotes the television news pioneer Fred W. Friendly, who said in a 1970 article for Saturday Review that before any question of the First Amendment and free speech comes the question of “who has exclusive control of the master switch”. In his 1967 memoir, Due to Circumstances Beyond Our Control, Friendly tells numerous stories that illustrate the point, beginning with his resignation as president of CBS News after the network insisted on showing a rerun of I Love Lucy rather than carry live the first Senate hearings on the US involvement in Vietnam.

This is the switch that Amazon founder Jeff Bezos flipped this week when he blocked the editorial board of the Washington Post, which he owns, from endorsing Kamala Harris and Tim Walz in the US presidential election. At that point, every fear people had in 2013, when Bezos paid $250 million to save the struggling 76-time Pulitzer Prize-winning paper famed for breaking Watergate, came true. Bezos, like William Randolph Hearst, Rupert Murdoch, and others before him, exerted his ownership control. (See also the late, great film critic Roger Ebert on the day Rupert Murdoch took over the Chicago Sun-Times.)

If you think of the Washington Post as just a business, as opposed to a public service institution, you can see why Bezos preferred to hedge his bets. But as former Post journalist Dan Froomkin noted in February 2023, ten years post-sale the newspaper had reverted to its immediately pre-Bezos state, laying off staff and losing money. At the time, Froomkin warned that Bezos’ newly-installed “lickspittle” publisher, editor, and editorial editor lacked vision, and suggested Bezos turn it into a non-profit, give it an endowment, and leave it alone.

By October 2023, Froomkin was arguing that the Post had blown it by failing to cover the decade’s most important story, the threat to the US’s democratic system posed by “the increasingly demented and authoritarian Republican Party”. As of yesterday, more than 250,000 subscribers had canceled, literally decimating its subscriber base – though, as Jason Koebler writes at 404 Media, that’s barely a rounding error in Bezos’ wealth.

Almost simultaneously, a similar story was playing out 3,000 miles across the country at the LA Times. There, owner Patrick Soon-Shiong overrode the paper’s editorial board’s intention to endorse Harris/Walz. Several board members have since resigned, along with editorials editor Mariel Garza.

At Columbia Journalism Review, Jeff Jarvis uses Timothy Snyder’s term, “anticipatory obedience” to describe these situations.

On his Mea Culpa podcast, former Trump legal fixer Michael Cohen has frequently issued a hard-to-believe warning that if Trump is elected he will assemble the country’s billionaires and take full control of their assets, Putin-style. As unAmerican as that sounds, Cohen has been improbably right before; in 2019 Congressional testimony he famously predicted that Trump would never allow a peaceful transition of power. If Trump wins and proves Cohen correct, anticipatory obedience won’t save Bezos or any other billionaire.

The Internet was supposed to provide an escape from this sort of control (in the 1990s, pundits feared The Drudge Report!). Into this context, several bits of social media news also dropped. Bluesky announced $15 million in venture capital funding and a user base of 13 million. Reddit announced its first-ever profit, apparently due solely to the deals the 19-year-old service signed to give Google and OpenAI access to user postings, and to its use of AI to translate users’ posts into multiple languages. Finally, the owner of the Mastodon server botsin.space, which allows users to run bots on Mastodon, is shutting down, ending new account signups and shifting to read-only by December. The owner blames unsustainably increasing costs as the user base and postings continue to grow.

Even though Bluesky is incorporated as a public benefit LLC, the acceptance of venture capital gives pause: venture capital always looks for a lucrative exit rather than value for users. Reddit served tens of millions of users for 19 years without ever making any money; it’s only profitable now because AI developers want its data.

Bluesky’s board includes the notable free speech advocate Techdirt’s Mike Masnick, who this week blasted the Washington Post’s decision in scathing terms. Masnick’s paper proposing to promote free speech by developing protocols rather than platforms serves as a sort of founding document. Platforms centralize user data and share it back out again; protocols are standards anyone can use to write compliant software to enable new connections. Think proprietary (Apple) versus open source (Linux, email, the web).

The point is this: platforms either start with or create billionaire owners; protocols allow participation by both large and small owners. That still leaves the long-term problem of how to make such services sustainable. Koebler writes of the hard work of going independent, but notes that the combination of new technology and the elimination of layers of management and corporate executives makes it vastly cheaper than before. Bluesky so far has no advertising, but plans to offer higher-level features by subscription, still implying a centralized structure. Mastodon instances survive on user donations and volunteer administrators. Its developers should target making it much easier and more efficient to run their instances: democratize the master switch.

Illustrations: Charles Foster Kane (Orson Welles) in his newsroom in the 1941 film Citizen Kane, (via Wikimedia).


Sectioned

Social media seems to be having a late-1990s moment, raising flashbacks to the origins of platform liability and the passage of Section 230 of the Communications Decency Act (1996). It’s worth making clear at the outset: most of the people talking about S230 seem to have little understanding of what it is and does. It allows sites to moderate content without becoming liable for it. It is what enables all those trust and safety teams to implement sites’ restrictions on acceptable use. When someone wants to take an axe to it because there is vile content circulating, they have not understood this.

So, in one case this week a US appeals court is allowing a lawsuit to proceed that seeks to hold TikTok liable for users’ postings of the “blackout challenge”, the idea being to get an adrenaline rush by reviving from near-asphyxiation. Bloomberg reports that at least 20 children have died trying to accomplish this, at least 15 of them age 12 or younger (TikTok, like all social media, is supposed to be off-limits to under-13s). The people suing are the parents of one of those 20, a ten-year-old girl who died attempting the challenge.

The other case is that of Pavel Durov, CEO of the messaging service Telegram, who has been arrested in France as part of a criminal investigation. He has been formally charged with complicity in managing an online platform “in order to enable an illegal transaction in organized group”, and refusal to cooperate with law enforcement authorities and ordered not to leave France, with bail set at €5 million (is that enough to prevent the flight of a billionaire with four passports?).

While there have been many platform liability cases, there are relatively few examples of platform owners and operators being charged. The first was in 1997, back when “online” still had a hyphen; the German general manager of CompuServe, Felix Somm, was arrested in Bavaria on charges of “trafficking in pornography”. That is, German users of Columbus, Ohio-based CompuServe could access pornography and illegal material on the Internet through the service’s gateway. In 1998, Somm was convicted and given a two-year suspended sentence. In 1999 his conviction was overturned on appeal, partly, the judge wrote, because there was no technology at the time that would have enabled CompuServe to block the material.

The only other example I’m aware of came just this week, when an Arizona judge sentenced Michael Lacey, co-founder of the classified ads site Backpage.com, to five years in prison and fined him $3 million for money laundering. He still faces further charges for prostitution facilitation and money laundering; allegedly he profited from a scheme to promote prostitution on his site. Two other previously convicted Backpages executives were also sentenced this week to ten years in prison.

In Durov’s case, the key point appears to be his refusal to follow industry practice with respect to reporting child sexual abuse material or cooperate with properly executed legal requests for information. You don’t have to be a criminal to want the social medium of your choice to protect your privacy from unwarranted government snooping – but equally, you don’t have to be innocent to be concerned if billionaire CEOs of large technology companies consider themselves above the law. (See also Elon Musk, whose X platform may be tossed out of Brazil right now.)

Some reports on the Durov case have focused on encryption, but the bigger issue appears to be failure to register to use encryption, as Signal has. More important, although Telegram is often talked about as encrypted, it’s really more like other social media, where groups are publicly visible, and only direct one-on-one messages are encrypted. But even then, they’re only encrypted if users opt in. Given that users notoriously tend to stick with default settings, the percentage of users who turn that encryption on is probably tiny. So it’s not clear yet whether France is seeking to hold Durov responsible for the user-generated content on his platform (which S230 would protect in the US), or accusing him of being part of criminal activity relating to his platform (which it wouldn’t).

Returning to the Arizona case, in allowing the lawsuit to go ahead, the appeals court judgment says that S230 has “evolved away from its original intent”, and argues that because TikTok’s algorithm served up the challenge on the child’s “For You” page, the service can be held responsible. At TechDirt, Mike Masnick blasts this reasoning, saying that it overturns numerous other court rulings upholding S230, and uses the same reasoning as the 1995 decision in Stratton Oakmont v. Prodigy. That was the case that led directly to the passage of S230, introduced by then-Congressman Christopher Cox (R-CA) and Senator Ron Wyden (D-OR), who are still alive to answer questions about their intent. Rather than evolving away, we’ve evolved back full circle.

The rise of monopolistic Big Tech has tended to obscure the more important point about S230. As Cory Doctorow writes for EFF, killing S230 would kill the small federated communities (like Mastodon and Discord servers) and web boards that offer alternatives to increasing Big Tech’s power. While S230 doesn’t apply outside the US (some Americans have difficulty understanding that other countries have different laws), its ethos is pervasive and the companies it’s enabled are everywhere. In the end, it’s like democracy: the alternatives are worse.

Illustrations: Drunken parrot in Putney (by Simon Bisson).


The fear factor

Be careful what you allow the authorities to do to people you despise, because one day those same tools will be turned against you.

In the last few weeks, the shocking stabbing of three young girls at a dance class in Southport became the spark to ignite riots across the UK by people who apparently believed social media theories that the 17-year-old boy responsible was Muslim, a migrant, or a terrorist. With the boy a week from his 18th birthday, the courts ruled police could release his name in order to make clear he was not Muslim and born in Wales. It failed to stop the riots.

Police and the courts have acted quickly; almost 800 people have been arrested, 350 have been charged, and hundreds are in custody. In a moving development, on a night when more than 100 riots were predicted, tens of thousands of ordinary citizens thronged city streets and formed protective human chains around refugee centers in order to block the extremists. The riots have quieted down, but police are still busy arresting newly-identified suspects. And the inevitable question is being asked: what do we do next to keep the streets safe and calm?

London mayor Sadiq Khan quickly called for a review of the Online Safety Act, saying he doesn’t believe it’s fit for purpose. Cabinet minister Nick Thomas-Symonds (Labour-Torfaen) has suggested the month-old government could change the law.

Meanwhile, prime minister Keir Starmer favours a wider rollout of live facial recognition to track thugs and prevent them from traveling to places where they plan to cause social unrest, copying systems the police use to prevent football hooligans from even boarding trains to matches. This proposal is startling because, before standing for Parliament, Starmer was a human rights lawyer. One could reasonably expect him to know that facial recognition systems have a notorious history of inaccuracy due to biases baked into their algorithms via training data, and that in the UK there is no regulatory framework to provide oversight. Silkie Carlo, the director of Big Brother Watch, immediately called the proposal “alarming” and “ineffective”, warning that it turns people into “walking ID cards”.

As the former head of Liberty, Shami Chakrabarti used to say when ID cards were last proposed, moves like these fundamentally change the relationship between the citizen and the state. Such a profound change deserves more thought than a reflex fear reaction in a crisis. As Ciaran Thapar argues at the Guardian, today’s violence has many causes, beginning with the decay of public services for youth and mental health, and it’s those causes that need to be addressed. Thapar invokes his memories of how his community overcame the “open, violent racism” of the 1980s Thatcher years in making his recommendations.

Much of the discussion of the riots has blamed social media for propagating hate speech and disinformation, along with calls for rethinking the Online Safety Act. This is also frustrating. First of all, the OSA, which was passed in 2023, isn’t even fully implemented yet. When last seen, Ofcom, the regulator designated to enforce it, was in the throes of recruiting people by the dozen, working out what sites will be in scope (about 150,000, they said), and developing guidelines. Until we see the shape of the regulation in practice, it’s too early to say the act needs expansion.

Second, hate speech and incitement to violence are already illegal under other UK laws. Just this week, a woman was jailed for 15 months for a comment to a Facebook group with 5,100 members that advocated violence against mosques and the people inside them. The OSA was not needed to prosecute her.

And third, while Elon Musk and Mark Zuckerberg definitely deserve to have anger thrown their way, focusing solely on the ills of social media makes no sense given the decades that right-wing newspapers have spent sowing division and hatred. Even before Musk, Twitter often acted as a democratization of the kind of angry, hate-filled coverage long seen in the Daily Mail (and others). These are the wedges that created the divisions that malicious actors can now exploit by disseminating disinformation, a process systematically explained by Renee DiResta in her new book, Invisible Rulers.

The FBI’s investigation of the January 6, 2021 insurrection at the US Capitol provides a good exemplar for how modern investigations can exploit new technologies. Law enforcement applied facial recognition to CCTV footage and massive databases, and studied social media feeds, location data and cellphone tracking, and other data. As Charlie Warzel and Stuart A. Thompson wrote at the New York Times in 2021, even though most of us agree with the goal of catching and punishing insurrectionists and rioters, the data “remains vulnerable to use and abuse” against protests of other types – such as this year’s pro-Palestinian encampments.

The same argument applies in the UK. Few want violence in the streets. But the unilateral imposition of live facial recognition, among other tracking technologies, can’t be allowed. There must be limits and safeguards. ID cards issued in wartime could be withdrawn when peace came; surveillance technologies, once put in place, tend to become permanent.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell once lived.


The bridge

Seven months ago, Mastodon was fretting about Meta’s newly-launched Threads. The issue: Threads, which was built on top of Instagram’s user database, had said it complied with the ActivityPub protocol, which allows Mastodon servers (“instances”) to federate with any other service that also uses that protocol. The potential threat that Threads would become interoperable and that potentially millions of Threads users would swamp Mastodon, ignoring its existing social norms and culture, created an existential dilemma: to federate or not to federate?

Today, Threads’ integration is still just a plan.

Instead, the first disruptive arrival looks set to be Bluesky, created by a team backed by Twitter co-founder Jack Dorsey, with the connection facilitated by a third party. Bluesky wrote a new open source protocol, AT, so the proposal isn’t federation with Mastodon but a bridge, as Amanda Silberling reports at TechCrunch. According to Silberling’s numbers, year-old Bluesky stands at 4.8 million users to Mastodon’s 8.7 million. Anyone familiar with the history of AOL’s gateway to Usenet will tell you that’s big enough to disrupt existing social norms. The AOL exercise was known as Eternal September (because every September Usenet had to ingest a new generation of incoming university freshmen).

There are two key differences, however. First, a third of those Bluesky users are new to that system, only joining last week, when the service opened fully to the public. They will bring challenges to the culture Bluesky has so far developed. Second, AOL’s gateway was unidirectional: AOLers could read and post to Usenet newsgroups, but Usenet posters could not read anything on AOL without paying for access. The Bluesky-Mastodon bridge is planned to be bidirectional, so anything posted publicly on one service would be accessible to both – or to outsiders using BridgyFed to connect via website feeds.

I haven’t spent a lot of time on Bluesky, but it’s clear it and Mastodon have different cultures. Friends who spend more time there say Bluesky has a “weirdness” they like and is less “scoldy” than Mastodon, where long-time users tended to school incoming ex-Twitter users in 2022 on their mistakes. That makes sense, when you consider that Mastodon has had time since its 2016 founding to develop an existing culture that newcomers are joining, where Bluesky has been a closed beta group until last week, and its users to date were the ones defining its culture for the future. The newcomers of the past week may have a very different experience.

Even if they don’t, there’s a fundamental economic difference that no technology can bridge: Mastodon is a non-profit cooperative endeavor, while Bluesky has venture capital funding, although the list of investors is not the usual suspects. Social media users have often been burned by corporate business decisions. It’s therefore easy to believe that the $8 million in seed funding will lead inevitably to user data exploitation, no matter what the company says now about being determined to find a different and more sustainable business model based on selling ancillary services. Even if that strategy works, later owners or the dictates of shareholders may demand higher profits via a pivot to advertising, just as the Netflix and Amazon Prime streaming services are doing now.

Designing any software involves making rules for how it will operate and setting defaults. Here’s where the project hit trouble: should it be opt-out, so that users who don’t want their posts to be visible outside their home system have to specifically turn it off, or opt-in, so that users who want their posts published far and wide have to turn it on? BridgyFed’s creator, Ryan Barrett chose opt-out. It was immediately divisive: privacy versus openness.

Silberman reports that Barrett has fashioned a solution, giving users warning pop-ups and a chance to decline if someone from another service tries to follow them, and is thinking more carefully about the risks to safety his bridge might bring.

That’s great, but the next guy may not be so willing to reconsider. As we’ve observed before, there is no way to restrict the use of open protocols without closing them and putting them under centralized control – which is the opposite of the federated, decentralized systems Mastodon and Bluesky were created to build.

In a federated system, anything one person can open another can close. Individual admins will decide for their users how their instances will operate. Those who don’t like their admin’s choices will be told they can port their accounts to an instance whose policies they prefer. That’s true, but unsatisfying as an answer. As the “Fediverse” grows, it must accommodate millions of mainstream users for whom moving servers is too complicated.

The key point, however, is that the illusion of control Mastodon seemed to offer is being punctured. Usenet users could have warned them: from its creation in 1979, users believed their postings were readable for a few weeks before expiring and being expunged. Then, in 1995, Steve Madere created the Deja News archive from scattered collections. Overnight, those “ephemeral” postings became permanent and searchable – and even more so, after 2001, when Google bought the archive (see groups.google.com).

The upshot: privacy in public networks is only ever illusory. Assume you have no control over anything you post, no matter how cozy and personal the network seems. As we’ve said before, the privacy-in-public afforded by the physical world has no online counterpart.

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: The Gutenberg Parenthesis

The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet
By Jeff Jarvis
Bloomsbury Academic
ISBN: 978-1-5013-9482-9

There’s a great quote I can’t trace: a singer from whom Sir Walter Scott collected folk songs told him he had killed their songs by printing them. Printing had, that is, removed the songs from the oral culture of repeated transmission, often with alterations, from singer to singer. Like pinning down a butterfly.

In The Gutenberg Parenthesis, Jeff Jarvis argues that modern digital culture offers the chance of a return to the collaborative culture that dominated most of human history. Jarvis is not the first to suggest that our legacy media are an anomaly. In his 2013 book Writing on the Wall, Tom Standage calls out the last 150 years of corporate-owned for-profit media as an anomaly in the 2,000-year sweep of social media. In his analogy, the earliest form was “Roman broadband” (slaves) carrying messages back and forth. Standage finds other historical social media analogues in the coffeehouses that hatched the scientific revolution. Machines, both print and broadcast, made us consumers instead of participants. In Jarvis’s account, printing made institutions and nation-states, the same ones that now are failing to control the new paradigm.

The “Gutenberg parenthesis” of Jarvis’s title was coined by Lars Ole Sauerberg, a professor at the University of Southern Denmark, who argues (in, for example, a 2009 paper for the journal Orbis Litterarum) that the arrival of the printing press changed the nature of cognition. Jarvis takes this idea and runs with it: if we are, as he believes, now somewhere in a decades- or perhaps centuries-long process of closing the parenthesis – that is, exiting the era of print bracketed by Gutenberg’s invention of the printing press and the arrival of digital media – what comes next?

To answer this question, Jarvis begins by examining the transition *into* the era of printing. The invention of movable type and printing presses by themselves brought a step down in price and a step up in scale – what had once been single copies available only to people rich enough to pay a scribe suddenly became hundreds of copies that were still expensive. It took two centuries to arrive at the beginnings of copyright law, and then the industrial revolution to bring printing and corporate ownership at today’s scale.

Jarvis goes on to review the last two centuries of increasingly centralized and commercialized publishing. The institutions print brought provided authority that enabled them to counter misinformation effectively. In our new world, where these institutions are being challenged, many more voices can be heard – good, for obvious reasons of social justice and fairness, but unfortunate in terms of the spread of misinformation, malinformation, and disinformation. Jarvis believes we need to build new institutions that can enable the former and inhibit the latter. Exactly what those will look like is left as an exercise for the reader in the times to come. Could Gutenberg have predicted Entertainment Weekly?

Five seconds

Careful observers posted to Hacker News this week – and the Washington Post reported – that the X formerly known as Twitter (XFKAT?) appeared to be deliberately introducing a delay in loading links to sites the owner is known to dislike or views as competitors. These would be things like the New York Times and selected other news organizations, and rival social media and publishing services like Facebook, Instagram, Bluesky, and Substack.

The 4.8 seconds users clocked doesn’t sound like much until you remember, as the Post does, that a 2016 Google study found that 53% of mobile users will abandon a website that takes longer than three seconds to load. Not sure whether desktop users are more or less patient, but it’s generally agreed that delay is the enemy.

The mechanism by which XFKAT was able to do this is its built-in link shortener, t.co, through which it routes all the links users post. You can see this for yourself if you right-click on a posted link and copy the result: you get a t.co address. You can only find the original link by letting the t.co link resolve and copying the real URL out of the browser address bar after the page has loaded.
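That manual routine – let the redirect resolve, then read off where you actually landed – can be automated. A minimal sketch using only Python’s standard library (the t.co address in the usage note is a placeholder, not a real link):

```python
# Sketch: recover the real destination behind a shortener-style link
# by following its HTTP redirects and reading the final URL.
import urllib.request

def resolve(short_url: str) -> str:
    """Follow HTTP redirects and return the final URL."""
    with urllib.request.urlopen(short_url) as response:
        return response.geturl()  # the URL after all redirects
```

Calling something like `resolve("https://t.co/…")` would return the destination the shortener hides – and, as the next paragraphs note, every such lookup is also a data point the shortener’s operator gets to log.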

Whether or not the company was deliberately delaying these connections, the fact is that it *can* – as can Meta’s platforms and many others. This in itself is a problem; essentially it’s a failure of network neutrality. This is the principle that a telecoms company should treat all traffic equally, and it is the basis of the egalitarian nature of the Internet. Regulatory insistence on network neutrality is why you can run a voice over Internet Protocol connection over broadband supplied by a telco or telco-owned ISP even though the services are competitors. Social media platforms are not subject to these rules, but the delaying links story suggests maybe they should be once they reach a certain size.

Link shorteners have faded into the landscape these days, but they were controversial for years after the first such service – TinyURL – was launched in 2002 (per Wikipedia). Critics cited three main issues: privacy, persistence, and obscurity. The last refers to users’ inability to know where their clicks are taking them; I feel strongly about this myself. The privacy issue is that the link shorteners-in-the-middle are in a position to collect traffic data and exploit it (bad actors could also divert links from their intended destination). The ability to collect that data and chart “impact” is, of course, one reason shorteners were widely adopted by media sites of all types. The persistence issue is that intermediating links in this way creates one or more central points of failure. When the link shortener’s server goes down for any reason – failed Internet connection, technical fault, bankrupt owner company – the URL the shortener encodes becomes unreachable, even if the page itself is available as normal. You can’t go directly to the page, or even locate a cached copy at the Internet Archive, without the original URL.

Nonetheless, shortened links are still widely used, for the same reasons they were invented. Many URLs are very long and complicated. In print publications, they are visually overwhelming and unwieldy to copy into a web address bar; they are near-impossible to proofread in footnotes and citations. They’re even worse to read out on broadcast media. Shortened links solve all that. No longer germane is the 140-character limit Twitter had in its early years; because the URL counted toward that maximum, short was crucial. Since then, the character limit has gotten bigger, and URLs aren’t included in the count any more.

If you do online research of any kind you have probably long since internalized the routine of loading the linked content and saving the actual URL rather than the shortened version. This turns out to be one of the benefits of moving to Mastodon: the link you get is the link you see.

So to network neutrality. Logically, its equivalent for social media services ought to include the principle that users can post whatever content or links they choose (law and regulation permitting), whether that’s reposted TikTok videos, a list of my IDs on other systems, or a link to a blog advocating that all social media companies be forced to become public utilities. Most have in fact operated that way until now, infected just enough with the early Internet ethos of openness. Changing that unwritten social contract is very bad news even though no one believed XFKAT’s CEO when he insisted he was a champion of free speech and called the now-his site the “town square”.

If that’s what we want social media platforms to be, someone’s going to have to force them, especially if they begin shrinking and their owners start to feel the chill wind of an existential threat. You could even – though no one is, to the best of my knowledge – make the argument that swapping in a site-created shortened URL is a violation of the spirit of data protection legislation. After all, no one posts links on a social media site with the view that their tastes in content should be collected, analyzed, and used to target ads. Librarians have long been stalwarts in resisting pressure to disclose what their patrons read and access. In the move online in general, and to corporate social media in particular, we have utterly lost sight of the principle of the right to our own thoughts.

Illustrations: The New York City public library in 2006.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Watching YouTube

One of the reasons it’s so difficult to figure out what to do about misinformation, malinformation, and disinformation online is the difficulty of pinpointing how online interaction translates to action in the real world. The worst content on social media has often come from traditional media or been posted by an elected politician.

At least, that’s how it seems to text-based people like me. This characteristic, along with the quick-hit compression of 140 (later 280) characters, was the (minority) appeal of Twitter. It’s also why legacy media pays so little attention to what’s going on in game worlds, struggles with TikTok, and underestimates the enormous influence of YouTube. The notable exception is the prolific Chris Stokel-Walker, who’s written books about both YouTube and TikTok.

Stokel-Walker has said he decided to write YouTubers because the media generally only notices YouTube when there’s a scandal. Touring those scandals occupies much of filmmaker Alex Winter‘s now-showing biography of the service, The YouTube Effect.

The film begins by interviewing co-founder Steve Chen, who giggles a little uncomfortably to admit that he and co-founders Chad Hurley and Jawed Karim thought it could be a video version of Hot or Not?. In 2006, Google bought the year-old site for $1.65 billion in Google stock, to derision from financial commentators certain it had overpaid.

Winter’s selection of clips from early YouTube is reminiscent of early cinema, which pulled people into theaters with little girls having a pillow fight. Winter moves on through pioneering stars like Smosh and K-Pop, 2010’s Arab spring, the arrival of advertising and monetization, the rise of alt-right channels, Gamergate, the 2016 US presidential election, the Christchurch shooting, the horrors lurking in YouTube Kids, George Floyd, the multimillion-dollar phenomenon of Ryan Kaji, January 6, and the 2020 Congressional hearings. Somewhere in the middle is the arrival of the Algorithm that eliminated spontaneous discovery in favor of a guided user experience, and a brief explanation of the role of Section 230 of the Communications Decency Act in protecting platforms from liability for third-party content.

These stories are told by still images and video clips interlaced with interviews with talking heads like Caleb Cain, who was led into right-wing extremism and found his way back out; Andy Parker, father of Alison Parker, footage of whose murder he has been unable to get expunged; successful YouTuber (“ContraPoints”) Natalie Wynn; technology writer and video game developer Brianna Wu; Jillian C. York, author of Silicon Values; litigator Carrie Goldberg, who works to remediate online harms one lawsuit at a time; Anthony Padilla, co-founder of Smosh; and YouTube then-CEO Susan Wojcicki.

Not included among the interviewees: political commentators (though we see short clips of Alex Jones) or free speech fundamentalists. In addition, Winter sticks to user-generated content, ignoring the large percentage of YouTube’s library that is copies of professional media, many otherwise unavailable. Countries outside the US are mentioned only by York, who studies censorship around the world. Also missing is anyone from Google who could explain how YouTube fits into its overall business model.

The movie concludes by asking commentators to recommend changes. Parker wants families of murder victims to be awarded co-copyright and therefore standing to get footage of victims’ deaths removed. Hany Farid, a UC Berkeley professor who studies deepfakes, thinks it’s essential to change the business model from paying with data and engagement to paying with money – that is, subscriptions. Goldberg is afraid we will all become captives of Big Tech. A speaker whose name is illegible in my notes mentions antitrust law. Cain notes that there’s nothing humans have built that we can’t destroy. Wojcicki says only that technology offers “a tremendous opportunity to do good in the long-term”. York notes the dual-use nature of these technologies; their effects are both good and bad, so what you change “depends what you’re looking for”.

Cain gets the last word. “What are we speeding towards?” he asks, as the movie’s accelerating crescendo of images and clips stops on a baby’s face.

Unlike predecessors Coded Bias (2021) and The Great Hack (2019), The YouTube Effect is unclear about what it intends us to understand about YouTube’s impact on the world beyond the sheer size of audience a creator can assemble via the platform. The array of scandals, all of them familiar from mainstream headlines, makes a persuasive case that YouTube deserves Facebook and Twitter-level scrutiny. What’s missing, however, is causality. In fact, the film is wrongly titled: there is no one YouTube effect. York had it right: “fixing” YouTube requires deciding what you’re trying to change. My own inclination is to force change to the business model. The algorithm distorts our interactions, but it’s driven by the business model.

Perhaps this was predictable. Seven years on, we still struggle to pinpoint exactly how social media affected the 2016 US presidential election or the UK’s EU referendum vote. Letting it ride is dangerous, but so is government regulation. Numerous governments are leaning toward the latter.

Even the experts assembled at last week’s Cambridge Disinformation Summit reached no consensus. Some saw disinformation as an existential threat; others argued that disinformation has always been with us and humanity finds a way to live through it. It wouldn’t be reasonable to expect one filmmaker to solve a conundrum that is vexing so many. And yet it’s still disappointing not to have found greater clarity.

Illustrations: YouTube CEO (2014-2023) Susan Wojcicki (via The YouTube Effect).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.

The horns of a dilemma

It has always been possible to conceive a future for Mastodon and the Fediverse that goes like this: incomers join the biggest servers (“instances”). The growth of those instances, if they can afford it, accelerates. When the sysadmins of smaller instances burn out and withdraw, their users also move to the largest instances. Eventually, the Fediverse landscape is dominated by a handful of very large instances (who enshittify in the traditional way) with a long tail of small and smaller ones. The very large ones begin setting rules – mostly for good reasons like combating abuse, improving security, and offering new features – that the very small ones struggle to keep up with. Eventually, it becomes too hard for most small instances to function.

This is the history of email. In 2003, when I set up my own email server at home, almost every techie had one. By this year, when I decommissioned it in favor of hosted email, almost everyone had long since moved to Gmail or Hotmail. It’s still possible to run an independent server, but the world is increasingly hostile to them.

Another possible Fediverse future: the cultural norms that Mastodon and other users have painstakingly developed over time become swamped by a sudden influx of huge numbers of newcomers when a very large instance joins the federation. The newcomers, who know nothing of the communities they’re joining, overwhelm their history and culture. The newcomers are despised and mocked – but meanwhile, much of the previous organically grown culture is lost, and people wanting intelligent conversation leave to find it elsewhere.

This is the history of Usenet, which in 1994 struggled to absorb 1 million AOLers arriving via a new gateway and software whose design reflected AOL’s internal design rather than Usenet’s history and culture. The result was to greatly exacerbate Usenet’s existing problems of abuse.

A third possible Fediverse future: someone figures out how to make money out of it. Large and small instances continue to exist, but many become commercial enterprises, and small instances increasingly rely on large instances to provide services the small instances need to stay functional. While both profit from that division of labor, the difficulty of discovery means small servers stay small, and the large servers become increasingly monopolistic, exploitative, and unpleasant to use. This is the history of the web, with a few notable exceptions such as Wikipedia and the Internet Archive.

A fourth possible future: the Fediverse remains outside the mainstream, and admins continue to depend on donations to maintain their servers. Over time, the landscape of servers will shift as some burn out or run out of money and are replaced. This is roughly the history of IRC, which continues to serve its niche. Many current Mastodonians would be happy with this; as long as there’s no corporate owner, no one can force anyone out of business for being insufficiently profitable.

These forking futures are suddenly topical as Mastodon administrators consider how to respond to this: Facebook will launch a new app that will interoperate with Mastodon and any other network that uses the ActivityPub protocol. Early screenshots suggest a clone of Twitter, Meta’s stated target, and reports say that Facebook is talking to celebrities like Oprah Winfrey and the Dalai Lama as potential users. The plan is reportedly that users will access the new service via their Instagram IDs and passwords. Top-down and celebrity-driven is the opposite of the Fediverse.

It should not be much comfort to anyone that the competitor the company wants to kill with this initiative is Twitter, not Mastodon, because either way Meta doesn’t care about Mastodon and its culture. Mastodon is a rounding error even for just Instagram. Twitter is also comparatively small (and, like Reddit, too text-based to grow much further) but Meta sees in it the opportunity to capture its influencers and build profits around them.

The Fediverse is a democracy in the sense that email and Usenet were; admins get to decide their server’s policy, and users can only accept or reject by moving their account (which generally loses their history). For admins, how to handle Meta is not an easy choice. Meta has approached the admins of some of the larger Mastodon instances for discussions; they must sign an NDA or give up the chance to influence developments. That decision is for the largest few; but potentially every Mastodon instance operator will have to decide the bigger question: do they federate with Meta or not? Refusal means their users can’t access Meta’s wider world, which will inevitably include many of their friends; acceptance means change and loss of control. As I’ve said here before, something that is “open” only to your concept of “good people” isn’t open at all; it’s closed.

At Chronicles of the Instantly Curious, Carey Lening deplores calls to shun Meta as elitist; the AOL comparison draws itself. Even so, the more imminent bad future for Mastodon is the possibility that this is the fork that could split the Fediverse into two factions. Of course the point of being decentralized is to allow more choice over who you socially network with. But until now, none of those choices took on the religious overtones associated with the most heated cyberworld disputes. Fasten your seatbelts…

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Follow on Mastodon.