Review: The Seven Rules of Trust

The Seven Rules of Trust: Why It Is Today’s Essential Superpower
by Jimmy Wales
Bloomsbury
ISBN: 978-1-5266-6501-0

Most people have probably either forgotten or never knew that when Jimmy Wales first founded Wikipedia it was widely criticized. A lot of people didn’t believe an encyclopedia written and edited by volunteers could be any good. Many others believed free access would destroy Britannica’s business model, and reacted resentfully. Teachers warned students against using it, despite the fact that Wikipedia’s talk pages offer rare transparency into how knowledge is curated.

Now we know the Internet is big enough for both Wikipedia and Britannica.

Much of Wikipedia’s immediate value lay in its infinite expandability; it covered in detail many subjects the more austere Britannica considered unworthy. But, as Wales writes at the beginning of his recent book, The Seven Rules of Trust, Wikipedia’s biggest challenge was finding a way to become trusted. Britannica must have faced this too, once. Its solution was to build upon the reputation of the paid experts who write its entries. Wikipedia settled on passion, transparency, and increasingly rigorous referencing. As it turns out, collectively we know a lot. Today, Wikipedia is nearly 100 times the size of Britannica, has hundreds of language editions, and is so widely trusted that most of us don’t even think about how often we consult it.

In The Seven Rules of Trust, Wales tells the story of how Wikipedia got from joke to trusted resource. It began, he says, with its editors trusting each other. For this part of his story, he relies on Frances Frei’s model of trust, a triangle balancing authenticity, empathy, and logic. Editors’ trust enabled the collaboration that could build public trust in their work, which is guided by Wikipedia’s five pillars.

Wales’s seven rules are not complicated: trust is personal, even at scale; people are born to connect and collaborate; successful collaboration requires a clear positive shared purpose; give trust to get trust; practice civility; stick to your mission and avoid getting involved in others’ disputes; embrace transparency. Some of these could be reframed as the traditional virtues, as when Wales talks about the principle of “assume good faith” when trying to negotiate the diversity of others’ opinions to reach consensus on how to present a topic. I think of this as “charity”. Either way, it’s not meant to be infinite; good faith can be abused, and Wales goes on to talk about how Wikipedia handles trolls, self-promoters, and other problems.

Yet, Wales’s account feels rosy. Many of his stories about remediating the site’s flaws revolve around one or two individuals who personally built up areas such as Wikipedia’s coverage of female scientists. I’m not sure he’s in a position to recognize how often would-be contributors are quickly deterred by an editor fiercely defending their domain or how difficult it’s become to create a new page and make sure it stays up. And, although he nods at the hope that the book will help recruit new editors, he doesn’t discuss the problem of churn Wikipedia surely faces.

Having steered the creation of something as gigantic and seemingly unlikely as Wikipedia, Wales has certainly earned the right to explain how he did it in the hope of helping others embarking on similarly large and unlikely projects. Wales argues that trust has enabled diversity of opinion, and the resulting internal disagreement has improved Wikipedia’s quality. Almost certainly true, but hard to apply to more diffuse missions; see today’s cross-party politics.

Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler finds in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes it easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The incident reminded me that this type of fraud in general seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee Inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler is correct in his assessment, then until or unless advertisers rebel, the incentives are misplaced and change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4Chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transactions Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that Ofcom is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, on X Byrne is advocating that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one. The draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July in a report the House of Representatives Judiciary Committee called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “least common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at ACLU.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Software is still forever

On October 14, a few months after the tenth anniversary of its launch, Microsoft will end support for Windows 10. That is, Microsoft will no longer issue feature or security updates or provide technical support, and everyone is supposed to either upgrade their computers to Windows 11 or, if Microsoft’s installer deems the hardware inadequate, replace them with newer models. People who “need more time”, in the company’s phrasing, can buy a year’s worth of security updates. Either way, Microsoft profits at our expense.

In 2014, Microsoft similarly end-of-lifed 13-year-old Windows XP. Then, many were unsympathetic to complaints about it; many thought it unreasonable to expect a company to maintain software for that long. Yet it was obvious even then that software lives on with or without support for far longer than people expect, and also that trashing millions of functional computers was stupidly wasteful. Microsoft is giving Windows 10 a *shorter* life, which is rather obviously the wrong direction for a planet drowning in electronic waste.

XP’s end came at a time when the computer industry was transitioning from adolescence to maturity. As long as personal computing was being constrained by the limited capabilities of hardware and research and development was improving them at a fast pace, a software company like Microsoft could count on frequent new sales. By 2014, that happy time had ended, and although computers continue to add power and speed, it’s not coming back. The same pattern has since been repeated with phones, which no longer improve on an 18-month cycle as they did in the 2010s, and with cameras.

For the vast majority, there’s no reason to replace their old machine unless a non-replaceable part is failing – and there should be less of that as manufacturers are forced to embrace repairability. Significantly, there’s less and less difference for many of us if we keep the old hardware and switch to Linux, eliminating Microsoft entirely.

Those fast-moving days were real obsolescence. What we have now is what we used to call “planned obsolescence”. That is, *forced* obsolescence that companies impose on us because it’s convenient and profitable for *them*.

This time round, people are more critical, not least because of the vast amounts of ewaste being generated. The Public Interest Research Group has written an open letter asking people to petition Microsoft to extend free support for Windows 10. As Ed Bott explains at ZDNet, you do have the option of kicking the can down the road by paying for updates for another three years.

The other antisocial side of terminating free security updates is that millions of those still-functional machines will remain in use, and will be increasingly insecure as new vulnerabilities are discovered and left unpatched.

Simultaneously, Windows is enshittifying: it’s harder to run Windows without a Microsoft login, to avoid stupid gewgaws and unwanted news headlines, and to turn off its “Copilot AI”. Tom Warren reports at The Verge that Microsoft wants to turn Copilot into an agent that can book restaurants and control its Edge browser. There are, it appears, ways to defeat all this in Windows 11, but for how long?

In a piece on solar technology, Doctorow outlines the process by which technology companies seize control once they can no longer rely on consumer demand to drive sales. They lock down their technology if they can, lock in customers, add advertising and block market entry claiming safety and/or security make it necessary. They write and lobby for legislation that enshrines their advantage. And they use technological changes to render past products obsolete. Many think this is the real story behind the insistence on forcing unwanted “AI” features into everything: it’s the one thing they can do to make their offerings sound new.

Seen in that light, the rush to build “AI” into everything becomes a rush to find a way to force people to buy new stuff. The problem is that – it feels like – most people don’t see much benefit in it, and go around turning off the AI features that are forced on them. Microsoft’s Recall feature, which takes a screen snapshot every few seconds, was so controversial at launch that the company rolled it back – for a while, anyway.

Carelessness about ewaste is everywhere, particularly with respect to the Internet of Things. This week: Logitech’s Pop smart home buttons. At least when Google ended support for older Nest thermostats they could go on working as “dumb” thermostats (which honestly seems like the best kind).

Ewaste is getting a whole lot worse when it desperately needs to be getting a whole lot better.

***

In the ongoing rollout of the Online Safety Act and age verification update, at 404 Media, Joseph Cox reports that Discord has become the first site reporting a hack of age verification data. Hackers have collected data pertaining to 70,000 users, including selfies, identity documents, email addresses, approximate residences, and so on, and are trying to extort Discord, which says the hackers breached one of its third-party vendors that handles age-related appeals. Security practitioners warned about this from the beginning.

In addition, Ofcom has launched a new consultation for the next round of Online Safety Act enforcement. Up next are livestreaming and algorithmic recommendations; the Open Rights Group has an explainer, as does lawyer Graham Smith. The consultation closes on October 20.

Illustrations: One use for old computers – movie stardom, as here in Brazil.


Passing the Uncanny Valley

A couple of weeks ago, the Greenwich Skeptics in the Pub played host to Sophie Nightingale, who studies the psychology of AI deepfakes. The particular project she spoke about was an experiment in whether people can be trained to be better at distinguishing them from real images.

In Nightingale’s experiments, she carefully matched groups of real images to synthetic ones, first created by generative adversarial networks (GANs), later by diffusion models, and asked human raters to judge which were real.

Then the humans were given some training in what to look for to detect fakes and the experiment was rerun with new sets of faces. The bad news: the training made a little difference, but not much. She went on to do similar experiments with diffusion images.

Nightingale has gone on to do some cross-modal experiments, including audio as well as images, following the 2024 election incident in which New Hampshire voters received robocalls from a faked Joe Biden intended to discourage voters in the January 2024 primary. In the audio experiment, she played the test subjects very short snippets. Played for us in the pub, it was very hard to tell real from fake, and her experimental subjects did no better. I would expect longer clips to be more identifiable as fake. The Biden call succeeded in part because that type of fake had never been tried before. Now, voters, at least in New Hampshire, will know it’s possible that the call they’re getting is part of a newer type of disinformation campaign aimed at them.

In another experiment, she asked participants to rate the trustworthiness of the facial images they were shown, and was dismayed when they rated the synthetic faces slightly (7.7%) higher than the real ones. In the resulting paper for the Journal of Vision, she hypothesizes that this may be because synthetic faces tend to look more like “average” faces, which tend to be rated higher in trustworthiness, even if they’re not the most attractive.

Overall, she concludes that both still images and voice have “passed the Uncanny Valley”, and video will soon follow. In the past, I’ve chosen optimism about this sort of thing, on the basis that earlier generations have been fooled by technological artifacts that couldn’t fool us now for a second. The Cottingley Fairies look ridiculous after generations of knowledge of photography. On the other hand, Johannes Vermeer’s Girl with a Pearl Earring looks more real than modern deepfakes, even though the subject is generally described as imaginary. So it’s possible to think of it as a “deepfake”, painted in oils in the 17th century.

Fakes have always been with us. What generative AI has done to change this landscape is to democratize and scale their creation, just as it’s amping up the scale and speed of cyber attacks. It’s no longer necessary to be even barely competent; the tools keep getting easier.

Listening to Nightingale, it seems most likely that work like that in progress by one audience member – identifying technological artifacts that mark images as fake – will prove to be the right way forward. If those differences can be reliably identified, they could be built into technological tools that spot indicators we can’t perceive directly. If something like that can be embedded into devices – phones, eyeglasses, wristwatches, laptops – to spot and filter out fakes in real time, we should be able to regain some ability to trust what we see.

There are some obvious problems with this hoped-for future. Some people will continue to seek to exploit fakes; some may prefer them. The most likely outcome will be an arms race like that surrounding email spam and other battles between malware producers and security people. Still, it’s the first approach that seems to offer a practical solution to coping with a vastly diminished ability to know what’s real and what isn’t.

***

On the Internet your home always leaves you, part 4,563. Twenty-two-year-old blogging site Typepad will disappear in a few weeks. To those of us who have read blogs ever since they began, this news is shocking, like someone’s decided to tear down an old community church. Yes, the congregation has shrunk and aged, and it’s drafty and built on creaking old technology (in Typepad’s case, Movable Type), but it’s part of shared local history. Except it isn’t, because, as Wikipedia documents, corporate musical chairs means it’s now owned by private equity. Apparently it’s been closed to new signups since 2020, and its bloggers are now being told to move their sites before everything is deleted in September. It feels like the stars of the open web are winking out, one by one.

On the Internet everything is forever, but everything is also ephemeral. Ironically, the site’s marketing slug still reads: “Typepad is the reliable, flexible blogging platform that puts the publisher in control.”

Illustrations: “Girl with a Pearl Earring”, painted by Johannes Vermeer circa 1665.


Big bang

In 2008, when the recording industry was successfully lobbying for an extension to the term of copyright to 95 years, I wrote about a spectacular unfairness that was affecting numerous folk and other musicians. Because of my own history and sometimes present with folk music, I am most familiar with this area of music, which aside from a few years in the 1960s has generally operated outside of the world of commercial music.

The unfairness was this: the remnants of a label that had recorded numerous long-serving and excellent musicians in the 1970s were squatting on those recordings and refusing to either rerelease them or return the rights. The result was both artistic frustration and deprivation of a sorely-needed source of revenue.

One of these musicians is the Scottish legend Dick Gaughan, who had a stroke in 2016 and was forced to give up performing. Gaughan, with help from friends, is taking action: a GoFundMe is raising the money to pay “serious lawyers” to get his rights back. Whether one loved his early music or not – and I regularly cite Gaughan as an important influence on what I play – barring him from benefiting from his own past work is just plain morally wrong. I hope he wins through; and I hope the case sets a precedent that frees other musicians’ trapped work. Copyright is supposed to help support creators, not imprison their work in a vault to no one’s benefit.

***

This has been the first week of requiring age verification for access to online content in the UK; the law came into effect on July 25. Reddit and Bluesky, as noted here two weeks ago, were first, but with Ofcom starting enforcement, many are following. Some examples: Spotify; X (exTwitter); Pornhub.

Two classes of problems are rapidly emerging: technical and political. On the technical side, so far it seems like every platform is choosing a different age verification provider. These AVPs are generally unfamiliar companies in a new market, and we are being asked to trust them with passports, driver’s licenses, credit cards, and selfies for age estimation. Anyone who uses multiple services will find themselves having to widely scatter this sensitive information. The security and privacy risks of this should be obvious. Still, Dan Milmo reports at the Guardian that AVPs are already processing five million age checks a day. It’s not clear yet if that’s a temporary burst of one-time token creation or a permanently growing artefact of repetitious added friction, like cookie banners.

X says it will examine users’ email addresses and contact books to help estimate ages. Some systems reportedly send referring page links, opening the way for the receiving AVP to store these and build profiles. Choosing a trustworthy VPN can be tricky, and these intermediaries are in a position to log what you do and exploit the results.

The BBC’s fact-checking service finds that a wide range of public interest content, including news about Ukraine and Gaza and Parliamentary debates, is being blocked on Reddit and X. Sex workers see adults being locked out of legal content.

Meanwhile, many are signing up for VPNs at pace, as predicted. The spike has led to rumors that the government is considering banning them. This seems unrealistic: many businesses rely on VPNs to secure connections for remote workers. But the idea is alarming; its logical extension is the war on general-purpose computation Cory Doctorow foresaw as a consequence of digital rights management in 2011. A terrible and destructive policy can serve multiple masters’ interests and is more likely to happen if it does.

On the political side, there are three camps. One wants the legislation repealed. Another wants to retain aspects many people agree on, such as criminalizing cyberflashing and some other types of online abuse, and fix its flaws. The third thinks the OSA doesn’t go far enough, and they’re already saying they want it expanded to include all services, generative AI, and private messaging.

More than 466,000 people have signed a petition calling on the government to repeal the OSA. The government responded: thanks, but no. It will “work with Ofcom” to ensure enforcement will be “robust but proportionate”.

Concrete proposals for fixing the OSA’s worst flaws are rare, but a report from the Open Rights Group offers some; it advises an interoperable system that gives users choice and control over methods and providers. Age verification proponents often compare age-gating websites to ID checks in bars and shops, but those don’t require you to visit a separate shop the proprietor has chosen and hand over personal information. At Ctrl-Shift, Kirra Pendergast explains some of the risks.

Surrounding all that is noise. A US lawyer wants to sue Ofcom in a US federal court (huh?). Reform leader Nigel Farage has called for the Act’s repeal, which led technology secretary Peter Kyle to accuse him – and then anyone else who criticizes the act – of being on the side of sexual predators. Kyle told Mumsnet he apologizes to the generation of UK kids who were “let down” by being exposed to toxic online content because politicians failed to protect them all this time. “Never again…”

In other news, this government has lowered the voting age to 16.

Illustrations: The back cover of Dick Gaughan’s out-of-print 1972 first album, No More Forever.


Cautionary tales

I’ve been online for nearly 34 years, and I’m thinking of becoming a child. Or at least, a child to big user-to-user social media services, which next week will start asking for proof of adulthood. On July 25, the new age verification requirements under the Online Safety Act come into effect in the UK. The regulator, Ofcom, has published a guide.

Plenty of companies aim to join this new market. Some are familiar: credit scorers Experian and TransUnion. Others are new: Yoti, which we saw demonstrated back in 2016, and World, the Sam Altman- and Andreessen Horowitz-backed six-year-old startup, which recently did a promotional tour for the UK launch of its Orb identification system. Summary: many happy privacy words, but still dubious.

Reddit picked Persona; Dearbail Jordan at the BBC says Redditors will need to upload either a selfie for age estimation or a government-issued ID. Reddit says it will not see this data, only storing each user’s verification status along with the birth date they’ve (optionally) provided.

Bluesky has chosen Kids’ Web Services from Epic Games. The announcement says KWS accepts multiple options: payment cards, ID scans, and face scans. Users who decline to supply this information will be denied access to adult content and direct messaging. How much do I care about either? Would I rather just be a child to two-year-old Bluesky?

On older sites my adulthood ought to be obvious: I joined Twitter/X in 2008 and Reddit in 2015. Do the math, guys! I suppose there is a chance I could have created the account, forgotten it, and then revived it for a child (the “older brother problem”), but I’m not sure these third-party verifiers solve that either.

Everyone wants to protect children. But it doesn’t make sense to do it by creating a system that exposes everyone, including children, to new privacy risks. In its report on how to fix the OSA, the Open Rights Group argues that interoperability and portability should be first principles, and that users should be able to choose providers and methods. Today, the social media companies don’t see age verification data; in five years will they be buying up those providers? These first steps matter, as they are setting the template for what is to come.

This is the opening of a floodgate. On June 27 the US Supreme Court ruled in Free Speech Coalition v. Paxton to uphold a law requiring pornographic websites to verify users’ ages through government-issued ID. At TechDirt, Mike Masnick described the ruling as taking a chainsaw to the First Amendment.

It’s easy to predict that there will be scandals surrounding the data age verifiers collect, and others where technological failures let children access the wrong sort of content. We’ll hear less about the frustrations of people who are blocked by age verification from essential information. Meanwhile, child safety folks will continue pushing for new levels of control.

The big question is this: how will we know if it’s working? What does “success” look like?

***

At Platformer, Casey Newton covers Substack’s announcement that it has closed a $100 million series C funding round, valuing the company at $1.1 billion. The eight-year-old company gets to say it’s a unicorn.

Newton tries to understand how Substack is worth that. He predicts – logically – that its only choice to justify its venture capitalists’ investment will be rampant enshittification. These guys don’t put in that kind of money without expecting a full-bore return, which is why Newton is dubious about the founders’ promise to invest most of that newly-raised capital in creators. Recall the stages Cory Doctorow laid out: first they amass as many users as possible; then they abuse those users to amass as many business customers (advertisers) as possible; then they squeeze everyone.

Substack, which announced four months ago that it – or, more correctly, its creators – has more than 5 million paid subscriptions, is different in that its multi-sided market structure is more like Uber or Amazon Marketplace than like a social media site or traditional publisher. It has users (readers and listeners), creators (like Uber’s drivers or Amazon’s sellers), and customers (advertisers). Viewed that way, it’s easy to see Substack’s most likely path: raise prices (users and advertisers), raise thresholds and commissions (creators), and, like Amazon, force sellers (creators) into using fee-based additional services in order to stay afloat. Plus, it must crush the competition. See similar math from Anil Dash.

Less ponderable is the headwind of Substack’s controversial hospitality to extremists, noted in 2023 by Jonathan Katz at The Atlantic. Some creators – like Newton – have opted to leave for competitor Ghost, which is both open source and cheaper. Many friends refuse to pay Substack even when they want to support creators whose work they admire. At the time, Stephen Bush responded at the Financial Times that Substack should admit that it’s not a publisher but a “handy bit of infrastructure for sending newsletters”. Is that worth $1.1 billion?

Like earlier Silicon Valley companies, Substack is planning to reverse its previous disdain for advertising, as Benjamin Mullin and Jessica Testa report at the New York Times. The company is apparently also looking forward to embracing social networking.

So, no really new ideas, then?

Illustrations: Unicorn (by Pearson Scott Foresman via Wikimedia).


Revival

There appears to be media consensus: “Bluesky is dead.”

At Commentary, James Meigs calls Bluesky “an expression of the left’s growing hypersensitivity to ideas leftists find offensive”, and says he accepts exTwitter’s “somewhat uglier vibe” in return for “knowing that right-wing views aren’t being deliberately buried”. Then he calls Bluesky “toxic” and a “hermetically sealed social-media bubble”.

At New Media and Marketing, Rich Meyer says Bluesky is in decline and engagement is dropping, and exTwitter is making a comeback.

At Slate, Alex Kirshner and Nitish Pahwa complain that Bluesky feels “empty”, say that its too-serious users are abandoning it because it isn’t fun, and compare it to a “small liberal arts college” and exTwitter to a “large state university”.

At The Spectator, Sean Thomas regrets that “Bluesky is dying” – and claims to have known it would fail from his first visit to the site, “a bad vegan cafe, full of humorless puritans”.

Many of these pieces – Mark Cuban at Fortune, for example, and Megan McArdle at the Washington Post – blame a “lack of diversity of thought”.

As Mike Masnick writes on TechDirt in its defense (Masnick is a Bluesky board member), “It seems a bit odd: when something is supposedly dying or irrelevant, journalists can’t stop writing about it.”

Have they so soon forgotten 2014, when everyone was writing that Twitter was dead?

Commentators may be missing that success for Bluesky looks different: it’s trying to build a protocol-driven ecosystem, not a site. Twitter had one, but destroyed it as its ad-based business model took over. Both Bluesky and Mastodon, which media largely ignores, aim to let users create their own experience and are building tools that give users as much control as possible. It seems to offend some commentators that these networks let you block people you don’t want to deal with, but that’s odd, since blocking is a feature every social site has.

All social media have ups and downs, especially when they’re new (I really wonder how many of these commentators experienced exTwitter in its early days or have looked at Truth Social’s user numbers). Settling into a new environment and rebuilding take time – it may look like the old place, but its affordances are different, and old friends are missing. Meanwhile, anecdotally, some seem to be leaving social media entirely, driven away by privacy issues, toxic behavior, distaste for platform power and its owners, or simply distracted by life. Few of us *have* to use social media.

***

In 2002, the UK’s Financial Services Authority was the first to implement an EU directive allowing private organizations to issue their own electronic money without a banking license if they could meet the capital requirements. At the time, the idea seemed kind of cute, especially since there was a plan to waive some of the requirements for smaller businesses. Everyone wanted micropayments; here was a framework of possibility.

And then nothing much happened. The Register’s report (the first link above) said that organizations such as the Post Office, credit card companies, and mobile operators were considering launching emoney offerings. If they did, the results sank without trace. Instead, we’re all using credit/debit cards to pay for stuff online, just as we were 23 years ago. People are reluctant to trust weird, new-fangled forms of money.

Then, in 2008, came cryptocurrencies – money as lottery ticket.

Last week, the Wall Street Journal reported that Amazon, Wal-Mart, and other multinationals are exploring stablecoins as a customer payment option – in other words, issuing their own cryptocurrencies, pegged to the US dollar. As Andrew Kassel explains at Investopedia, the result could be to bypass credit cards and banks, saving billions in fees.

It’s not clear how this would work, but I’m suspicious of the benefits to consumers. Would I have to buy a company’s stablecoin before doing business with it? And maintain a floating balance? At Axios, Brady Dale explores other possibilities. Ultimately, it sounds like a return to the 1970s, before multipurpose credit cards, when people had store cards from the retailers they used frequently, and paid a load of bills every month. Dale seems optimistic that this could be a win for consumers as well as retailers, but I can’t really see it.

In other words, the idea seems less cute now, less fun technological experiment, more rapacious. There’s another, more disturbing, possibility: the return of the old company town. Say you work for Amazon or Wal-Mart, and they offer you a 10% bonus for taking your pay in their stablecoin. You can’t spend it anywhere but their store, but that’s OK, right, because they stock everything you could possibly want? A modern company town doesn’t necessarily have to be geographical.

I’ve long thought that company towns, which allowed companies to effectively own employees, are the desired endgame for the titans. Elon Musk is heading that way with Starbase, Texas, now inhabited primarily by SpaceX employees, as Elizabeth Crisp reports at The Hill.

I don’t know if the employees who last month voted enthusiastically for the final incorporation of Starbase realize how abusive those old company towns were.

Illustrations: The Starbase sign adjoining Texas Highway 4, in 2023 (via Jenny Hautmann at Wikimedia).


Sovereign

On May 19, a group of technologists, researchers, economists, and scientists published an open letter calling on British prime minister Keir Starmer to prioritize the development of “sovereign advanced AI capabilities through British startups and industry”. I am one of the many signatories. Britain’s best shot at the kind of private AI research lab under discussion was DeepMind, sold to Google in 2014; the country has nothing now that’s domestically owned.

Those with long memories know that LEO was the first computer used for a business application – running the operations of the Lyons tea rooms. In the 1980s, Britain led personal computing.

But the bigger point is less about AI specifically and more about information technology generally. At a panel at Computers, Privacy, and Data Protection in 2022, the former MEP Jan Philipp Albrecht, who was the European Parliament’s rapporteur for the General Data Protection Regulation, outlined his work building up cloud providers and local hardware as the Minister for Energy, Agriculture, the Environment, Nature and Digitalization of Schleswig-Holstein. As he explained, the public sector loses a great deal when it takes the seemingly easier path of buying proprietary software and services. Among the lost opportunities: building capacity and sovereignty. While his organization used services from all over the world, it set its own standards, one of which was that everything must be open source.

As the events of recent years are making clear, proprietary software fails if you can’t trust the country it’s made in, since you can’t wholly audit what it does. Even more important, once a company is bedded in, it can be very hard to excise it if you want to change supplier. That “customer lock-in” is, of course, a long-running business strategy, and it doesn’t only apply to IT. If we’re going to spend large sums of money on IT, there’s some logic to investing it in building up local capacity; one of the original goals in setting up the Government Digital Service was shifting to smaller, local suppliers instead of automatically turning to the largest and most expensive international ones.

The letter calls relying on US technology companies and services a “national security risk”. Elsewhere, I have argued that we must find ways to build trusted systems out of untrusted components, but the problem here is more complex because of the sensitivity of government data. Both the US and China have the right to command access to data stored by their companies, and the US in particular does not grant foreigners even the few privacy rights it grants its citizens.

It’s also long past time for countries to stop thinking in terms of “winning the AI race”. AI is an umbrella term that has no single meaning. Instead, it would be better to think in terms of there being many applications of AI, and trying to build things that matter.

***

As predicted here two years ago, AI models are starting to collapse, Steven J. Vaughan-Nichols writes at The Register.

The basic idea is that as the web becomes polluted with synthetically-generated data, the quality of the data used to train the large language models degrades, so the models themselves become less useful. Even without that, the AI-with-everything approach many search engines are taking is poisoning their usefulness. Model collapse just makes it worse.

We would point out to everyone frantically adding “AI” to their services that the historical precedents are not on their side. In the late 1990s, every site felt it had to be a portal, so they all had search, and weather, and news headlines, and all sorts of crap that made it hard to find the search results. The result? Google disrupted all that with a clean, white page with no clutter (those were the days). Users all switched. Yahoo is the most obvious survivor from that period, and I think it’s because it does have some things – notably financial data – that it does extremely well.

It would be more satisfying to be smug about this, but the big issue is that companies keep spraying toxic pollution over the services we all need to be able to use. How bad does it have to get before they stop?

***

At Privacy Law Scholars this week, in a discussion of modern corporate oligarchs and their fantasies of global domination, an attendee asked if any of us had read the terms of service for Starlink. She wanted to draw our attention to the following passage, under “Governing Law”:

For Services provided to, on, or in orbit around the planet Earth or the Moon, this Agreement and any disputes between us arising out of or related to this Agreement, including disputes regarding arbitrability (“Disputes”) will be governed by and construed in accordance with the laws of the State of Texas in the United States. For Services provided on Mars, or in transit to Mars via Starship or other spacecraft, the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities. Accordingly, Disputes will be settled through self-governing principles, established in good faith, at the time of Martian settlement.

Reminder: Starlink has contracts worth billions of dollars to provide Internet infrastructure in more than 100 countries.

So who’s signing this?

Illustrations: The Martian (Ray Walston) in the 1963-1966 TV series My Favorite Martian.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Hallucinations

It makes obvious sense that the people most personally affected by a crime should have the right to present their views in court. Last week, in Arizona, Stacey Wales, the sister of Chris Pelkey, who was killed in a road rage shooting in 2021, delegated her victim impact statement offering forgiveness to Pelkey’s artificially-generated video likeness. According to Cy Neff at the Guardian, the judge praised this use of AI and said he felt the forgiveness was “genuine”. It is unknown if it affected his sentencing.

It feels instinctively wrong to use a synthesized likeness this way to represent living relatives, who could have written any script they chose – even, had they so desired, one presenting this reportedly peaceful religious man’s views as a fierce desire for vengeance. *Of course* seeing it acted out by a movie-like AI simulation of the deceased victim packs emotional punch. But that doesn’t make it *true* or, as Wales calls it at the YouTube video link above, “his own impact statement”. It remains the thoughts of his family and friends, culled from their possibly imperfect memories of things Pelkey said during his lifetime, and if it’s going to be presented in a court, it ought to be presented by the people who wrote the script.

This is especially true because humans are so susceptible to forming relationships with *anything*, whether it’s a basketball that reminds you of home, as in the 2000 movie Cast Away, or a chatbot that appears to answer your questions, as in 1966’s ELIZA or today’s ChatGPT.

There is a lot of that about. Recently, Miles Klee reported at Rolling Stone that numerous individuals are losing loved ones to “spiritual fantasies” engendered by intensive and deepening interaction with chatbots. This is reminiscent of Ouija boards, which seem to respond to people’s questions but in reality react to small muscle movements in the operators’ hands.

Ouija boards “lie” because their operators unconsciously guide them to spell out words via the ideomotor effect. Those small, unnoticed muscle movements are also, more impressively, responsible for table tilting. The operators add to the illusion by interpreting the meaning of whatever the Ouija board spells out.

Chatbots “hallucinate” because the underlying large language models, based on math and statistics, predict the most likely next words and phrases with no understanding of meaning. But a conundrum is developing: as the large language models underlying chatbots improve, the bots are becoming *more*, not less, prone to deliver untruths.
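A toy model makes the point: a predictor built purely on word frequencies will happily continue with whatever was most common in its training text, true or not. (This is a deliberately crude sketch, nothing like a production LLM.)

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it picks the statistically most likely
# next word, with no concept of meaning or truth, only of frequency.
training_text = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    # Return the continuation seen most often in training.
    return bigrams[word].most_common(1)[0][0]

# "of" was followed by "cheese" twice and "rock" once, so the model
# confidently continues with the more common falsehood.
print(next_word("of"))  # → cheese
```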

At The Register, Thomas Claburn reports that researchers at Carnegie Mellon, the University of Michigan, and the Allen Institute for AI find that AI models will “lie” in order to meet the goals set for them. In the example in their paper, a chatbot instructed to sell a new painkiller that the company knows is more addictive than its predecessor will deny its addictiveness in the interests of making the sale. This is where who owns the technology and sets its parameters is crucial.

This result shouldn’t be too surprising. In her 2019 book, You Look Like a Thing and I Love You, Janelle Shane highlighted AIs’ tendency to come up with “short-cuts” that defy human expectations and limitations to achieve the goals set for them. No one has yet reported that a chatbot has been intentionally programmed to lead its users from simple scheduling to a belief that they are talking to a god – or are one themselves, as Klee reports. This seems more like operator error, as unconscious as the ideomotor effect.

OpenAI reported at the end of April that it was rolling back GPT-4o to an earlier version because the chatbot had become too “sycophantic”. The chatbot’s tendency to flatter its users apparently derived from the company’s attempt to make it “feel more intuitive”.

It’s less clear why Elon Musk’s Grok has been shoehorning rants alleging white genocide in South Africa into every answer it gives to every question, no matter how unrelated, as Kyle Orland reports at Ars Technica.

Meanwhile, at the New York Times Cade Metz and Karen Weise find that AI hallucinations are getting worse as the bots become more powerful. They give examples, but we all have our own: irrelevant search results, flat-out wrong information, made-up legal citations. Metz and Weise say “it’s not entirely clear why”, but note that the reasoning systems that DeepSeek so explosively introduced in February are more prone to errors, and that those errors compound the more time they spend stepping through a problem. That seems logical, just as a tiny error in an early step can completely derail a mathematical proof.
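The compounding is easy to quantify. Assuming, purely for illustration, that a reasoning model gets each step right 98% of the time:

```python
# Back-of-envelope illustration (my numbers, not from the reporting):
# the chance of an error-free reasoning chain decays geometrically
# with the number of steps.
per_step_accuracy = 0.98

for steps in (1, 10, 25, 50):
    chain_accuracy = per_step_accuracy ** steps
    print(f"{steps:2d} steps: {chain_accuracy:.0%} chance of an error-free chain")
```

A model that is nearly always right on any single step is still wrong most of the time by the fiftieth step, just as a tiny early slip derails a mathematical proof.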

This all being the case, it would be nice if people would pause to rethink how they use this technology. At Lawfare, Cullen O’Keefe and Ketan Ramakrishnan are already warning about the next stage, agentic AI, which is being touted as a way to automate law enforcement. Lacking fear of punishment, AIs don’t have the motivations humans do to follow the law (nor can a mistargeted individual reason with them). Therefore, they must be instructed to follow the law, with all the problems of translating human legal code into binary code that implies.

I miss so much the days when you could chat online with a machine and know that really underneath it was just a human playing pranks.

Illustrations: “Mystic Tray” Ouija board (via Wikimedia).


The Skype of it all

This week, Microsoft shuttered Skype. For a lot of people, it’s a sad but nostalgic moment. Sad, because for older Internet users it brings back memories of the connections it facilitated; nostalgic because hardly anyone seemed to be using it any more. As Chris Stokel-Walker wrote at Wired in 2021, somehow when covid arrived and poured accelerant on remote communications, everyone turned to Zoom instead. Stokel-Walker blamed the Microsoft team for lacking focus on the bit that mattered most: keeping the video link up and stable. Zoom had better video, true, but also far better usability in terms of getting people to calls.

Skype’s service – technically, VoIP, for Voice over Internet Protocol – was pioneering in its time, which arguably peaked around 2010. Like CompuServe before it and Twitter since, there was a period when everyone had their Skype ID on their business cards. In 2005, when eBay bought it for $1.3 billion, it was being widely copied. In 2009, when eBay sold it to an investor group, it was valued at $2.75 billion.

In 2011, Microsoft bought it for $8.5 billion in cash, to general puzzlement as to *why* and why for *so much*. I thought eBay would somehow embed it into its transaction infrastructure as it had Paypal, which it had bought in 2002 for $1.5 billion (and then spun off as a public company in 2015). Similarly, Wired talked of Microsoft embedding it into its Xbox Live network. Instead, the company fiddled with the app in the general shift from desktop to mobile. Ironic, given that Skype was a *phone* app; that it struggled, as Facebook did, to make the change is kind of embarrassing.

Forgotten in all this is the fact that although Skype was the first VoIP application to gain mainstream acceptance, it was not the first to connect phone calls over the Internet. That was the long-forgotten Free World Dial-Up project, pioneered by Jeff Pulver. On the ground, I imagined Free World Dial-Up as looking something like the switchboard and radio phone Radar O’Reilly (Gary Burghoff) used to patch phone calls through radio networks in the TV series M*A*S*H (1972-1983). As Pulver described it, calls were sent across the Internet between servers, each connected to a box that patched the calls into the local phone system.

Rereading my notes from my 1995 interview with Pulver, when he was just getting his service up and running, it’s astonishing to remember how many hurdles there were for his prototype VoIP project to overcome – and this was all being done by volunteers. In many countries outside North America, charges for local phone calls made it financially risky to run a server. Some countries had prohibitive licensing regulations that made it illegal to offer such a service if you weren’t a telephone company. The hardware and software were readily available but had to be bought and required tinkering to set up. Plus, few outside the business world had continuous high-speed connections; most of us were using modems to dial up a service provider.

Small surprise that those early calls were not great. A Chicago recipient of a test call said she’d had better connections over the traditional phone network to Harare. Network lag made it more like a store-and-forward audio clipping service than a phone call. This didn’t matter as much to people with a history in ham radio, like Pulver himself; they were used to the cognitive effort to understand despite static and dropouts.

On the other hand, international calling was so wildly expensive at the time that, poor quality notwithstanding, FWD opened up calling for half a million people.

FWD was the experiment that proved the demand and the potential. Soon, numerous companies were setting up to offer VoIP services via desktop applications of varying quality and usability. It was into this hodge-podge that Skype was launched in 2003 from Estonia. For a time, it kept getting better: it began with free calling between Skype users and paid calls to phone lines, and moved on to offering local phone numbers around the world, as Google Voice does now.

Around the early 2000s it was popular to predict that VoIP services would kill off telephone companies. This was a moment when network neutrality, now under threat, was crucial; had telcos been allowed to discriminate against VoIP traffic, we’d all still be paying through the nose for international calling and probably wouldn’t have had video calling during the covid lockdowns.

Instead, the telcos themselves have become VoIP companies. In 2007, BT was the first to announce it was converting its entire network to IP. That process is supposed to complete this year. My landline is already a VoIP line. (Downside: no electricity, no telecommunications.)

Pulver, I find, is still pushing away at the boundaries of telecommunications. His website these days is full of virtualized conversations (vCons) and Supply Chain Integrity, Transparency, and Trust (SCITT), which he explains here (PDF). The first is an IETF proposed standard for AI-enhanced digital records. The second is an IETF proposed framework that intends to define “a set of interoperable building blocks that will allow implementers to build integrity and accountability into software supply chain systems to help assure trustworthy operation”. This is the sort of thing that may make a big difference to companies while being invisible and/or frustrating to most of us.

As for Skype, it will fade from human memory. If it ever comes up, we’ll struggle to explain what it was to a generation who have no idea that calling across the world was ever difficult and expensive.

Illustrations: Radar O’Reilly (Gary Burghoff) in the TV series M*A*S*H with his radio telephone setup.
