Review: More Everything Forever

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity
By Adam Becker
Basic Books (Hachette)
ISBN: 9781541619593
Publication date: April 22, 2025

A friend who would be 93 now used to say that the first time he’d read about the idea of living long enough to live forever was when he was about eight. Even at that age, he was a dedicated reader of science fiction, though he also said this was a habit so weird at the time that he had to hide it from his classmates.

Cut to 1992, when I reviewed Ed Regis’s book Great Mambo Chicken and the Transhuman Condition for New Scientist. Regis traveled the American southwest, finding cryonicists, guys building rockets in the desert, wondering whether gravity was really necessary, figuring out how to make backups of our brains, spinning chickens in centrifuges to understand the impact of heavier-than-earth gravity, that sort of thing. Regis called it “fin-de-siècle hubris”.

In 1992 it was certainly tempting to believe that this sort of craziness was somehow related to the upcoming millennium. Today’s techbros have no such excuse, yet their dreams are the same. This is the collection Timnit Gebru and Émile Torres have dubbed TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism – all of it, as Adam Becker explains in More Everything Forever, more of a rebranding than a new vision of the future.

You could accordingly view Becker’s book as a follow-up, more than 30 years on. Regis could present all this as a mostly whacked-out bunch of dreamers, but since then it’s all become much more serious. Today’s chicken-spinners are armed with massive amounts of money and power and are willing to ignore the present suffering of millions if it means enabling their image of the future. We’ve met this crowd before, in the pages of Douglas Rushkoff’s Survival of the Richest. These are the folks who treat science fiction’s cautionary tales as a manual for what to build.

Becker does a fine job of tracing the history of the various TESCREAL strands. Most are older than one might expect, some with roots in Christian beliefs thousands of years old. Isn’t fear of death, which Becker believes lies at the core of all this, as old as humanity? At last year’s CPDP, Mireille Hildebrandt called TESCREAL “paradise engineering”.

“If it violates physics, you can ignore it,” I was told at a conference on these topics after I asked how to distinguish the appealing-but-impossible from the well-maybe-someday. Becker proves the wisdom of this: his grounding in engineering and physics helps him provide essential debunking. Mars is too far away and too poisonous for humans to settle there any time soon. Meanwhile, he points out, Moore’s Law, which underpins projections by folks like Ray Kurzweil that computational power will continue to grow exponentially, is far more likely, like all exponential trends, to come to an end. Physics, resource constraints, the increasing difficulty of finding new technological paradigms, and the fact that we understand so little of how the human brain or consciousness really works are all factors. The reality, Becker concludes, is that AGI is at best a long, long way off.

The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates about how advertising will work – perhaps mid-roll? The obvious answer is to place the ads between the list of nominees and opening the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. The peak came in 1998: 55.2 million, after VCRs but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may also not be as uniquely important as it was historically. The Academy’s move fits into many other similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Centre for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lived in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. HateAid helped a client file suit against Google in August 2025, and sued X in July for failing to remove criminal antisemitic content. The Center for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Slop

Sometimes it doesn’t pay to be first. iRobot, the maker of the Roomba, has filed for Chapter 11 bankruptcy protection and been acquired by Picea, one of its Chinese suppliers, Lauren Almeida reports at the Guardian. The company’s value has cratered since 2021.

Given the wild enthusiasm that greeted the Roomba’s release in 2002, the company’s collapse seems incredible. Years before then, I recall an event at which a speaker whose identity I don’t remember said that ever since he had mentioned the possibility of a robot vacuum, sometime around the 1960s, he’d gotten thousands of letters asking when it would be ready. There was definitely customer demand. It helped that the Roomba itself was kind of cute as it banged randomly into furniture. People named them, and took them on vacation. But, as often happens, the Roomba’s success attracted lower-cost competitors, and the first mover failed to keep up.

I got one in 2003. After a great few months, I realized that Roombas are not compatible with long hair, which ties them into knots that take longer to cut out than vacuuming. I gave it away within a year and haven’t tried again.

At Mashable, Leah Stodart warns that although the Roombas people already have will continue to work “for now”, users can’t be confident that this state of affairs will continue. Like so many other things that used to be things we owned and are now things we subscribe to (but still think we “buy”), newer-model Roombas are controlled by an app that the manufacturer can change or discontinue at will. She calls it “unplanned obsolescence”. Her advice not to buy a new one this year is sound from the consumer’s point of view, but hardly likely to help the company survive.

***

If generative AI is so great, why is everyone forcing it on us? The latest example, Luke James reports at Tom’s Hardware, is LG “smart” TVs whose users woke up the other day to find a new update had installed “CoPilot: Your AI Companion” without asking permission and that there was no option to remove it. The most you can do to disable it, James says, is keep your TV disconnected from the Internet.

There are of course many more, the automated summaries popping up everywhere being the most obvious. Then, Matthew Gault reports at 404 Media, a Discord moderator and an Anthropic executive added Anthropic’s Claude chatbot to a community for queer gamers, who had voted to restrict Claude to its own channel. Result: major exodus. Duh.

And, of course, as Lance Ulanoff reminds us at TechRadar, there is “AI slop” everywhere – music playlists, YouTube videos, ebooks – threatening people’s livelihoods even though, as Cory Doctorow has written, “AI can’t do your job. But an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job.” For a while, anyway: Microsoft is halving its sales targets for AI.

And thus we get “slop” as the word of the year, per Merriam-Webster. Any time companies are this intent on foisting something on us – chatbots, ads – you have to know that they’re intent on favoring their interests, not ours.

***

Last week, Customs and Border Protection published a notice in the Federal Register proposing new rules for foreigners traveling to the US on an ESTA (“Electronic System for Travel Authorization”) as part of the visa waiver program. It has drawn a lot of discussion in the UK, which is one of the 42 affected countries. Under the new rules, applicants must install CBP’s app, into which they must submit a massive load of “high-value” personal information. The list is long, allows for a so-far-imaginary future of DNA sampling, and expects you to be able to give five years’ worth of family members’ residences, phone numbers, and places of birth, and all the email addresses you’ve used for ten years. CBP thinks the average applicant should be able to complete it on their smartphone in 22 minutes. I think it would take hours of painful, resentful typing on a stupid touch keyboard, and even then I doubt I could fill it out with any certainty that the information I supplied was complete or accurate. Data collection at this scale makes it easy to find an error to use as an excuse to deny entry to or deport someone you want to get rid of. As Edward Hasbrouck writes at Papers, Please, “Welcome to the 2026 World Cup”.

“They have to be planning to use AI on all that data,” a friend commented last week. Probably – to build social graphs and find connections deemed suspicious. Privacy International predicts that the masses of data being demanded will in fact enable the AI tools necessary to implement automated decision making, and calls the proposals disproportionate for “a family’s visit to Disney World”.

One of the problems Hasbrouck highlights while opposing this level of suspicionless data collection is that CBP has not provided any way for would-be respondents to the Federal Register notice to examine the app’s source code. What other data might it be collecting?

As Hasbrouck adds in a follow-up, the rules the US imposes on visitors are often adopted by other countries as requirements for US travelers. In this game of ping-pong escalation, no one wins.

ID is football

On Wednesday, Australia woke up to its new social media ban for under-16s. As Ange Lavoipierre explains at ABC News, the ban isn’t total. Under-16s are barred from owning their own accounts on a list of big platforms – Facebook, Instagram, Threads, Twitch, YouTube, TikTok, X, Reddit, Kick, and Snapchat – but not barred from *using* those platforms. So, inevitably, there are already reports of errors and kids figuring out how to bypass the rules in order to stay in touch with their friends. The Washington Post’s report contains this contradiction: “Numerous recent polls indicate that a solid majority of Australians support the ban, but that young respondents largely don’t plan to comply.”

Helpfully, ABC News reported a couple of months ago that researchers, led by the UK’s Age Check Certification Scheme, have tested age assurance vendors, and found that “old man” masks and other cheap party costumes apparently work to fool age estimation algorithms.

Edge cases are appearing, such as the country’s teen Olympians – skateboarders and triathletes – for whom the ban disrupts years of building fan communities, potentially also disrupting some of their funding.

Meanwhile, the BBC reports that a pair of 15-year-olds, backed by the Digital Freedom Project, are challenging the ban in court. Josh Taylor reports at the Guardian that Reddit is also suing.

At Nature, Rachel Fieldhouse and Mohana Basu write that the ban’s wider effects will be independently assessed by scientists. This is good; defining “success” solely by the numbers of blocks bypassed substitutes an easy measure for the long-term impacts, which are diffuse, difficult to measure, and subject to many confounding variables.

But we know this: the ratchet effect applies. I first encountered it in the context of alternative medicine. Chronic illnesses have cycles; they improve, plateau, get worse. Apply a harmless remedy. If the patient gets better, the remedy is working. If it stays the same, the remedy has halted the decline. If it gets worse, the remedy came too late. In all cases, the answer is more of the remedy. So with online safety. In child safety, the answer is always that more restrictions are needed. In the UK, where the Online Safety Act has been in force for mere months, three members of the House of Lords have already proposed a similar ban as an amendment to the Children’s Wellbeing and Schools Bill.

***

Keir Starmer’s vague plan for a mandatory digital ID is back. This week saw a Westminster Hall debate, as required after nearly three million people signed an online petition opposing it.

At Computer Weekly, Liz Evenstead reports that MPs across all parties attacked the plan, making familiar points: the target such a scheme could create for criminals, the change it would bring to the relationship between citizens and the state, and the potential threat to civil liberties. They also attacked its absence from Labour’s election manifesto; last month, Fiona Brown reported at The National that Palantir UK head Louis Mosley said on Times Radio that the company would not bid on contracts for the digital ID because it hasn’t had “a clear, resounding ballot box”.

Also a potential issue is cost, which the Office for Budget Responsibility recently estimated at £1.8 billion. According to SA Mathieson at The Register, the government has rejected the figure but declined to provide an alternative estimate until its soon-to-be-launched consultation has been completed.

Also hovering in the background, weirdly ignored, is the digital identity and attributes trust framework, which has been in progress for at least the last several years.

Beyond that, we still have no real details. For this reason, in a panel I moderated at this week’s UK Internet Governance Forum, I asked panelists – Dave Birch, Karla Prudencio, and Mirca Madianou – to try to produce some principles for what digital ID should and should not be. Birch in particular has often said he thinks Britain as a sovereign state in the 21st century sorely needs a digital identity infrastructure – by which he *doesn’t* mean anything like the traditional “ID card” so many are talking about. As we all agree, technology has changed a lot since 2005, when this was last attempted. Since then: blockchain, smartphones, social media, machine learning, generative AI. So we agree that far: anything the government proposes really should look very different from that last attempt.

Here are the principles our discussion came up with:
– Design for edge cases, as a system that works for them will work for everyone.
– Design for plural identities.
– Don’t design the system as a hostile environment.
– Don’t create a target for hackers.
– Understand the real purpose.
– Identification is not authentication.
– Understand public-private partnerships as three-way relationships with users.
– Design to build public trust.

And one last thought:
– Sometimes, ID is football.

That last is from Madianou’s field work in Karen refugee camps along the border between Thailand and Myanmar. One teenaged boy badly wanted an ID card so that he could leave the camp to play football in a nearby village and return safely without being arrested. It’s a reminder: identification can mean many different things in different situations.

Illustrations: The Mae La refugee camp in Thailand (by Tayzar44 at Wikimedia).

Also this week: TechGrumps 3.34 – ChatGPT is not my wingman.


A road not taken

Nearly 20 years ago, I attended a conference on road pricing. The piece I wrote about it (PDF) for Infosecurity magazine suggests it was in late 2007, three years after transport secretary Alistair Darling proposed bringing in a national road pricing scheme. The idea represented a profound change; until a few years earlier, congestion had always led to building more roads. In 2003, however, London mayor Ken Livingstone instead implemented the congestion charge – and both traffic and pollution levels dropped.

So this conference explored the idea that road pricing would cut traffic to match road capacity, taking us off the vicious spiral of increasing road capacity and watching traffic rise to choke it. Darling’s proposal, which followed a 2004 feasibility study, was for a satellite-based tracking scheme. In 2007, however, prime minister Tony Blair effectively dropped the idea after 1.8 million people signed a petition opposing it.

This week’s announcement of road pricing for electric vehicles is rather differently motivated, but reawakened my memory of the 2008 discussion. Roads must be paid for somehow, and, as foreseen by the Institute for Fiscal Studies in 2012, the rise of electric vehicles inevitably eats away at fuel tax revenues. EVs have many benefits: they can be powered without fossil fuels; they emit no carbon or other exhaust pollutants; and they are quieter. However, they weigh 10% to 30% more than internal combustion engine vehicles, and tire wear remains a significant pollutant.

Back in 2005 there were three main contenders for per-mile road pricing: automated number/license plate readers; tag and beacon; and time-distance-place. At the time, versions of these were already in use: the first was in place to administer London’s congestion charge; the second, effectively an update to paying at the tollbooth, was in place on turnpikes in the American northeast and in the UK at Dartford Crossing; the third was being used in Germany’s HGV system, which collects tolls for the kilometers driven on the country’s autobahns. In a 2007 paper, Cambridge researchers David N. Cottingham, Alastair Beresford, and Robert K. Harle analyzed the technologies available.

Whatever you call them, limited-access highways – autobahns, motorways, interstates, thruways – are a relatively simple problem because there are relatively few entry and exit points. Tracking, as transponders read by automated tollbooths have made possible, remains a privacy concern. Such a scheme was deemed unworkable for London, where TfL counted 227 entry points to the most congested area, and barriers would simply create new chokepoints. For this reason, and also because it estimated that 80% of cars entering the congestion zone are infrequent users, TfL opted for a system of cameras that read license plates on the fly and an automated system to send out penalty notices if someone hasn’t paid. This system also seems difficult to imagine scaling to a national level; every road, street, and back alley would have to have ANPR cameras. In the US, where Flock cameras are collecting ANPR data at scale, law enforcement and immigration authorities are already exploiting it in anti-democratic ways, as 404 Media reports.

In 2008, TDP (time-distance-place), a much more likely approach for a nationwide system of per-mile pricing, would have required a box to be installed in every vehicle to track it, likely via GPS, and report time and location data via mobile networks for use in calculating what the owner should pay. No one was then sure whether road users would accept having tags in their vehicles or be willing to pay the considerable expense; as I seem to have written in that 2008 Infosecurity article, “‘We’re going to change your behavior and charge you for the privilege’ isn’t much of a sales pitch.” But such a system would enable proportionately charging people based on their actual road use.

If we were updating that discussion, parts would be unchanged. Congestion charge-style ANPR cameras everywhere will be no more feasible than they were then. Germany’s system for motorways will similarly not be feasible for smaller roads and within cities. TDP, however…

Here in 2025, most people are already carrying smartphones with GPS just part of the package. So there could be a choice: buy a box that is irretrievably embedded in the vehicle or download a TDP app that’s somehow tied to and paired with the car, perhaps via its electronic key, so that it won’t start unless the app-car link is enabled. (Fun for anyone whose battery dies in the course of an evening out.) In addition, cars already collect all sorts of data and send it to their manufacturers. So it’s also possible to imagine a government requiring manufacturers active in the UK to transmit time and location data to a specified authority.

Obviously, the privacy implications of such a system would be staggering. Law enforcement would demand access. Businesses whose fleet patterns are commercially sensitive would hate it. And the UK’s successive governments have shown themselves to be highly partial to centralized databases that are built for one purpose and then are exploited in other ways. For this reason, Beresford’s idea in 2008 was for a privacy-protecting decentralized system using low-cost equipment that would allow cars to identify neighboring non-payers and report only those.

The good news is that the details we have so far of the government’s proposals suggest something far simpler: report the odometer reading at each year’s annual vehicle check and multiply by the per-mile charge. So unusual these days to see a government propose something so simple and cheap. Whether it’s a good idea to discourage the shift to EVs at this particular time is a different question.

Illustrations: A fork in a road (via Wikimedia).

At Plutopia, we interview Bruce Schneier about his new book, Rewiring Democracy, which examines the good and bad of what AI may bring to democracy.


Review: The Seven Rules of Trust

The Seven Rules of Trust: Why It Is Today’s Essential Superpower
by Jimmy Wales
Bloomsbury
ISBN: 978-1-5266-6501-0

Probably most people have either forgotten or never known that when Jimmy Wales first founded Wikipedia it was widely criticized. A lot of people didn’t believe an encyclopedia written and edited by volunteers could be any good. Many others believed free access would destroy Britannica’s business model, and reacted resentfully. Teachers warned students against using it, despite the fact that Wikipedia’s talk pages offer rare transparency into how knowledge is curated.

Now we know the Internet is big enough for both Wikipedia and Britannica.

Much of Wikipedia’s immediate value lay in its infinite expandability; it covered in detail many subjects the more austere Britannica considered unworthy. But, as Wales writes at the beginning of his recent book, The Seven Rules of Trust, Wikipedia’s biggest challenge was finding a way to become trusted. Britannica must have faced this too, once. Its solution was to build upon the reputation of the paid experts who write its entries. Wikipedia settled on passion, transparency, and increasingly rigorous referencing. As it turns out, collectively we know a lot. Today, Wikipedia is nearly 100 times the size of Britannica, has hundreds of language editions, and is so widely trusted that most of us don’t even think about how often we consult it.

In The Seven Rules of Trust, Wales tells the story of how Wikipedia got from joke to trusted resource. It began, he says, with its editors trusting each other. For this part of his story, he relies on Frances Frei’s model of trust, a triangle balancing authenticity, empathy, and logic. Editors’ trust enabled the collaboration that could build public trust in their work, which is guided by Wikipedia’s five pillars.

Wales’s seven rules are not complicated: trust is personal, even at scale; people are born to connect and collaborate; successful collaboration requires a clear positive shared purpose; give trust to get trust; practice civility; stick to your mission and avoid getting involved in others’ disputes; embrace transparency. Some of these could be reframed as the traditional virtues, as when Wales talks about the principle of “assume good faith” when trying to negotiate the diversity of others’ opinions to reach consensus on how to present a topic. I think of this as “charity”. Either way, it’s not meant to be infinite; good faith can be abused, and Wales goes on to talk about how Wikipedia handles trolls, self-promoters, and other problems.

Yet, Wales’s account feels rosy. Many of his stories about remediating the site’s flaws revolve around one or two individuals who personally built up areas such as Wikipedia’s coverage of female scientists. I’m not sure he’s in a position to recognize how often would-be contributors are quickly deterred by an editor fiercely defending their domain or how difficult it’s become to create a new page and make sure it stays up. And, although he nods at the hope that the book will help recruit new editors, he doesn’t discuss the problem of churn Wikipedia surely faces.

Having steered the creation of something as gigantic and seemingly unlikely as Wikipedia, Wales has certainly earned the right to explain how he did it in the hope of helping others embarking on similarly large and unlikely projects. Wales argues that trust has enabled diversity of opinion, and the resulting internal disagreement has improved Wikipedia’s quality. Almost certainly true, but hard to apply to more diffuse missions; see today’s cross-party politics.