The gated web

What is an AI browser?

Or, in a more accurate representation of my mental reaction, *WTF* is an AI browser?

In wondering about this, I’m clearly behind the times. Tech sites are already doing roundups of their chosen “best” ones. At Mashable, Cecily Mauran compares “top” AI browsers because “The AI browser wars hath begun.”

Is the war that no one wants these things but they’re being forced on us anyway? Because otherwise…it’s just a bunch of heavily financed companies trying to own a market they think will be worth billions.

In Tim Berners-Lee’s original version, the web was meant to simplify sharing information. A key element was giving users control over presentation. Then came designers, who hated that idea. That battle between users’ preferences and browser makers’ interests continues to this day. What most people mean by the browser wars, though, was the late-1990s fight between Microsoft and Netscape, or the later burst of competition around smartphones. A big concern has long been market domination: a monopoly could seek to slowly close down the web by creating proprietary additions to the open standards and lock all others out.

Mauran, citing Casey Newton’s Platformer newsletter, suggests that Google specifically has exploited its browser to increase search use (and therefore ad revenues), partly by merging the address and search bars. I know I’m not typical, but for me search remains a separate activity. Most of the time I’m following a link or scanning familiar sites. Yes, when my browser history fills in a URL, I guess you could say I’m searching the browser history, but to me the better analogy is scanning an array of daily newspapers. Many people *also* use their browser to access cloud-based productivity software and email or play online games, none of which is search.

Nor are chatbots, since they don’t actually *find* information; they apply mathematics and statistics to a load of ingested text and create sentences by predicting the most likely next word. This is why Emily Bender and Alex Hanna call them “synthetic text extruding machines” in their book, The AI Con. I am in the business of trying to make sense of the impact of fast-moving technology, or at least of documenting the conflicts it creates. The only chatbot I’ve found of any value for this – or for personal needs such as a tech issue – is Perplexity, and that’s because it cites (or can be ordered to cite) sources one can check. There is every difference in the world between just wanting an answer and wanting the background from which to derive an answer that may possibly be new.
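That next-word prediction can be illustrated with a deliberately tiny sketch. This is not Bender and Hanna’s example; the corpus, names, and code below are invented purely for demonstration, and real models work over probabilities of tokens learned from billions of words rather than raw bigram counts:

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a tiny
# "ingested" corpus, then "extrude" text by always choosing
# the most frequent successor -- prediction, not lookup.
corpus = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1

def next_word(word):
    """Return the statistically likeliest next word, or None
    if this word was never seen followed by anything."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat": it follows "the" twice, "mat" only once
```

Nothing here finds anything; the output is whatever the counts make most probable, which is the point.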

In any event, Newton’s take is that a company that’s serious about search must build its own browser. Therefore: AI companies are building them. Hence these roundups. Mauron’s pitch: “Imagine a browser that acts as your research assistant, plans trips, sends emails, and schedules meetings. As AI models become more advanced, they’re capable of autonomously handling more complex tasks on your behalf. For tech companies, the browser is the perfect medium for realizing this vision.”

OK, I can see exactly what it does for tech companies. It gives them control over what information you can access, how you use it, and whom you pay, and how much, for the services its agent selects (plus it gets a commission).

I can also see what it does for employers. My browser agent can call your browser agent and negotiate a meeting plan. Then they attend the meeting on our behalf and send us both summaries, which they ingest and file, later forwarding them to our bosses’ agents to verify we were at work that day. In between, they can summarize emails, and decide which ones we need to see. (As Charles Arthur quipped at The Overspill, “Could they…send fewer emails?”)

Remember when part of the excitement of the Internet was the direct access it gave to people who were formerly inaccessible? Now, we appear to be building systems to ensure that every human is their own gated community.

What part of this is good for users? If you are fortunate enough not to care about the price of anything, maybe it’s great to replace your personal assistant with an agentic web browser. Most of us have struggled along doing things for ourselves and each other. At Cybernews, Mayank Sharma warns that AI browsers’ intentional preemption of efforts to browse for yourself, filtering out anything they deem “irrelevant”, threatens the open web. Newton quantifies the drop in traffic news publishers are already seeing from generative AI. Will we soon be complaining about information underload?

At Pluralistic last year, Cory Doctorow wrote about the importance of faithful agents: software that is loyal to us rather than its maker. He particularly focused on browsers, which have gone from that initial vision of user control to become software that spies on us and reports home. In Mauron’s piece, Perplexity openly hopes to use chats to build user profiles and eventually show ads.

The good news, such as it is, is that from what I’ve read in writing this, most of these companies hope to charge for these browsers – AI as a subscription service. So avoiding them is also cheaper. Double win.

Illustrations: John Tenniel’s drawing of Davy Jones, sitting on his locker (via Wikimedia, published in Punch, 1892, with the caption, “AHA! SO LONG AS THEY STICK TO THEM OLD CHARTS, NO FEAR O’ MY LOCKER BEIN’ EMPTY!!”).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

It’s always DNS…

Years ago, someone in tech support at Telewest, then the cable supplier for southwest London, told me that if my broadband went out I should hope its television service went down too: the volume of complaints would get it fixed much faster. You could see this in action some years later, in 2017, when Amazon Web Services went down, taking with it Netflix. Until that moment few had realized that Netflix built its streaming service on Amazon’s cloud computing platform to take advantage of its flexibility in up- and down-sizing infrastructure. The source – an engineer’s typing error – was quickly traced and fixed, and later I was told the incident led Netflix to diversify its suppliers. You would think!

Even so, Netflix was one of the companies affected on Monday, when a DNS error took out a chunk of AWS, and people from gamers on Roblox to governments with mission-critical dependencies were affected. On the list of the affected are both the expected (Alexa and Ring) and the unexpected (Apple TV, Snapchat, Hulu, Google, Fortnite, Lyft, T-Mobile, Verizon, Venmo, Zoom, and the New York Times). To that add the UK government. At the Guardian, Simon Goodley says the UK government has awarded AWS £1.7 billion in contracts across 35 public sector authorities, despite warnings from the Treasury, the Financial Conduct Authority, and the Prudential Regulation Authority. Among the AWS-dependent: the Home Office, the Department for Work and Pensions, HM Revenue and Customs, and the Cabinet Office.

First, to explain the mistake – so common that experts said “It’s always DNS” and so old that early Internet pioneers said “We shouldn’t be having DNS errors any more”. The Domain Name System, conceived in 1983 by Paul Mockapetris, is a core piece of how the Internet routes traffic. When you type or click on a domain name such as “pelicancrossing.net”, behind the scenes a computer translates that name into a series of dotted numbers that identify the request’s destination. An error in those numbers, no matter how small, means the message – data, search request, email, whatever – can’t reach its destination, just as you can’t reach the recipient you want if you get a telephone number wrong. The upshot of all that is that DNS errors snarl traffic. In the AWS case, the error affected just one of its 30 regions, which is why Monday’s outages were patchy.

As Dan Milmo and Graeme Wearden write at the Guardian, the outage has focused many minds on the need to diversify cloud computing. Amazon (30%), Microsoft Azure (20%), and Google (13%) jointly control 63% of the market worldwide. There have been many such warnings.

At The Register, Carly Page reports on the individual level: smart homes turned dumb. Eightsleep beds stuck in an upright position and lost their temperature controls. App-controlled litter boxes stopped communicating. “Smart” light bulbs stayed dark. The Internet of Other People’s Things at its finest.

Also at The Register, Corey Quinn suggests the DNS error was ultimately attributable to an ongoing exodus of senior AWS engineers who took with them essential institutional memory. Once you’ve reached a certain level of scale, Quinn writes, every problem is complex and being able to remember that a similar issue on a previous occasion was traced improbably to a different system in a corner somewhere can be crucial. As departures continue, Quinn believes failures like these will become more common.

If that global picture is dispiriting, consider also the question of dependence within organizations; if your country depends on a single company’s infrastructure to power mission-critical systems, the diversity in the rest of the world won’t help you when that single company goes down. In the UK, Sam Trendall reports at Public Technology, the government activated incident-response mechanisms. Notable among the failures as prime minister Keir Starmer pushes for a mandatory digital ID: the government’s new One Login, as well as some UK banks. This outage provides evidence for the digital sovereignty many have been advocating.

I admit to mixed feelings. I agree with the many who believe the public sector should embrace digital sovereignty…but I also know that the UK government has a terrible record of failed IT projects, no matter who builds them. In 2010, fixing that was part of the motivation for setting up the Government Digital Service, as first GDS leader Mike Bracken writes at Public Digital. Yet the failures keep coming; see also the Post Office Horizon scandal. Bracken believes the solution is to invest in public sector capacity and digital expertise in order to end this litany of expensive failures.

At TechRadar, Benedict Collins rounds up further expert commentary, largely in agreement about the lessons we should learn. But will we? We should have learned in 2017.

Still, it would be a mistake to focus solely on Amazon. It is just one of many centralized points of failure. The Internet Archive is dangerously important as a unique resource for archived web pages. And the UK is not the only government flying at high risk. Consider South Korea, where a few weeks ago a data center fire may have consumed 85TB of government data – with no backups. It seems we never really learn.

Illustrations: Traffic jam in New York’s Herald Square, 1973 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The bottom drawer

It only now occurs to me how weirdly archaic the UK government’s rhetoric around digital ID really is. Here’s prime minister Keir Starmer in India, quoted in the Daily Express (and many elsewheres):

“I don’t know how many times the rest of you have had to look in the bottom drawer for three bills when you want to get your kids into school or apply for this or apply for that – drives me to frustration.”

His image of the bottom drawer full of old bills is the weirdly archaic bit. I asked an 82-year-old female friend: “What do you do if you have to supply a utility bill to confirm your address?” Her response: “I download one.”

Right. And she’s in the exact demographic geeks so often dismiss as technically incompetent. Starmer’s children are teenagers. Lots of people under 40 have never seen a paper statement.

Sure, many people can’t do that download, for various reasons. But they are the same people who will struggle with digital IDs, largely for the same reasons. So claiming people will want digital IDs because they’re more “convenient” is specious. The inconvenience isn’t in obtaining the necessary documentation. It lies in inconsistent, poorly designed submission processes – this format but not that, or requiring an in-person appointment. Digital IDs will provide many more opportunities for technical failure, as the system’s first targets, veterans, may soon find out.

A much cheaper solution for meeting the same goal would be interoperable systems that let you push a button to send the necessary confirmation direct to those who need it, like transferring a bank payment. This is, of course, close to the structure Mydex and researcher Derek McAuley have been working on for years, the idea being to invert today’s centralized databases to give us control of our own data. Instead, Starmer has rummaged in Tony Blair’s bottom drawer to pull out old ID proposals.

In an analysis published by the research organization Careful Industries, Rachel Coldicutt finds a clash: people do want a form of ID that would make life easier, but the government’s interest is in creating an ID that will make public services more efficient. Not the same.

Starmer himself has been in India this week, taking the opportunity to study its biometric ID system Aadhaar. Per Bloomberg, Starmer met with Infosys co-founder Nandan Nilekani, Aadhaar’s architect, because 16-year-old Aadhaar is a “massive success”.

According to the Financial Times, Aadhaar has 99% penetration in India, and “has also become the bedrock for India’s domestic online payments network, which has become the world’s largest, and enabled people to easily access capital markets, contributing to the country’s booming domestic investor base.” The FT also reports that Starmer claims Aadhaar has saved India $10 billion a year by reducing fraud and “leakages” in welfare schemes. In April, authentication using Aadhaar passed 150 billion transactions, and continues to expand through myriad sectors where its use was never envisioned. Visitors to India often come away impressed. However…

At Yale Insights, Ted O’Callahan tells the story of Aadhaar’s development. Given India’s massive numbers of rural poor with no way to identify themselves or access financial services, he writes, the project focused solely on identification.

Privacy International examines the gap between principle and practice. There have been myriad (and continuing) data breaches, many hit barriers to access, and mandatory enrollment for accessing many social protection schemes adds to preexisting exclusion.

In a posting at Open Democracy, Aman Sethi is even less impressed after studying Aadhaar for a decade. The claim of annual savings of $10 billion is not backed by evidence, he writes, and Aadhaar has brought “mass surveillance; a denial of services to the elderly, the impoverished and the infirm; compromised safety and security, and a fundamentally altered relationship between citizen and state.” As in Britain in 2003, when then-prime minister Tony Blair proposed the entitlement card, India cited benefit fraud as a key early justification for Aadhaar. Trying to get it through, Blair moved on to preventing illegal working and curbing identity theft. For Sethi, a British digital ID brings a society “where every one of us is a few failed biometrics away from being postmastered” (referring to the Post Office Horizon scandal).

In a recent paper for the Indian Journal of Law and Legal Research, Angelia Sajeev finds economic benefits but increased social costs. At the Christian Science Monitor, Riddhima Dave reports that many other countries that lack ID systems, particularly developing countries, are looking to India as a model. The law firm AM Legals warns of the spread of data sharing as Aadhaar has become ubiquitous, increasing privacy risks. Finally, at the Financial Times, John Thornhill noted in 2021 the system’s extraordinary mission creep: the “narrow remit” of 2009 to ease welfare payments and reduce fraud has sprawled throughout the public sector from school enrollment to hospital admissions, and into private companies.

Technology secretary Liz Kendall told Parliament this week that the digital ID will absolutely not be used for tracking. She is utterly powerless to promise that on behalf of the governments of the future.

If Starmer wants to learn from another country, he would do well to look at those problems and consider the opportunity costs. What has India been unable to do while pursuing Aadhaar? What could *we* do with the money and resources digital IDs will cost?

Illustrations: In 1980’s Yes, Minister (S01e04, “Big Brother”), minister Jim Hacker (Paul Eddington) tries to explain why his proposed National Integrated Database is not a “Big Brother”.

Update: Spelling of “Aadhaar” corrected.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Software is still forever

On October 14, a few months after the tenth anniversary of its launch, Microsoft will end support for Windows 10. That is, Microsoft will no longer issue feature or security updates or provide technical support, and everyone is supposed to either upgrade their computers to Windows 11 or, if Microsoft’s installer deems the hardware inadequate, replace them with newer models. People who “need more time”, in the company’s phrasing, can buy a year’s worth of security updates. Either way, Microsoft profits at our expense.

In 2014, Microsoft similarly end-of-lifed 13-year-old Windows XP. At the time, many were unsympathetic to the complaints, thinking it unreasonable to expect a company to maintain software for that long. Yet it was obvious even then that software lives on, with or without support, for far longer than people expect, and that trashing millions of functional computers was stupidly wasteful. Microsoft is giving Windows 10 a *shorter* life, which is rather obviously the wrong direction for a planet drowning in electronic waste.

XP’s end came at a time when the computer industry was transitioning from adolescence to maturity. As long as personal computing was being constrained by the limited capabilities of hardware and research and development was improving them at a fast pace, a software company like Microsoft could count on frequent new sales. By 2014, that happy time had ended, and although computers continue to add power and speed, it’s not coming back. The same pattern has been repeated with phones, which no longer improve on an 18-month cycle as in the 2010s, and cameras.

For the vast majority, there’s no reason to replace their old machine unless a non-replaceable part is failing – and there should be less of that as manufacturers are forced to embrace repairability. Significantly, there’s less and less difference for many of us if we keep the old hardware and switch to Linux, eliminating Microsoft entirely.

Those fast-moving days were real obsolescence. What we have now is what we used to call “planned obsolescence”. That is, *forced* obsolescence that companies impose on us because it’s convenient and profitable for *them*.

This time round, people are more critical, not least because of the vast amounts of ewaste being generated. The Public Interest Research Group has written an open letter asking people to petition Microsoft to extend free support for Windows 10. As Ed Bott explains at ZDNet, you do have the option of kicking the can down the road by paying for updates for another three years.

The other antisocial side of terminating free security updates is that millions of those still-functional machines will remain in use, and will be increasingly insecure as new vulnerabilities are discovered and left unpatched.

Simultaneously, Windows is enshittifying; it’s harder to run Windows without a Microsoft login, to avoid stupid gewgaws and unwanted news headlines, and to turn off its “Copilot AI”. Tom Warren reports at The Verge that Microsoft wants to turn Copilot into an agent that can book restaurants and control its Edge browser. There are, it appears, ways to defeat all this in Windows 11, but for how long?

In a piece on solar technology, Cory Doctorow outlines the process by which technology companies seize control once they can no longer rely on consumer demand to drive sales. They lock down their technology if they can, lock in customers, add advertising, and block market entry, claiming safety and/or security make it necessary. They write and lobby for legislation that enshrines their advantage. And they use technological changes to render past products obsolete. Many think this is the real story behind the insistence on forcing unwanted “AI” features into everything: it’s the one thing they can do to make their offerings sound new.

Seen in that light, the rush to build “AI” into everything becomes a rush to find a way to force people to buy new stuff. The problem is that – it feels like – most people don’t see much benefit in it, and go around turning off the AI features that are forced on them. Microsoft’s Recall feature, which takes a screen snapshot every few seconds, was so controversial at launch that the company rolled it back – for a while, anyway.

Carelessness about ewaste is everywhere, particularly with respect to the Internet of Things. This week: Logitech’s Pop smart home buttons. At least when Google ended support for older Nest thermostats they could go on working as “dumb” thermostats (which honestly seems like the best kind).

Ewaste is getting a whole lot worse when it desperately needs to be getting a whole lot better.

***

In the ongoing rollout of the Online Safety Act and age verification update, at 404 Media, Joseph Cox reports that Discord has become the first site reporting a hack of age verification data. Hackers have collected data pertaining to 70,000 users, including selfies, identity documents, email addresses, approximate residences, and so on, and are trying to extort Discord, which says the hackers breached one of its third-party vendors that handles age-related appeals. Security practitioners warned about this from the beginning.

In addition, Ofcom has launched a new consultation for the next round of Online Safety Act enforcement. Up next are livestreaming and algorithmic recommendations; the Open Rights Group has an explainer, as does lawyer Graham Smith. The consultation closes on October 20.

Illustrations: One use for old computers – movie stardom, as here in Brazil.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Undue process

To the best of my knowledge, Imgur is the first mainstream company to quit the UK in response to the Online Safety Act (though many US news sites remain unavailable due to 2018’s General Data Protection Regulation). Widely used to host pictures for reuse on web forums and social media, Imgur shut off UK connections on Tuesday. In a statement on Wednesday, the company said UK users can still exercise their data protection rights. That is, Imgur will reply within the statutory timeframe to requests for copies of our data or for the account to be deleted.

In this case, the push came from the Information Commissioner’s Office. In a statement, the ICO explains that on September 10 it notified Imgur’s owner, MediaLab AI, of its provisional findings from its previously announced investigation into “how the company uses children’s information and its approach to age assurance”. The ICO proposed to fine Imgur. Imgur promptly shut down UK access. The ICO’s statement says departure changes nothing: “We have been clear that exiting the UK does not allow an organisation to avoid responsibility for any prior infringement of data protection law, and our investigation remains ongoing.”

The ICO calls Imgur’s departure “a commercial decision taken by the company”. While that’s true, EU and UK residents have dealt for years with unwanted cookie consent banners because companies subject to data protection laws have engaged in malicious compliance intended to spark a rebellion against the law. So: wash.

Many individual users stick to Imgur’s free tier, but it profits from subscriptions and advertising. MediaLab AI bought it in 2021, and uses it as a platform to mount advertising campaigns at scale for companies like Kraft-Heinz and Alienware.

Meanwhile, UK users’ Imgur accounts are effectively hostages. We don’t want lawless companies. We also don’t want bad laws – or laws that are badly drafted and worse implemented. Children’s data should be protected – but so should everyone’s. There remains something fundamentally wrong with having a service many depend upon yanked with no notice.

Companies’ threats to leave the market rather than comply with the law are often laughable – see for example Apple’s threat to leave the EU if it doesn’t repeal the Digital Markets Act. This is the rare occasion when a company has actually done it (although presumably they can turn access back on at any time). If there’s a lesson here, it may be that without EU membership Britain is now too small for foreign companies to bother complying with its laws.

***

Boundary disputes and due process are also the subject of a lawsuit launched in the US against Ofcom. At the end of August, 4chan and Kiwi Farms filed a complaint in a Washington, DC federal court against Ofcom, claiming the regulator is attempting to censor them and using the OSA to “target the free speech rights of Americans”.

We hear less about 4chan these days, but in his book The Other Pandemic, journalist James Ball traces much of the spread of QAnon and other conspiracy theories to the site. In his account, these memes start there, percolate through other social media, and become mainstream and monetized on YouTube. Kiwi Farms is equally notorious for targeted online and offline harassment.

The argument mooted by the plaintiffs’ lawyer Preston Byrne is that their conduct is lawful within the jurisdictions where they’re based and that UK and EU countries seeking to enforce their laws should do so through international treaties and courts. There’s some precedent to the first bit, albeit in a different context. In 2010, the New York State legislature and then the US Congress passed the Libel Tourism Protection Act. Under it, US courts are prevented from enforcing British libel judgments if the rulings would not stand in a US court. The UK went on to modify its libel laws in 2013.

Any country has the sovereignty to demand that companies active within its borders comply with its laws, even laws that are widely opposed, and to punish them if they don’t, which is another thing 4chan’s lawyers are complaining about. The question the Internet has raised since the beginning (see also the Apple case and, before it, the 1996 case United States v. Thomas) is where the boundary is and how it can be enforced. 4chan is trying to argue that the penalties Ofcom provisionally intends to apply are part of a campaign of targeted harassment of US technology companies. Odd to see *4chan* adopting the technique long ago advocated by staid, old IBM: when under attack, wrap yourself in the American flag.

***

Finally, in the consigned-to-history category, AOL shut down dialup on September 30. I recall traveling with a file of all of the dialup numbers the even earlier service, CompuServe, maintained around the world. It was, in its time, a godsend. (Then AOL bought up the service, its biggest competitor before the web, and shut it down, seemingly out of spite.) For this reason, my sympathies are with the 124,000 US users the US Census Bureau says still rely on dial-up – only a few thousand of them were paying for AOL, per CNBC – and the uncounted others elsewhere. It’s easy to forget when you’re surrounded by wifi and mobile connections that Internet access remains hard for many people.

Elsewhere this week: Childproofing the Internet, at Skeptical Inquirer.

Illustrations: Imgur’s new UK home page.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: The AI Con

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
By Emily Bender and Alex Hanna
HarperCollins
ISBN: 978-0-06-341856-1

Enormous sums of money are sloshing around AI development. Amazon is handing $8 billion to Anthropic. Microsoft is adding $1 billion worth of Azure cloud computing to its existing massive stake in OpenAI. And Nvidia is pouring $100 billion in the form of chips into OpenAI’s project to build a gigantic data center, while Oracle is borrowing $100 billion in order to give OpenAI $300 billion worth of cloud computing. Current market *revenue* projections? $85 billion in 2029. So they’re all fighting for control over the Next Big Thing, which projections suggest will never pay off. Warnings that the AI bubble may be about to splatter us all are coming from Cory Doctorow and Ed Zitron – and the Daily Telegraph, The Atlantic, and the Wall Street Journal. Bain Capital says the industry needs another $800 billion in investment now and $2 trillion by 2030 to meet demand.

Many talk about the bubble and economic consequences if it bursts. Few talk about the opportunity costs as AI sucks money and resources away from other things that might be more valuable. In The AI Con, linguistics professor Emily Bender and DAIR Institute director of research Alex Hanna provide an exception. Bender is one of the four authors of the seminal 2021 paper On the Dangers of Stochastic Parrots, which arguably founded AI-skepticism.

In the book, the authors review much that’s familiar: the many layers of humans required to code, train, correct, and mind “AI”: the programmers, designers, data labelers, and raters, along with the humans waiting to take over when the AI fails. They also go into the water, energy, and labor demands of data centers and present approaches to AI.

Crucially, they avoid both doomerism and boosterism, which they understand as alternative sides of the same coin. Both the fully automated hellscape Doomers warn against and the Boosters’ world governed by a benign synthetic intelligence ignore the very real harms taking place at present. Doomers promote “AI safety” using “fake scenarios” meant to frighten us. Think HAL in the movie 2001: A Space Odyssey or Nick Bostrom’s paperclip maximizer. Boosters rail against the constraints implicit in sustainability, trust and safety organizations within technology companies, and government regulation. We need, Bender and Hanna write, to move away from speculative risks and toward working on the real problems we have. Hype, they conclude, doesn’t have to be true to do harm.

The book ends with a chapter on how to resist hype. Among their strategies: persistently ask questions such as how a system is evaluated, who is harmed and who benefits, how the system was developed and with what kind of data and labor practices. Avoid language that humanizes the system – no “hallucinations” for errors. Advocate for transparency and accountability, and resist the industry’s claims that the technology is so new there is no way to regulate it. The technology may be new, but the principles are old. And, when necessary, just say no and resist the narrative that its progress is inevitable.