Review: The AI Con

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
By Emily Bender and Alex Hanna
HarperCollins
ISBN: 978-0-06-341856-1

Enormous sums of money are sloshing around AI development. Amazon is handing $8 billion to Anthropic. Microsoft is adding $1 billion worth of Azure cloud computing to its existing massive stake in OpenAI. And Nvidia is pouring $100 billion in the form of chips into OpenAI’s project to build a gigantic data center, while Oracle is borrowing $100 billion in order to give OpenAI $300 billion worth of cloud computing. Current market *revenue* projections? $85 billion in 2029. So they’re all fighting for control over the Next Big Thing, which projections suggest will never pay off. Warnings that the AI bubble may be about to splatter us all are coming from Cory Doctorow and Ed Zitron – and the Daily Telegraph, The Atlantic, and the Wall Street Journal. Bain & Company says the industry needs another $800 billion in investment now and $2 trillion by 2030 to meet demand.

Many talk about the bubble and economic consequences if it bursts. Few talk about the opportunity costs as AI sucks money and resources away from other things that might be more valuable. In The AI Con, linguistics professor Emily Bender and DAIR Institute director of research Alex Hanna provide an exception. Bender is one of the four authors of the seminal 2021 paper On the Dangers of Stochastic Parrots, which arguably founded AI-skepticism.

In the book, the authors review much that’s familiar: the many layers of humans required to code, train, correct, and mind “AI” – the programmers, designers, data labelers, and raters, along with the humans waiting to take over when the AI fails. They also go into the water, energy, and labor demands of the data centers that underpin current approaches to AI.

Crucially, they avoid both doomerism and boosterism, which they understand as two sides of the same coin. Both the fully automated hellscape the Doomers warn against and the Boosters’ world governed by a benign synthetic intelligence ignore the very real harms taking place at present. Doomers promote “AI safety” using “fake scenarios” meant to frighten us. Think HAL in the movie 2001: A Space Odyssey or Nick Bostrom’s paperclip maximizer. Boosters rail against the constraints implicit in sustainability, trust and safety organizations within technology companies, and government regulation. We need, Bender and Hanna write, to move away from speculative risks and toward working on the real problems we have. Hype, they conclude, doesn’t have to be true to do harm.

The book ends with a chapter on how to resist hype. Among their strategies: persistently ask questions such as how a system is evaluated, who is harmed and who benefits, how the system was developed and with what kind of data and labor practices. Avoid language that humanizes the system – no “hallucinations” for errors. Advocate for transparency and accountability, and resist the industry’s claims that the technology is so new there is no way to regulate it. The technology may be new, but the principles are old. And, when necessary, just say no and resist the narrative that its progress is inevitable.

Review: Tor

Tor: From the Dark Web to the Future of Privacy
by Ben Collier
MIT Press
ISBN: 978-0-262-54818-2

The Internet began as a decentralized system designed to reroute traffic in case a part of the network was taken out by a bomb. Far from being neutral, the technology intentionally supported the democratic ideals of its time: freedom of expression, freedom of access to information, and freedom to code – that is, build new applications for the Internet without needing permission. Over the decades since, IT has relentlessly centralized. Among the counterweights to this consolidation is Tor, “the onion routing”.

In Tor: From the Dark Web to the Future of Privacy (free to download), Ben Collier offers a biography that seems to recapitulate those early days – but so far with a different outcome.

Collier traces Tor’s origins to the late Ross Anderson’s 1997 paper The Eternity Service. In it, Anderson proposed a system for making information indelible by replicating it anonymously across a large number of machines of unknown location so that it would become too expensive to delete it (or, in Anderson’s words, “drive up the cost of selection service denial attacks”). That sort of redundancy is fundamental to the way the Internet works for communications. Around the same time, people were experimenting with ways of routing information such as email through multiple anonymized channels in order to protect it from interference – much used, for example, to protect those exposing Scientology’s secrets. Anderson himself indicated the idea’s usefulness in guaranteeing individual liberties.

As Collier writes, in those early days many spoke as though the Internet’s technology was sufficient to guarantee the export of democratic values to countries where they were not flourishing. More recently, I’ve seen arguments that technology is inherently anti-democratic. Both takes attribute to the technology motivations that properly belong to its controllers and owners.

This is where Collier’s biography strikes a different course by showing the many adaptations the project has made since its earliest discussions circa 2001* between Roger Dingledine and Nick Mathewson to avoid familiar trends such as centralization and censorship – think the trends that got us the central-point-of-failure Internet Archive instead of the Eternity Service. Because Tor began later, Dingledine and Mathewson were able to learn from previous efforts such as PGP and Zero Knowledge Systems to spread strong encryption and bring privacy protection to the mainstream. One such lesson was that the mathematical proofs that dominated cryptography were less important than ensuring usability. At the same time, Collier watches Dingledine and Mathewson resist the temptation to make a super-secure mode and a “stupid mode” that would become the path of least resistance for most users, jeopardizing the security of the entire network.

Most technology biographies focus on one or two founders. Faced with a sprawling system, Collier has resisted that temptation, and devotes a chapter each to the project’s technological development, relay node operators, and maintainers. The fact that these are distinct communities, he writes, has helped keep the project from centralizing. He goes on to discuss the inevitable emergence of criminal uses for Tor, its use as a tool for activism, and finally the future of privacy.

To those who have heard of Tor only as a browser used to access the “dark web”, the notion that it deserves a biography may seem surprising. But the project’s ambitions have grown over time, from privacy as a service, to privacy as a structure, to privacy as a struggle. Ultimately, he concludes, Tor is a hack that has penetrated the core of Internet infrastructure, designing around control points. It is, in other words, much closer to the Internet the pioneers said they were building than the Internet of Facebook and Google.

*This originally said “founding in 2006”; that is when the project created today’s formal non-profit organization.

Review: The Promise and Peril of CRISPR

The Promise and Peril of CRISPR
Edited by Neal Baer
Johns Hopkins University Press
ISBN: 978-1-4214-4930-2

It’s an interesting question: why are there so many articles in which eminent scientists fear an artificial general superintelligence (which is pure fantasy for the foreseeable future)…and so few that are alarmed by human gene editing tools, which are already arriving? The pre-birth genetic selection in the 1997 movie Gattaca is closer to reality than an AGI that decides to kill us all and turn us into paperclips.

In The Promise and Peril of CRISPR, Neal Baer collects a series of essays considering the ethical dilemmas posed by a technology that could be used to eliminate whole classes of disease and disabilities. The promise is important: gene editing offers the possibility of curing chronic, painful, debilitating congenital conditions. But for everything, there may be a price. A recent episode of HBO Max’s TV series The Pitt showed the pain that accompanies sickle cell anemia. But that same condition confers protection against malaria, which was an evolutionary advantage in some parts of the world. There may be many more such tradeoffs whose benefits are unknown to us.

Baer started with a medical degree but quickly found work as a TV writer. He is best known for his work on the first seven years of ER and seasons two through twelve of Law and Order: Special Victims Unit. He has also maintained his medical career as an academic pediatrician, writing extensively and working with many health care-related organizations.

Most books on new technologies like CRISPR (for clustered regularly interspaced short palindromic repeats) are either all hype or all panic. In pulling together the collection of essays that make up The Promise and Peril of CRISPR, Baer has brought in voices rarely heard in discussions of new technologies. Ethan Weiss tells the story of his daughter, who was born with albinism, which has more difficult consequences than simple lack of pigmentation. Had the technology been available, he writes, they might have opted to correct the faulty gene that causes it; lacking that, they discovered new richness in life that they would never wish to give up. In another essay, Florence Ashley explores the potential impact from several directions on trans people, who might benefit from being able to alter their bodies through genetics rather than frequent medical interventions such as hormones. And in a third, Krystal Tsosie considers the impact on indigenous peoples, warning against allowing corporate ownership of DNA.

Other essays consider the implications for conditions such as cystic fibrosis (Sandra Sufian) and deafness (Carol Padden and Jacqueline Humphries), as well as international human rights. One use Baer omits, despite its status as intermittent media fodder since techniques for gene editing were first developed, is performance enhancement in sports. There is so far no imaginable way to test athletes for it. And anyone who’s watched junior sports knows there are definitely parents crazy enough to adopt any technology that will improve their kids’ abilities. Baer was smart to skip this; it will be a long time before CRISPR is cheap enough and advanced enough to be accessible for that sort of thing.

In one essay, molecular biologist Ellen D. Jorgensen discusses a class she co-designed to facilitate teaching CRISPR to anyone who cared to learn. At the time, the media were focused on its dangers, and she believed that teaching it would help alleviate public fear. Most uses, she writes, are benign; if previous scientific advances are any guide, much will depend on who wields it and for what purpose.

Review: Vassal State

Vassal State: How America Runs Britain
by Angus Hanton
Swift Press
ISBN: 978-1-80075-390-7

Tax organizations estimate that a bit under 200,000 expatriate Americans live in the UK. It’s only a tiny percentage of the overall population of 70 million, but of course we’re not evenly distributed. In my bit of southwest London, the butcher – recently and abruptly shuttered due to rising costs – has advertised “Thanksgiving turkeys” for more than 30 years.

In Vassal State, however, Angus Hanton shows that US interests permeate and control the UK in ways far more significant than a handful of expatriates. This is not, he stresses, an equal partnership, despite the perennial photos of the British prime minister being welcomed to the White House by the sitting president, as shown satirically in 1986’s Yes, Prime Minister. Hanton cites the 2020 decision to follow the US and ban Huawei as an example, writing that the US pressure at the time “demonstrated the language of partnership coupled with the actions of control”. Obama staffers, he is told, used to joke about the “special relationship”.

Why invade when you can buy and control? Hanton lists a variety of vectors for US influence. Many of Britain’s best technology startups wind up sold to US companies, permanently alienating their profits – see, for example, DeepMind, sold to Google in 2014, and Worldpay, sold to Vantiv in 2019, which then took its name. US buyers also target long-established companies, such as 176-year-old Boots, which since 2014 has been part of Walgreens and is now being bought up by the Sycamore Partners private equity fund. To Americans, this may not seem like much, but Boots is a national icon and an important part of delivering NHS services such as vaccinations. No one here voted for Sycamore Partners to benefit from that, nor did they vote for Kraft to buy Cadbury’s in 2010 and abandon the Bournville headquarters of a company founded in 1824.

In addition, US companies are burrowed into British infrastructure. Government ministers communicate with each other over WhatsApp. Government infrastructure is supplied by companies like Oracle and IBM, and, lately, Palantir, which are hard to dig out once embedded. A seventh of the workforce are precariously paid by the US-dominated gig economy. The vast majority of cashless transactions pay a slice to Visa or Mastercard. And American companies use the roads, local services, and other infrastructure while paying less in tax than their UK competition. More controversially for digital rights activists, Hanton complains about the burden that US-based streamers like Netflix, Apple, and Amazon place on the telecommunications networks. Among the things he leaves out: the technology platforms in education.

Hanton’s book comes at a critical moment. Previous administrations have perhaps been more polite about demanding US-friendly policies, but now Britain, on its own outside the EU, is facing Donald Trump’s more blatant demands. Among them: that suppliers to the US government comply with its anti-DEI policies. In countries where diversity, equity, and inclusion are fundamental rights, the US is therefore demanding that its law should take precedence.

In a timeline fork in which Britain remained in the EU, it would be in a much better position to push back. In *this* timeline, Hanton’s proposed remedies – reform the tax structure, change policies, build technological independence – are much harder to implement.

Review: Careless People

Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism
By Sarah Wynn-Williams
Macmillan
ISBN: 978-1035065929

In his 2021 book Social Warming, Charles Arthur concludes his study of social media with the observation that the many harms he documented happened because no one cared to stop them. “Nobody meant for this to happen,” he writes to open his final chapter.

In her new book, Careless People, about her time at Facebook, former New Zealand diplomat Sarah Wynn-Williams shows the truth of Arthur’s take. A sad tale of girl-meets-company, girl-loses-company, girl-tells-her-story, it starts with Wynn-Williams stalking Facebook to identify the right person to pitch hiring her to build its international diplomatic relationships. I kept hoping increasing dissent and disillusion would lead her to quit. Instead, she stays until she’s fired after HR dismisses her complaint of sexual harassment.

In 2011, when Wynn-Williams landed her dream job, Facebook’s wild expansion was at an early stage. CEO Mark Zuckerberg is awkward, sweaty, and uncomfortable around world leaders, who are dismissive. By her departure in 2017, presidents of major countries want selfies with him and he’s much more comfortable – but no longer cares. Meanwhile, then-Chief Operating Officer Sheryl Sandberg, wealthy from her time at Google, becomes a celebrity via her book, Lean In, written with the former TV comedy writer Nell Scovell. Sandberg’s public feminism clashes with her employee’s experience. When Wynn-Williams’s first child is a year old, a fellow female employee congratulates her on keeping the child so well-hidden she didn’t know it existed.

The book provides hysterically surreal examples of American corporatism. She is in the delivery room, feet in stirrups, ordered to push, when a text arrives: can she draft talking points for Davos? (She tries!) For an Asian trip, Zuckerberg wants her to arrange a riot or peace rally so he can appear to be “gently mobbed”. When the company fears “Mark” or “Sheryl” might be arrested if they travel to Korea, managers try to identify a “body” who can be sent in as a canary. Wynn-Williams’s husband has to stop her from going. Elsewhere, she uses her diplomatic training to land Zuckerberg a “longer-than-normal handshake” with Xi Jinping.

So when you get to her failure to get her bosses to beef up the two-person content moderation team for Myanmar’s 60 million people, rewrite the software so Burmese characters render correctly, and post country-specific policies, it’s obvious what her bosses will decide. The same is true of internal meetings discussing the tools later revealed to let advertisers target depressed teens. Wynn-Williams hopes for a safe way forward, but warns that company executives’ “lethal carelessness” hasn’t changed.

Cultural clash permeates this book. As a New Zealander, she’s acutely conscious of the attitudes she encounters, and especially of the wealth and class disparity that divide the early employees from later hires. As pregnancies bring serious medical problems and a second child, the very American problem of affording health insurance makes offending her bosses ever riskier.

The most important chapters, whose in-the-room tales fill in gaps in books by Frances Haugen, Sheera Frenkel and Cecilia Kang, and Steven Levy, are those in which Wynn-Williams recounts the company’s decision to embrace politics and build its business in China. If, her bosses reason, politicians become dependent on Facebook for electoral success, they will balk at regulating it. Donald Trump’s 2016 election, which Zuckerberg initially denied had been significantly aided by Facebook, awakened these political aspirations. Meanwhile, Zuckerberg leads the company to build a censorship machine to please China. Wynn-Williams abhors all this – and refuses to work on China. Nonetheless, she holds onto the hope that she can change the company from inside.

Apparently having learned little from Internet history, Meta has turned this book into a bestseller by trying to suppress it. Wynn-Williams managed one interview, with Business Insider, before an arbitrator’s injunction stopped her from promoting the book or making any “disparaging, critical or otherwise detrimental comments” related to Meta. This fits the man Wynn-Williams depicts who hates to lose so much that his employees let him win at board games.

Review: Dark Wire

Dark Wire
by Joseph Cox
PublicAffairs (Hachette Group)
ISBNs: 9781541702691 (hardcover), 9781541702714 (ebook)

One of the basic principles that emerged as soon as encryption software became available to ordinary individuals on home computers was this: everyone should encrypt their email so the people who really need the protection don’t stick out as targets. At the same time, the authorities were constantly warning that if encryption weren’t controlled by key escrow, an implanted back door, or restrictions on its strength, it would help hide the activities of drug traffickers, organized crime, pedophiles, and terrorists. The same argument continues today.

Today, billions of people have access to encrypted messaging via WhatsApp, Signal, and other services. Governments still hate it, but they *use* it; the UK government is all over WhatsApp, as multiple public inquiries have shown.

In Dark Wire: The Incredible True Story of the Largest Sting Operation Ever, Joseph Cox, one of the four founders of 404 Media, takes us on a trip through law enforcement’s adventures in encryption, as police try to identify and track down serious criminals making and distributing illegal drugs by the ton.

The story begins with Phantom Secure, a scheme that stripped down BlackBerry devices, installed PGP to encrypt email, and ensured the devices could exchange messages only with other Phantom Secure devices. The service became popular among all sorts of celebrities, politicians, and other non-criminals who value privacy – but not *only* them. All perfectly legal.

One of my favorite moments comes early, when a criminal debating whether to trust a new contact decides he can because he has one of these secure BlackBerries. The criminal trusted the supply chain; surely no one would have sold him one of these things without thoroughly checking that he wasn’t a cop. Spoiler alert: he was a cop. That sale helped said cop and his colleagues in the United States, Australia, Canada, and the Netherlands infiltrate the network, arrest a bunch of criminals, and shut it down – eventually, after setbacks, and with the non-US forces frustrated and amazed by US Constitutional law limiting what agents were allowed to look at.

Phantom Secure’s closure left a hole in the market as security-conscious criminals scrambled to find alternatives. It was rapidly filled by competitors working with modified phones: EncroChat and Sky ECC. As users migrated to these services and law enforcement worked to infiltrate and shut them down as well, former Phantom Secure salesman “Afgoo” had a bright idea, which he offered to the FBI: why not build their own encrypted app and take over the market?

The result was Anom. From the sounds of it, some of its features were quite cool. For example, the app itself hid behind an innocent-looking calculator, which acted as a password gateway. Type in the right code, and the messaging app appeared. The thing sold itself.

Of course, the FBI insisted on some modifications. Behind the scenes, Anom devices sent copies of every message to the FBI’s servers. Eventually, the floods of data the agencies harvested this way led to 500 arrests on one day alone, and the seizure of hundreds of firearms and dozens of tons of illegal drugs and precursor chemicals.

Some of the techniques the criminals use are fascinating in their own right. One method of in-person authentication involved using the unique serial number on a bank note, sending it in advance; the mule delivering the money would simply have to show they had the bank note, a physical one-time pad. Banks themselves were rarely used. Instead, cash would be stored in safe houses in various countries and the money would never have to cross borders. So: no records, no transfers to monitor. All of this spilled open for law enforcement because of Anom.

And yet. Cox waits until the end to voice reservations. All those seizures and arrests barely made a dent in the world’s drug trade – a “rounding error”, Cox calls it.

Review: The Web We Weave

The Web We Weave
By Jeff Jarvis
Basic Books
ISBN: 9781541604124

Sometime in the very early 1990s, someone came up to me at a conference and told me I should read the work of Robert McChesney. When I followed the instruction, I found a history of how radio and TV started as educational media and wound up commercially controlled. Ever since, that has been the lens through which I’ve watched the Internet develop: how do we keep the Internet from following that same path? If all you look at is the last 30 years of web development, you might think we can’t.

A similar mission animates retired CUNY professor Jeff Jarvis in his latest book, The Web We Weave. In it, among other things, he advocates reanimating the open web by reviving the blogs many abandoned when Twitter came along and embracing other forms of citizen media. Phenomena such as disinformation, misinformation, and other harms attributed to social media, he writes, have precursor moral panics: novels, comic books, radio, TV, all were once new media whose evils older generations fretted about. (For my parents, it was comic books, which they completely banned while ignoring the hours of TV I watched.) With that past in mind, much of today’s online harms regulation leaves him skeptical.

As a media professor, Jarvis is interested in the broad sweep of history, setting social media into the context that began with the invention of the printing press. That has its benefits when it comes to later chapters where he’s making policy recommendations on what to regulate and how. Jarvis is emphatically a free-speech advocate.

Among his recommendations are those such advocates typically support: users should be empowered, educated, and taught to take responsibility, and we should develop business models that support good speech. Regulation, he writes, should include the following elements: transparency, accountability, disclosure, redress, and behavior rather than content.

On the other hand, Jarvis is emphatically not a technical or design expert, and therefore has little to say about the impact on user behavior of technical design decisions. Some things we know are constants. For example, the willingness of (fully identified) online communicators to attack each other was noted as long ago as the 1980s, when Sara Kiesler studied the first corporate mailing lists.

Others, however, are not. Those developing Mastodon, for example, deliberately chose not to implement the ability to quote and comment on a post because they believed that feature fostered abuse and pile-ons. Similarly, Lawrence Lessig pointed out in 1999 in Code and Other Laws of Cyberspace (PDF) that you couldn’t foment a revolution using AOL chatrooms because they had a limit of 23 simultaneous users.

Understanding the impact of technical decisions requires experience, experimentation, and, above all, time. If you doubt this, read Mike Masnick’s series at Techdirt on Elon Musk’s takeover and destruction of Twitter. His changes to the verification system alone have undermined the ability to understand who’s posting and decide how trustworthy their information is.

Jarvis goes on to suggest we should rediscover human scale and mutual obligation, both crucial as the covid pandemic progressed. The money will always favor mass scale. But we don’t have to go that way.

Review: Supremacy

Supremacy: AI, ChatGPT, and the Race That Will Change the World
By Parmy Olson
Macmillan Business
ISBN: 978-1035038220

One of the most famous books about the process of writing software is Frederick Brooks’ The Mythical Man-Month. The essay that gives the book its title makes the point that you cannot speed up the process by throwing more and more people at it. The more people you have, the more they all have to communicate with each other, and the number of communication pathways grows quadratically. Think of it this way: 500 people can’t read a book faster than five people can.
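A quick back-of-the-envelope illustration of Brooks’s arithmetic (my numbers, not Olson’s): with $n$ people on a project, the number of pairwise communication channels is

$$\binom{n}{2} = \frac{n(n-1)}{2}$$

so five people need 10 channels, 200 need 19,900, and 5,000 need roughly 12.5 million – coordination overhead grows far faster than headcount.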

Brooks’ warning immediately springs to mind when Parmy Olson reports, late in her new book, Supremacy, that Microsoft CEO Satya Nadella was furious to discover that Microsoft’s 5,000 direct employees working on AI lagged well behind the rapid advances being made by the fewer than 200 working at OpenAI. Some things just aren’t improved by parallel processing.

The story Olson tells is a sad one: two guys, both eager to develop an artificial general intelligence in order to save, or at least help, humanity, who both wind up working for large commercial companies whose primary interests are to 1) make money and 2) outcompete the other guy. For Demis Hassabis, the company was Google, which bought his DeepMind startup in 2014. For Sam Altman, co-founder of OpenAI, it was Microsoft. Which fits: Hassabis’s approach to “solving AI” was to let machines teach themselves by playing games, hoping to drive scientific discovery; Altman sought to solve real-world problems and bring everyone wealth. Too late for Olson’s book, Hassabis has achieved enough of his dream to have been one of three awarded the 2024 Nobel Prize in chemistry for using AI to predict how proteins will fold.

For both, the reason was the same: the resources needed to work in AI – data, computing power, and high-priced personnel – are too expensive for traditional startup venture capital funding or for academia. (Cue Vladan Joler, at this year’s Computers, Privacy, and Data Protection conference, noting that AI is arriving “pre-monopolized”.) As Olson tells the story, both tried repeatedly to keep the companies they founded independent. Yet both wound up beholden to the companies whose money they took, apparently believing, like many geek founders with more IQ points than sense, that they would not have to give up control.

In comparing and contrasting the two founders, Olson shows where many of today’s problems came from. Allying themselves with Big Tech meant giving up on transparency. The ethicists who are calling out these companies over real and present harms caused by the tools they’ve built, such as bias, discrimination, and exploitation of workers performing tasks like labeling data, have 1% or less of the funding of those pushing safety for superintelligences that may never exist.

Olson does a good job of explaining the technical advances that led to the breakthroughs of recent years, as well as the business and staff realities of their different paths. A key point she pulls out is the extent to which both Google and Microsoft have become the kind of risk-averse, slow-moving, sclerotic company they despised when they were small, nimble newcomers.

Different paths, but ultimately, their story is the same: they fought the money, and the money won.

This perfect day

To anyone remembering the excitement over DNA testing just a few years ago, this week’s news about 23andMe comes as a surprise. At CNN, Allison Morrow reports that all seven independent board members have resigned to protest CEO Anne Wojcicki’s plan to take the company private by buying up all the shares she doesn’t already own at 40 cents each (closing price yesterday was $0.3301). The board wanted her to find a buyer offering a better price.

In January, Rolfe Winkler reported at the Wall Street Journal ($) that 23andMe is likely to run out of cash by next year. Its market cap has dropped from $6 billion to under $200 million. He and Morrow catalogue the company’s problems: it’s never made a profit nor had a sustainable business model.

The reasons are fairly simple: few repeat customers. With DNA testing, as Winkler writes, “Customers only need to take the test once, and few test-takers get life-altering health results.” 23andMe’s mooted revolution in health care turned out instead to be a fad. Now, the company is pivoting to sell subscriptions to weight loss drugs.

This strikes me as an extraordinarily dangerous moment: the struggling company’s sole unique asset is a pile of more than 10 million DNA samples whose owners have agreed they can be used for research. Many were alarmed when, in December 2023, hackers broke into 1.7 million accounts and gained access to 6.9 million customer profiles. The company said the hacked data did not include DNA records but did include family trees and other links. We don’t think of 23andMe as a social network. But the same affordances that enabled Cambridge Analytica to leverage a relatively small number of user profiles to create a mass of data derived from a much larger number of their Friends also worked on 23andMe. Given the way genetics works, this risk should have been obvious.

In 2004, the year of Facebook’s birth, the Australian privacy campaigner Roger Clarke warned in Very Black “Little Black Books” that social networks had no business model other than to abuse their users’ data. 23andMe’s terms and conditions promise to protect user privacy. But in a sale what happens to the data?

The same might be asked about the data that would accrue from Oracle CEO Larry Ellison’s surveillance-embracing proposals this week. Inevitably, commentators invoked George Orwell’s 1984. At Business Insider, Kenneth Niemeyer was first to report: “[Ellison] said AI will usher in a new era of surveillance that he gleefully said will ensure ‘citizens will be on their best behavior.'”

The all-AI-surveillance all-the-time idea could only be embraced “gleefully” by someone who doesn’t believe it will affect him.

Niemeyer:

“Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

“We’re going to have supervision,” Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”

Ellison is twenty-six years behind science fiction author David Brin, who proposed radical transparency in his 1998 non-fiction outing, The Transparent Society. But Brin saw reciprocity as an essential feature, believing it would protect privacy by making surveillance visible. Ellison is claiming that *inscrutable* surveillance will guarantee good behavior.

At 404 Media, Jason Koebler debunks Ellison point by point. Research and other evidence shows that securing schools is unlikely to make them safer; body cameras don’t appear to improve police behavior; and all the technologies Ellison talks about have problems with accuracy and false positives. Indeed, the mayor of Chicago wants to end the city’s contract with ShotSpotter (now SoundThinking), saying it’s expensive and doesn’t cut crime; some research says it slows police 911 response. Also worth noting: Simon Spichak at Brain Facts finds that humans make worse decisions with AI tools. So…not a good idea for police.

More disturbing is Koebler’s main point: most of the technology Ellison calls “future” is already here and failing to lower crime rates or address the causes of crime – while being very expensive. Ellison is already out of date.

The book Ellison’s fantasy evokes for me is the less-known This Perfect Day, by Ira Levin, written in 1970. The novel’s world is run by a massive computer (“Unicomp”) that decides all aspects of individuals’ lives: their job, spouse, how many children they can have. Enforcing all this are human counselors and permanent electronic bracelets individuals touch to ubiquitous scanners for permission.

Homogeneity rules: everyone is mixed race, there are only four boys’ names and four girls’ names, they eat “totalcakes”, drink cokes, and wear identical clothing. For the rest, regularly administered drugs keep everyone healthy and docile. “Fight” is an abominable curse word. The controlled world over which Unicomp presides is therefore almost entirely benign: there is no war or crime, and little disease. It rains only at night.

Naturally, the novel’s hero rebels, joins a group of outcasts (“the Incurables”), and finds his way to the secret underground luxury bunker where a few “Programmers” help Unicomp’s inventor, Wei Li Chun, run the world to his specification. So to me, Ellison’s plan is all about installing himself as world ruler. Which, I mean, who could object except other billionaires?

Illustrations: The CCTV camera on George Orwell’s Portobello Road house.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Review: Service Model

Service Model
By Adrian Tchaikovsky
Tor Publishing Group
ISBN: 978-1-250-29028-1

Charles is a highly sophisticated robot having a bad day. As a robot, “he” would not express it that way. Instead, he would say that he progresses through each item on his task list and notes its ongoing pointlessness. He checks his master’s travel schedule and finds no plans. Nonetheless, he completes his next tasks, laying out today’s travel clothes, dusting off yesterday’s unused set, and placing them back in the wardrobe, as he has every day for the 2,230 days since his master last left the house.

He goes on to ask House, the manor’s major-domo system, to check with the lady of the house’s maidservant for travel schedules, planned clothing, and other aspects of life. There has been no lady of the house, and therefore no maidservant, for 17 years and 12 days. An old subroutine suggests ways to improve efficiency by eliminating some of the many empty steps, but Charles has no instructions that would let him delete any of them, even when House reports errors. The morning routine continues. It’s tempting to recall Ray Bradbury’s short story “There Will Come Soft Rains”.

Until Charles and House jointly discover there are red stains on the car upholstery Charles has just cleaned…and on Charles’s hands, and on the master’s laid-out clothes, and on his bedclothes and on his throat where Charles has recently been shaving him with a straight razor…

The master has been murdered.

So begins Adrian Tchaikovsky’s post-apocalyptic science fiction novel Service Model.

Some time later – after a police investigation – Charles sets out to walk the long miles to report to Diagnostics, and perhaps thereafter to find a new master in need of a gentleman’s gentlebot. Charles would not say he “hoped”; he would say he awaits instructions, and that the resulting uncertainty is inefficiently consuming his resources.

His journey takes him through a landscape filled with other robots that have lost their purpose. Manor after manor along the road is dark or damaged; at one, a servant robot waits pointlessly to welcome guests who never come. The world, it seems, is stuck in recursive loops that cannot be overridden because the human staff required to do so have been…retired. At the Diagnostics center Charles finds more of the same: a queue of hundreds of robots waiting to be seen, stalled by the lack of a Grade Seven human to resolve the blockage.

Enter “the Wonk”, a faulty robot with no electronic link and a need to recharge at night and consume food, who sees Charles – now Uncharles, since he no longer serves the master who named him – as infected with the “protagonist virus” and wants him to join in searching for the mysterious Library, which is preserving human knowledge. Uncharles is more interested in finding humans he can serve.

Their further explorations of a post-apocalyptic world, thinly populated and filled with the rubble of cities, along with Uncharles’s efforts to understand his nature, form most of the rest of the book. Is Wonk’s protagonist virus even a real thing? He doubts that it is. And yet, he feels himself finding excuses to avoid taking on yet another pointless job.

The best part of all this is Tchaikovsky’s rendering of Charles/Uncharles’s thoughts about himself and his attempts to make sense of the increasingly absurd world around him. A long, long way into the book it’s still not obvious how it will end.