Cognitive dissonance

The annual State of the Net conference, in Washington, DC, always attracts politically diverse viewpoints. This year's event was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently-fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, remaining member Beth Williams said. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review of the EU-US Privacy Framework under Executive Order 14086 (2022) is ongoing. Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating Meta’s terms and conditions, these ads bring the company profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the Energy and Commerce Committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country” and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for “innovation”. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The Gulf of Google

In 1945, the then mayor of New York City, Fiorello La Guardia, signed a bill renaming Sixth Avenue. Eighty years later, even with street signs that include the new name, the vast majority of New Yorkers still say things like, “I’ll meet you at the southwest corner of 51st and Sixth”. You can lead a horse to Avenue of the Americas, but you can’t make him say it.

US president Donald Trump’s order renaming the Gulf of Mexico offers a rarely discussed way to splinter the Internet (at the application layer, anyway; geography matters!), and on Tuesday Google announced it would change the name for US users of its Maps app. As many have noted, this contravenes Google’s 2008 policy on naming bodies of water in Google Earth: “primary local usage”. A day later, reports emerged that Google had placed the US on its short list of sensitive countries – that is, ones whose rulers dispute the names and ownership of various territories: China, Russia, Israel, Saudi Arabia, Iraq.
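What that application-layer splintering looks like in practice is mundane: the same map feature resolving to different names depending on where the user is. A minimal sketch of the idea (mine, entirely hypothetical; Google’s actual implementation is unknown):

```python
# Hypothetical illustration of application-layer splintering: one feature,
# different "truths" per country. All names and structure are my assumptions.
NAMES = {
    "gulf_of_mexico": {"US": "Gulf of America", "default": "Gulf of Mexico"},
}

def display_name(feature: str, country_code: str) -> str:
    names = NAMES[feature]
    return names.get(country_code, names["default"])

print(display_name("gulf_of_mexico", "US"))  # Gulf of America
print(display_name("gulf_of_mexico", "MX"))  # Gulf of Mexico
```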

Sharpieing a new name on a map is less brutal than invading, but it’s a game anyone can play. Seen on Mastodon: the bay, now labeled “Gulf of Fragile Masculinity”.

***

Ed Zitron has been expecting the generative AI bubble to collapse disastrously. Last week provided an “Is this it?” moment when the Chinese company DeepSeek released reasoning models that outperform the best of the west at a fraction of the cost and computing power. US stock market investors: “Let’s panic!”

The code, though not the training data, is open source, as is the relevant research. In Zitron’s analysis, the biggest loser here is OpenAI, though it didn’t seem like that to investors in other companies, especially Nvidia, whose share price dropped 17% on Tuesday alone. In an entertaining sideshow, OpenAI complains that DeepSeek stole its code – ironic given the history.

On Monday, Jon Stewart quipped that Chinese AI had taken American AI’s job. From there the countdown started until someone invoked national security.

Nvidia’s chips have been the picks and shovels of generative AI, just as they were for cryptocurrency mining. In the latter case, Nvidia’s fortunes waned when cryptocurrency prices crashed, Ethereum, among others, switched to proof of stake, and miners shifted to more efficient, lower-cost application-specific integrated circuits. All of these lowered computational needs. So it’s easy to believe the pattern is repeating with generative AI.

There are several ironies here. The first is that the potential for small language models to outshine large ones has been known since at least 2020, when Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major wrote their stochastic parrots paper. Google soon fired Gebru, who told Bloomberg this week that AI development is being driven by FOMO rather than interesting questions. Second, as an AI researcher friend points out, Hugging Face, which is trying to replicate DeepSeek’s model from scratch, said the same thing two years ago. Imagine if someone had listened.

***

A work commitment forced me to slog through Ross Douthat’s lengthy interview with Marc Andreessen at the New York Times. Tl;dr: Andreessen says Silicon Valley turned right because Democrats broke The Deal under which Silicon Valley supported liberal democracy and the Democrats didn’t regulate them. In his whiny victimhood, Andreessen has no recognition that changes in Silicon Valley’s behavior – and the scale at which it operates – are *why* Democrats’ attitudes changed. If Silicon Valley wants its Deal back, it should stop doing things that are obviously exploitive. Random case in point: Hannah Ziegler reports at the Washington Post that a $1,700 bassinet called a “Snoo” suddenly started demanding $20 per month to keep rocking a baby all night. I mean, for that kind of money I pretty much expect the bassinet to make its own breast milk.

***

Almost exactly eight years ago, Donald Trump celebrated his installation in the US presidency by issuing an executive order that risked up-ending the legal basis for data flows between the EU, which has strict data protection laws, and the US, which doesn’t. This week, he did it again.

In 2017, Executive Order 13768 dominated Computers, Privacy, and Data Protection. The deal in place at the time, Privacy Shield, survived until 2020, when it was struck down in lawyer Max Schrems’s second such case. It was replaced by the Transatlantic Data Privacy Framework, which relies on the five-member Privacy and Civil Liberties Oversight Board to oversee surveillance and, as Politico explains, handle complaints from Europeans about misuse of their data.

This week, Trump rendered the board non-operational by firing its three Democrats, leaving just one Republican member in place.*

At Techdirt, Mike Masnick warns the framework could collapse, costing Facebook, Instagram, WhatsApp, YouTube, exTwitter, and other US-based services (including Truth Social) their European customers. At his NGO, noyb, Schrems himself takes note: “This deal was always built on sand.”

Schrems adds that another Trump Executive Order gives 45 days to review and possibly scrap predecessor Joe Biden’s national security decisions, including some the framework also relies on. Few things ought to scare US – and, in a slew of new complaints, Chinese – businesses more than knowing Schrems is watching.

Illustrations: The Gulf of Mexico (NASA, via Wikimedia).

*Corrected to reflect that the three departing board members are described as Democrats, not Democrat-appointed. In fact, two of them, Ed Felten and Travis LeBlanc, were appointed by Trump in his original term.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The AI moment

“Why are we still talking about digital transformation?” The speaker was convening a session at last weekend’s UK Govcamp, an event organized by and for civil servants with an interest in digital stuff.

“Because we’ve failed?” someone suggested. These folks are usually *optimists*.

Govcamp is a long-running tradition that began as a guerrilla effort in 2008. At the time, civil servants wanting to harness new technology in the service of government were so thin on the ground they never met until one of them, Jeremy Gould, convened the first Govcamp. These are people who are willing to give up a Saturday in order to do better at their jobs working for us. All hail.

It’s hard to remember now, nearly 15 years on, the excitement in 2010 when David Cameron’s incoming government created the Government Digital Service and embedded it into the Cabinet Office. William Heath immediately ended the Ideal Government blog he’d begun writing in 2004 to press insistently for better use of digital technologies in government. The government had now hired all the people he could have wanted it to, he said, and therefore, “its job is done”.

Some good things followed: tilting government procurement to open the way for smaller British companies, consolidating government publishing, other things less visible but still important. Some data became open. All this has improved processes like applying for concessionary travel passes and other government documents, and made government publishing vastly more usable. The improvement isn’t universal: my application last year to renew my UK driver’s license was sent back because my signature strayed outside the box provided for it.

That’s just one way the business of government doesn’t feel that different. The whole process of developing legislation – green and white papers, public consultations, debates, and amendments – marches on much as it ever has, though with somewhat wider access because the documents are online. Thoughts about how to make it more participatory were the subject of a teacamp in 2013. Eleven years on, civil society is still reading and responding to government consultations in the time-honored way, and policy is still made by the few for the many.

At Govcamp, the conversation ranged between the realities of their working lives and the difficulties the systems posed for users – that is, the rest of us. “We haven’t removed those little frictions,” one said, evoking the old speed comparisons between Amazon (delivers tomorrow or even today) and the UK government (delivers in weeks, if not months).

“People know what good looks like,” someone else said, echoing that frustration. That’s 2010-style optimism, from when Amazon product search yielded useful results, search engines weren’t spattered with AI slime and blanketed with ads, today’s algorithms were not yet born, and customer service still had a heartbeat. Here in 2025, we’re all coming up against rampant enshittification, with the result that the next cohort of incoming young civil servants *won’t* know any more what “good” looks like. There will be a whole new layer of necessary education.

Other comments: it’s evolution, not transformation; resistance to change and the requirement to ask permission are embedded throughout the culture; usability is still a problem; trying to change top-down only works in a large organization if it sets up an internal start-up and allows it to cannibalize the existing business; not enough technologists in most departments; the public sector doesn’t have the private sector option of deciding what to ignore; every new government has a new set of priorities. And: the public sector has no competition to push change.

One suggestion was that technological change happens in bursts – punctuated equilibrium. That sort of fits with the history of changing technological trends: computing, the Internet, the web, smartphones, the cloud. Today, that’s “AI”, which prime minister Keir Starmer announced this week he will mainline into the UK’s veins “for everything from spotting potholes to freeing up teachers to teach”.

The person who suggested “punctuated equilibrium” added: “Now is a new moment of change because of AI. It’s a new ‘GDS moment’.” This is plausible in the sense that new paradigms sometimes do bring profound change. Smartphones changed life for homeless people. On the other hand, many new paradigms don’t do much. Think voice: that was going to be a game-changer, and yet after years of loss-making voice assistants, most of us are still typing.

So is AI one of those opportunities? Many brought up generative AI’s vast consumption of energy and water and rampant inaccuracy. Starmer, like Rishi Sunak before him, seems to think AI can make Britain the envy of other major governments.

Complex systems – such as digital governance – don’t easily change the flow of information or, therefore, the flow of power. It can take longer than most civil servants’ careers. Organizations like Mydex, which seeks to up-end today’s systems to put users in control, have been at work for years now. The upcoming digital identity framework has Mydex chair Alan Mitchell optimistic that it represents a breakthrough. We’ll see.

One attendee captured this: “It doesn’t feel like the question has changed from more efficient bureaucracy to things that change lives.” Said another in response, “The technology is the easy bit.”

Illustrations: Sir Humphrey Appleby (Nigel Hawthorne), Bernard Woolley (Derek Fowlds), and Jim Hacker (Paul Eddington) arguing over cultural change in Yes, Minister.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The lost Internet

As we open 2025 it would be traditional for an Old Internet Curmudgeon to rhapsodize about the good, old days of the 1990s, when the web was open, snark flourished at sites like suck.com, no one owned social media (that is, Usenet and Internet Relay Chat), and even the spam was relatively harmless.

But that’s not the period I miss right now. By “lost” I mean the late 2000s, when we shifted from an Internet of largely unreliable opinions to an Internet full of fact-based sites you could trust. This was the period during which Wikipedia (created 2001) grew up, and OpenStreetMap (founded 2004) was born, joining earlier sites like the Internet Archive (founded 1996) and Snopes (1994). In that time, Google produced useful results, blogs flourished, and, before its decline, if you asked on Twitter for advice on where to find a post box near a point in Liverpool you’d get correct answers straight to your mobile phone.

Today, so far: I can’t get a weather app to stop showing the location I was at last week and show the location I’m at this week. Basically, the app is punishing me for not turning on location tracking. The TV remote at my friend’s house doesn’t fully work, and she doesn’t know why or how to fix it; she works around it with a second remote whose failings are complementary. No calendar app works as well as the software I had from 1995 to 2001 (it synced! without using a cloud server and third-party account!). At the supermarket, the computer checkout system locked up. It all adds up to a constant white noise of frustration.

We still have Wikipedia, OpenStreetMap, Snopes, and the Internet Archive. But this morning a Mastodon user posted that their ten-year-old says you can’t trust Google any more: “It just returns ‘a bunch of made-up stuff’.” When ten-year-olds know your knowledge product sucks…

If generative AI were a psychic we’d call what it does cold reading.

At his blog, Ed Zitron has published a magnificent, if lengthy, rant on the state of technology. “The rot economy”, he calls it, and says we’re all victims of constant low-level trauma. Most of his complaints will be familiar: the technologies we use are constantly shifting and mostly for the worse. My favorite line: “We’re not expected to work out ‘the new way to use a toilet’ every few months because somebody decided we were finishing too quickly.”

Pause to remember nostalgically 2018, when a friend observed that technology wasn’t exciting any more, and 2019, when many more people thought the Internet was no longer “fun”. Those were happy days. Now we are being overwhelmed with stuff we actively don’t want in our lives. Even hacked Christmas lights sound miserable for the neighbors.

***

I have spent some of these holidays editing a critique of Ofcom’s regulatory plans under the Online Safety Act (we all have our own ideas about holidays), and one thing seems clear: the splintering Internet is only going to get worse.

Yesterday, firing up Chrome because something didn’t work in Firefox, I saw a fleeting popup to the effect that because I may not be over 18 there are search results Google won’t show me. I don’t think age verification is in force in the Commonwealth of Pennsylvania – US states keep passing bills, but they keep hitting legal challenges.

Age verification has been “imminent” in the UK for so long – it was originally included in the Digital Economy Act 2017 – that it seems hard to believe it may actually become a reality. But: sites within the Act’s scope will have to complete an “illegal content risk assessment” by March 16. So the fleeting popup felt like a visitation from the Ghost of Christmas Future.

One reason age verification was dropped back then – aside from the distractions of Brexit – was that the mechanisms for implementing it were all badly flawed – privacy-invasive, ineffective, or both. I’m not sure they’ve improved much. In 2022, France’s data protection watchdog checked them out: “CNIL finds that such current systems are circumventable and intrusive, and calls for the implementation of more privacy-friendly models.”

I doubt Ofcom can square this circle, but the costs of trying will include security, privacy, freedom of expression, and constant technological friction. Bah, humbug.

***

Still, one thing is promising: the rise of small, independent media outlets who are doing high-quality work. Joining established efforts like nine-year-old The Ferret, ten-year-old Bristol Cable, and five-year-old Rest of World are year-and-a-half-old 404 Media and newcomer London Centric. 404 Media, formed by four journalists formerly at Vice’s Motherboard, has been consistently making a splash since its founding; this week Jason Koebler reminds us that Elon Musk’s proactive willingness to unlock the blown-up Cybertruck in Las Vegas and provide comprehensive data on where it had been, including video from charging stations, without warrant or court order, could apply to any Tesla customer at any time. Meanwhile, in its first three months London Centric’s founding journalist, Jim Waterson, has published pieces on the ongoing internal mess at Transport for London resulting from the August cyberattack and on bicycle theft in the capital. Finally, if you’re looking for high-quality American political news, veteran journalist Dan Gillmor curates it for you every day in his Cornerstone of Democracy newsletter.

The corporate business model of journalism is inarguably in trouble, but journalism continues.

Happy new year.

Illustrations: The Marx Brothers in their 1929 film, The Cocoanuts, newly released into the public domain.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Playing monopoly

If you were going to carve up today’s technology giants to create a more competitive landscape, how would you do it? This time the game’s for real. In August, US District Judge Amit Mehta ruled that “Google is a monopolist and has acted as one to maintain its monopoly.” A few weeks ago, the Department of Justice filed preliminary proposals (PDF) for remedies. These may change before the parties reassemble in court next April.

Antitrust law traditionally aimed to ensure competition in order to both create a healthy business ecosystem and better serve consumers. “Free” – that is, pay-with-data – online services have been resistant to antitrust analysis shaped by decades of using lowered prices to judge success.

It’s always tempting to think of breaking monopolists up into business units. For example, a key moment in Meta’s march to huge was its purchase of Instagram (2012) and WhatsApp (2014), turning baby competitors into giant subsidiaries. In the EU, that permission was based on a promise, which Meta later broke, not to merge the three companies’ databases. Separating them back out again to create three giant privacy-invading behemoths in place of one is more like the sorcerer’s apprentice than a win.

In the late 1990s case against Microsoft, which ended in settlement, many speculated about breaking it up into Baby Bills. The key question: create clones, or split Windows and the office software into separate companies?

In 2013, at Computerworld, Gregg Keizer asked experts to imagine the post-Microsoft-breakup world. Maybe the office software company ported its products onto the iPad. Maybe the clones eventually diverged and one came to dominate search. Keizer’s experts generally agreed, though, that the antitrust suit itself had its effects, slowing the company’s forward progress by making it fear provoking further suits, like IBM before it.

In Google’s case, the key turning point was likely the 2007-2008 acquisition of online advertising pioneer DoubleClick. Google was then ten years old and had been a public company for almost four years. At its IPO Wall Street pundits were dismissive, saying it had no customer lock-in and no business model.

Reading Google’s 2008 annual report is an exercise in nostalgia. Amid an explanation of contextual advertising, Google says it has never spent much on marketing because the quality of its products generated word-of-mouth momentum worldwide. This was all true – then.

At the time, privacy advocates opposed the DoubleClick merger. Both FTC and EU regulators raised concerns, but let it go ahead to become the heart of the advertising business Susan Wojcicki and Sheryl Sandberg built for Google. Despite growing revenues from its cloud services business, most of Google’s revenues still come from advertising.

Since then, Mehta ruled, Google cemented its dominance by paying companies like Apple, Samsung, and Verizon to make its search engine the default on the devices they make and/or sell. Further, Google’s dominance – 90% of search – allows it to charge premium rates for search ads, which in turn enhances its financial advantage. OK, one of those complaining competitors is Microsoft, but others are relative minnows like 15-year-old DuckDuckGo, which competes on privacy, buys TV ads, and hasn’t cracked 1% of the search market. Even Microsoft’s Bing, at number two, has less than 4%. Google can insist that it’s just that good, but complaints that its search results are degrading are everywhere.

Three aspects of the DoJ’s proposals seized the most attention: first, forcing Google to divest itself of the Chrome browser; second, if that’s not enough, forcing it to divest the Android mobile operating system; and third, a block on paying other companies to make Google search the default. The latter risks crippling Mozilla and Firefox, and would dent Apple’s revenues, but not really harm Google. Saving $26.3 billion (2021 number) can’t be *all* bad.

At The Verge, Lauren Feiner summarizes the DoJ’s proposals. At the Guardian, Dan Milmo notes that the DoJ also wants Google to be barred from buying or investing in search rivals, query-based AI, or adtech – no more DoubleClicks.

At Google’s blog, chief legal officer Kent Walker calls the proposals “a radical interventionist agenda”. He adds that it would chill Google’s investment in AI, as if that were a bad thing, when – hello! – a goal is ensuring a competitive market in future technologies. (It could even be a good thing generally.)

Finally, Walker claims divesting Chrome and/or Android would endanger users’ security and privacy and frets that it would expose Americans’ personal search queries to “unknown foreign and domestic companies”. Adapting a line from the 1980 movie Hopscotch, “You mean, Google’s methods of tracking are more humane than the others?” While relaying DuckDuckGo’s senior vice-president’s similar reaction, Ars Technica’s Ashley Belanger dubs the proposals “Google’s nightmare”.

At Techdirt, Mike Masnick favors DuckDuckGo’s idea of forcing Google to provide access to its search results via an API so competitors can build services on top, as DuckDuckGo does with Bing. Masnick wants users to become custodians and exploiters of their own search histories. Finally, at Pluralistic, Cory Doctorow likes spinning out – not selling – Chrome. End adtech surveillance, he writes, don’t democratize it.

It’s too early to know what the DoJ will finally recommend. If nothing is done, however, Google will be too rich to fear future lawsuits.

Illustration: Mickey Mouse as the sorcerer’s apprentice in Fantasia (1940).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Supremacy

Supremacy: AI, ChatGPT, and the Race That Will Change the World
By Parmy Olson
Macmillan Business
ISBN: 978-1035038220

One of the most famous books about the process of writing software is Frederick Brooks’ The Mythical Man-Month. The essay that gives the book its title makes the point that you cannot speed up the process by throwing more and more people at it. The more people you have, the more they all have to communicate with each other, and the pathways multiply quadratically. Think of it this way: 500 people can’t read a book faster than five people can.
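Brooks’ arithmetic is easy to make concrete: n people have n(n-1)/2 pairwise communication channels. A quick sketch (mine, not the book’s):

```python
# Pairwise communication channels among n people: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

print(channels(5))     # 10
print(channels(200))   # 19,900
print(channels(5000))  # 12,497,500
```

A team of 5,000 has more than 600 times as many channels to maintain as a team of 200.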

Brooks’ warning immediately springs to mind when Parmy Olson reports, late in her new book, Supremacy, that Microsoft CEO Satya Nadella was furious to discover that Microsoft’s 5,000 direct employees working on AI lagged well behind the rapid advances being made by the fewer than 200 working at OpenAI. Some things just aren’t improved by parallel processing.

The story Olson tells is a sad one: two guys, both eager to develop an artificial general intelligence in order to save, or at least help, humanity, who both wind up working for large commercial companies whose primary interests are to 1) make money and 2) outcompete the other guy. For Demis Hassabis, the company was Google, which bought his DeepMind startup in 2014. For Sam Altman, founder of OpenAI, it was Microsoft. Which fits: Hassabis’s approach to “solving AI” was to let machines teach themselves by playing games, hoping to drive scientific discovery; Altman sought to solve real-world problems and bring everyone wealth. Too late for Olson’s book, Hassabis has achieved enough of his dream to have been one of three awarded the 2024 Nobel Prize in chemistry for using AI to predict how proteins will fold.

For both, the reason was the same: the resources required to work in AI – data, computing power, and high-priced personnel – are too expensive for either traditional startup venture capital funding or academia. (Cue Vladan Joler, at this year’s Computers, Privacy, and Data Protection, noting that AI is arriving “pre-monopolized”.) As Olson tells the story, both tried repeatedly to keep the companies they founded independent. Yet both have wound up positioned to run the companies whose money they took, apparently believing, like many geek founders with more IQ points than sense, that they would not have to give up control.

In comparing and contrasting the two founders, Olson shows where many of today’s problems came from. Allying themselves with Big Tech meant giving up on transparency. The ethicists who are calling out these companies over real and present harms caused by the tools they’ve built, such as bias, discrimination, and exploitation of workers performing tasks like labeling data, have 1% or less of the funding of those pushing safety for superintelligences that may never exist.

Olson does a good job of explaining the technical advances that led to the breakthroughs of recent years, as well as the business and staff realities of their different paths. A key point she pulls out is the extent to which both Google and Microsoft have become the kind of risk-averse, slow-moving, sclerotic company they despised when they were small, nimble newcomers.

Different paths, but ultimately, their story is the same: they fought the money, and the money won.

Blown

“This is a public place. Everyone has the right to be left in peace,” Jane (Vanessa Redgrave) tells Thomas (David Hemmings), whom she’s just spotted photographing her with her lover in the 1966 film Blow-Up, by Michelangelo Antonioni. The movie, set in London, proceeds as a mystery in which Thomas’s only tangible evidence is a grainy, blown-up shot of a blob that may be a murdered body.

Today, Thomas would probably be wielding a latest-model smartphone instead of a single-lens reflex film camera. He would not bother to hide behind a tree. And Jane would probably never notice, much less challenge Thomas to explain his clearly-not-illegal, though creepy, behavior. Phones and cameras are everywhere. If you want to meet a lover and be sure no one’s photographing you, you don’t go to a public park, even one as empty as the film finds Maryon Park. Today’s 20-somethings grew up with that reality, and learned early to agree that some gatherings are no-photography zones.

Even in the 1960s individuals had cameras, but taking high-quality images at a distance was the province of a small minority of experts; Antonioni’s photographer was a professional with his own darkroom and enlarging equipment. The first CCTV cameras went up in the 1960s; their proliferation became a public policy issue in the 1980s, and was propagandized as “for your safety” without much thought in the post-9/11 2000s. In the late 2010s, CCTV surveillance became democratized: my neighbor’s Ring camera means no one can leave an anonymous gift on their doorstep – or (without my consent) mine.

I suspect one reason we became largely complacent about ubiquitous cameras is that the images mostly remained unidentifiable, or at least unidentified. Facial recognition – especially the live variant police seem to feel they have the right to set up at will – is changing all that. Which all leads to this week, when Joseph Cox at 404 Media reports ($) (and Ars Technica summarizes) that two Harvard students have mashed up a pair of unremarkable $300 Meta Ray-Bans with the reverse image search service PimEyes and a large language model to produce I-XRAY, an app that identifies in near-real time most of the people they pass on the street, including their name, home address, and phone number.

The students – AnhPhu Nguyen and Caine Ardayfio – are smart enough to realize the implications, imagining for Cox the scenario of a random male spotting a young woman and following her home. This news is breaking the same week that the San Francisco Standard and others are reporting that two men in San Francisco stood in front of a driverless Waymo taxi to block it from proceeding while demanding that the female passenger inside give them her phone number (we used to give such males the local phone number for time and temperature).

Nguyen and Ardayfio aren’t releasing the code they’ve written, but what two people can do, others with fewer ethics can recreate independently, as 30 years of Black Hat and Def Con have proved. This is a new level of democratized surveillance. Today, giant databases like Clearview AI are largely only accessible to governments and law enforcement. But the data in them has been scraped from the web, like LLMs’ training data, and merged with commercial sources.

This latest prospective threat to privacy has been created by the marriage of three technologies that were developed separately by different actors without regard to one another and, more important, without imagining how one might magnify the privacy risks of the others. A connected car with cameras could also run I-XRAY.

The San Francisco story is a good argument against allowing cars on the roads without steering wheels, pedals, and other controls or *something* to allow a passenger to take charge to protect their own safety. In Manhattan cars waiting at certain traffic lights often used to be approached by people who would wash the windshield and demand payment. Experienced drivers knew to hang back at red lights so they could roll forward past the oncoming would-be washer. How would you do this in a driverless car with no controls?

We’ve long known that people will prank autonomous cars. Coverage focused on the safety of the *cars* and the people and vehicles surrounding them, not the passengers. Calling a remote technical support line for help is never going to get a good enough response.

What ties these two cases together – besides (potentially) providing new ways to harass women – is the collision between new technologies and human nature. Plus, the merger of three decades’ worth of piled-up data and software that can make things happen in the physical world.

Arguably, we should have seen this coming, but the manufacturers of new technology have never been good at predicting what weird things their users will find to do with it. This mattered less when the worst outcome was using spreadsheet software to write letters. Today, that sort of imaginative failure is happening at scale in software that controls physical objects and penetrates the physical world. The risks are vastly greater and far more unsettling. It’s not that we can’t see the forest for the trees; it’s that we can’t see the potential for trees to aggregate into a forest.

Illustrations: Jane (Vanessa Redgrave) and her lover, being photographed by Thomas (David Hemmings) in Michelangelo Antonioni’s 1966 film, Blow-Up.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

This perfect day

To anyone remembering the excitement over DNA testing just a few years ago, this week’s news about 23andMe comes as a surprise. At CNN, Allison Morrow reports that all seven board members have resigned to protest CEO Anne Wojcicki’s plan to take the company private by buying up all the shares she doesn’t already own at 40 cents each (closing price yesterday was $0.3301). The board wanted her to find a buyer offering a better price.

In January, Rolfe Winkler reported at the Wall Street Journal ($) that 23andMe is likely to run out of cash by next year. Its market cap has dropped from $6 billion to under $200 million. He and Morrow catalogue the company’s problems: it’s never made a profit nor had a sustainable business model.

The reasons are fairly simple: few repeat customers. With DNA testing, as Winkler writes, “Customers only need to take the test once, and few test-takers get life-altering health results.” 23andMe’s mooted revolution in health care was instead a fad. Now, the company is pivoting to sell subscriptions to weight loss drugs.

This strikes me as an extraordinarily dangerous moment: the struggling company’s sole unique asset is a pile of more than 10 million DNA samples whose owners have agreed they can be used for research. Many were alarmed when, in December 2023, hackers broke into 1.7 million accounts and gained access to 6.9 million customer profiles. The company said the hacked data did not include DNA records but did include family trees and other links. We don’t think of 23andMe as a social network. But the same affordances that enabled Cambridge Analytica to leverage a relatively small number of user profiles to create a mass of data derived from a much larger number of their Friends worked on 23andMe. Given the way genetics works, this risk should have been obvious.
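The leverage is stark even in back-of-the-envelope terms, taking the breach figures above at face value:

```python
# Rough arithmetic on the breach figures reported above; illustration only.
hacked_accounts = 1_700_000
profiles_reached = 6_900_000

# Relative-matching features meant each broken account exposed several
# linked profiles, roughly a 4x amplification.
print(f"{profiles_reached / hacked_accounts:.1f}x amplification")  # 4.1x
```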

In 2004, the year of Facebook’s birth, the Australian privacy campaigner Roger Clarke warned in Very Black “Little Black Books” that social networks had no business model other than to abuse their users’ data. 23andMe’s terms and conditions promise to protect user privacy. But in a sale what happens to the data?

The same might be asked about the data that would accrue from Oracle co-founder Larry Ellison’s surveillance-embracing proposals this week. Inevitably, commentators invoked George Orwell’s 1984. At Business Insider, Kenneth Niemeyer was first to report: “[Ellison] said AI will usher in a new era of surveillance that he gleefully said will ensure ‘citizens will be on their best behavior’.”

The all-AI-surveillance all-the-time idea could only be embraced “gleefully” by someone who doesn’t believe it will affect him.

Niemeyer:

“Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

“We’re going to have supervision,” Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”

Ellison is twenty-six years behind science fiction author David Brin, who proposed radical transparency in his 1998 non-fiction outing, The Transparent Society. But Brin saw reciprocity as an essential feature, believing it would protect privacy by making surveillance visible. Ellison is claiming that *inscrutable* surveillance will guarantee good behavior.

At 404 Media, Jason Koebler debunks Ellison point by point. Research and other evidence shows securing schools is unlikely to make them safer; body cameras don’t appear to improve police behavior; and all the technologies Ellison talks about have problems with accuracy and false positives. Indeed, the mayor of Chicago wants to end the city’s contract with ShotSpotter (now SoundThinking), saying it’s expensive and doesn’t cut crime; some research says it slows police 911 response. Worth noting Simon Spichak at Brain Facts, who reports that humans make worse decisions when using AI tools. So…not a good idea for police.

More disturbing is Koebler’s main point: most of the technology Ellison calls “future” is already here and failing to lower crime rates or solve its causes – while being very expensive. Ellison is already out of date.

The book Ellison’s fantasy evokes for me is the less-known This Perfect Day, by Ira Levin, written in 1970. The novel’s world is run by a massive computer (“Unicomp”) that decides all aspects of individuals’ lives: their job, spouse, how many children they can have. Enforcing all this are human counselors and permanent electronic bracelets individuals touch to ubiquitous scanners for permission.

Homogeneity rules: everyone is mixed race, there are only four boys’ and four girls’ names, they eat “totalcakes”, drink cokes, and wear identical clothing. For the rest, regularly administered drugs keep everyone healthy and docile. “Fight” is an abominable curse word. The controlled world over which Unicomp presides is therefore almost entirely benign: there is no war, no crime, and little disease. It rains only at night.

Naturally, the novel’s hero rebels, joins a group of outcasts (“the Incurables”), and finds his way to the secret underground luxury bunker where a few “Programmers” help Unicomp’s inventor, Wei Li Chun, run the world to his specification. So to me, Ellison’s plan is all about installing himself as world ruler. Which, I mean, who could object except other billionaires?

Illustrations: The CCTV camera on George Orwell’s Portobello Road house.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

A three-hour tour

It should be easy for the UK’s Competition and Markets Authority to shut down the proposed merger of Vodafone and Three, two of the UK’s four major mobile network providers. Remaining as competition post-merger would be EE (owned by BT) and Virgin Media O2 (owned by the Spanish company Telefónica and the US-listed company Liberty Global).

The trade union Unite is correctly calling out the likely consequences: higher prices, fewer choices, job losses, and poorer customer service. In response, Vodafone and Three are dangling a shiny object of temptation: investment in building a 5G network.

Well, hogwash. I would say “Don’t do this” even if I weren’t a Three customer (who left Vodafone years ago). Let them agree to collaborate on building a shared network and compete on quality and services, but not merge. See the US broadband market, where prices are high, speeds are low, and frustrated consumers rarely have more than one option, and take heed.

***

It’s a relief to see some sanity arriving around generative AI. As a glance at the archives will show, I’ve never been a fan; last year Jon Crowcroft and I predicted the eventual demise of large language models due to model collapse. Now, David Gray Widder and Mar Hicks warn in a paper that although the generative AI bubble is deflating, its damage will persist: “…carbon can’t be put back in the ground, workers continue to need to fend off AI’s disciplining effects, and the poisonous effect on our information commons will be hard to undo.”

This week offers worked examples. Re disinformation, at The Verge Sarah Jeong describes how our relationship with photographs is changing now that new smartphones can fake realistic images. At The Register, Dan Robinson reports that data centers and AI are causing a substantial rise in water use in the US state of Virginia.

As evidence of the deflating bubble, Widder and Hicks cite the recent Goldman Sachs report arguing that generative AI is unlikely ever to pay back its investment.

And yet: to exploit generative AI, companies and governments are reversing or delaying programs to lower carbon emissions. Also alarmingly, Widder and Hicks wonder if generative AI was always meant to fail and its promoters’ only real goals were to scoop up profits and use the inevitability narrative to make generative AI a vector for embedding infrastructural dependencies (for example, on cloud computing).

That outcome doesn’t have to have been a plan – or a conspiracy – just as robber barons don’t actually need to conspire in order to serve each other’s interests. It could just as well be a circumstances-led pivot. But companies that have put money into generative AI will want to scrounge whatever return they can get. So the idea that we will be left with infrastructure that’s a poor fit for our actual needs is a disturbing – and entirely possible – outcome.

***

It’s fascinating – and an example of how you never know where new technologies will lead – to learn that people are using DNA testing to prove they qualify for citizenship in other countries such as Ireland, where a single Irish grandparent will get you in. In some cases, such as the children of unmarried Irish women who were transported to England, this use of DNA testing rights historic wrongs. For others, it opens new opportunities such as the right to live in the EU. Unfortunately, it’s easy to imagine that in countries where citizenship by birthright is a talking point for the right wing, this type of DNA testing could be mooted as a requirement. I’d like to think that rounding up babies for deportation is beyond even the most bigoted authoritarians, but…

***

The controversial British technology entrepreneur Mike Lynch has died a billionaire’s death; his superyacht sank in a waterspout off the coast of Sicily. I interviewed him for Salon in 2000, when he was newly Britain’s first software billionaire. It was the first time I heard of the theorem developed by Thomas Bayes, an 18th-century minister and mathematician, which now is everywhere, and for a long time afterwards I wasn’t certain I’d correctly understood his comments about perception and philosophy. This was exacerbated by early experience with his software in 1996, when it was still a consumer desktop search product fronted by an annoying cartoon dog – I thought it unusably slow compared to pre-Google search engines. By 2000, Autonomy had pivoted to enterprise software, which seemed a better fit.

In 2011, Sharon Bertsch McGrayne’s book, The Theory That Would Not Die, explained things more clearly. That year, Lynch hit a business peak by selling Autonomy to Hewlett-Packard for $11 billion. A year later, he left HP and set up Invoke Capital to invest in companies with fundamental technology ideas that scale.

Soon afterwards, HP wrote down $8.8 billion and accused Lynch of accounting fraud. The last 12 years of his life were spent in courtrooms: first a UK civil case, decided for HP in 2022, which Lynch was appealing, then a fight against extradition, and finally a criminal trial in the US, where former Autonomy CFO Sushovan Hussain had already been sent to jail for five years. Lynch’s fatal yacht trip was to celebrate his acquittal.

Illustrations: A Customs and Border Protection scientist reads a DNA profile to determine the origin of a commodity (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Twenty comedians walk into a bar…

The Internet was, famously, created to withstand a bomb outage. In 1998 Matt Blaze and Steve Bellovin said it, in 2002 it was still true, and it remains true today, after 50 years of development: there are more efficient ways to kill the Internet than dropping a bomb.

Take today. The cybersecurity company CrowdStrike pushed out a buggy update, and half the world is down. Airports, businesses, the NHS appointment booking system, supermarkets, the UK’s train companies, retailers…all showing the Blue Screen of Death. Can we say “central points of failure”? Because there are two: CrowdStrike, whose cybersecurity software is widespread, and Microsoft, whose Windows operating system is everywhere.

Note this hasn’t killed the *Internet*. It’s temporarily killed many systems *connected to* the Internet. But if you’re stuck in an airport where nothing’s working and confronted with a sign that says “Cash only” when you only have cards…well, at least you can go online to read the news.

The fix will be slow, because it involves starting the computer in safe mode and manually deleting files. Like Y2K remediation, one computer at a time.
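For those facing that slog, the widely reported remediation amounts to booting into Safe Mode and deleting the faulty channel file. A minimal sketch of the idea, assuming the reported path and file pattern; anyone actually doing this should follow CrowdStrike’s own guidance:

```python
# Sketch of the reported manual fix; run from Safe Mode, where the crashing
# driver hasn't loaded. Path and file pattern are as widely reported, not verified here.
from pathlib import Path

driver_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")  # default install location

for f in driver_dir.glob("C-00000291*.sys"):
    print("deleting", f)
    f.unlink()
```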

***

Speaking of things that don’t work, three bits from the generative AI bubble. First, last week Goldman Sachs issued a scathing report on generative AI that concluded it is unlikely to ever repay the trillion-odd dollars companies are spending on it, while its energy demands could outstrip available supply. Conclusion: generative AI is a bubble that could nonetheless take a long time to burst.

Second, at 404 Media Emanuel Maiberg reads a report from the Tony Blair Institute that estimates that 40% of tasks performed by public sector workers could be partially automated. Blair himself compares generative AI to the industrial revolution. This comparison is more accurate than he may realize, since the industrial revolution brought climate change, and generative AI pours accelerant on it.

TBI’s estimate conflicts with that provided to Goldman by MIT economist Daron Acemoglu, who believes that AI will impact at most 4.6% of tasks in the next ten years. The source of TBI’s estimate? ChatGPT itself. It’s learned self-promotion from parsing our output?

Finally, in a study presented at ACM FAccT, four DeepMind researchers had 20 comedians who do live shows and use AI participate in workshops using large language models to help write jokes, then interviewed them. “Most participants felt the LLMs did not succeed as a creativity support tool, by producing bland and biased comedy tropes, akin to ‘cruise ship comedy material from the 1950s, but a bit less racist’.” Last year, Julie Seabaugh at the LA Times interviewed 13 professional comedians and got similar responses. Ahmed Ahmed compared AI-generated comedy to eating processed foods and said, crucially, that it “lacks timing”.

***

Blair, who spent his 1997-2007 premiership pushing ID cards into law, has also been trying to revive this long-held obsession. Two days after Keir Starmer took office, Blair published a letter in the Sunday Times calling for its return. As has been true throughout the history of ID cards (PDF), every new revival presents it as a solution to a different problem. Blair’s 2024 reason is to control immigration (and keep the far-right Reform party at bay). Previously: prevent benefit fraud, combat terrorism, streamline access to health, education, and other government services (“the entitlement card”), prevent health tourism.

Starmer promptly shot Blair down: “not part of the government’s plans”. This week Alan West, a Home Office minister from 2007 to 2010 under Gordon Brown, followed up with a letter to the Guardian calling for ID cards because they would “enhance national security in the areas of terrorism, immigration and policing; facilitate access to online government services for the less well-off; help to stop identity theft; and facilitate international travel”.

Neither Blair (born 1953) nor West (born 1948) seems to realize how old and out of touch they sound. Even back then, the “card” was an obvious decoy. Given pervasive online access, a handheld reader, and the database, anyone’s identity could be checked anywhere at any time with no “card” required.

To sound modern they should call for institutionalizing live facial recognition, which is *already happening* by police fiat. Or sprinkle some AI bubble on their ID database.

Databases and giant IT projects that failed – like the Post Office scandal – that was the 1990s way! We’ve moved on, even if they haven’t.

***

If you are not a deposed Conservative, Britain this week is like waking up sequentially from a series of nightmares. Yesterday, Keir Starmer definitively ruled out leaving the European Convention on Human Rights – Starmer’s background as a human rights lawyer to the fore. It’s a relief to hear after 14 years of Tory ministers – David Cameron, Boris Johnson, Suella Braverman, Liz Truss, Rishi Sunak – whining that human rights law gets in the way of their heart’s desires. Like: building a DNA database, deporting refugees or sending them to Rwanda, a plan to turn back migrants in boats at sea.

Principles have to be supported in law; under the last government’s Public Order Act 2023 curbing “disruptive protest”, yesterday five Just Stop Oil protesters were jailed for four and five years. Still, for that brief moment it was all The Brotherhood of Man.

Illustrations: Windows’ Blue Screen of Death (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.