Review: Careless People

Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism
By Sarah Wynn-Williams
Macmillan
ISBN: 978-1035065929

In his 2021 book Social Warming, Charles Arthur concludes his study of social media with the observation that the many harms he documented happened because no one cared to stop them. “Nobody meant for this to happen,” he writes to open his final chapter.

In her new book, Careless People, about her time at Facebook, former New Zealand diplomat Sarah Wynn-Williams shows the truth of Arthur’s take. A sad tale of girl-meets-company, girl-loses-company, girl-tells-her-story, it starts with Wynn-Williams stalking Facebook to identify the right person to pitch hiring her to build its international diplomatic relationships. I kept hoping increasing dissent and disillusion would lead her to quit. Instead, she stays until she’s fired after HR dismisses her complaint of sexual harassment.

In 2011, when Wynn-Williams landed her dream job, Facebook’s wild expansion was at an early stage. CEO Mark Zuckerberg is awkward, sweaty, and uncomfortable around world leaders, who are dismissive. By her departure in 2017, presidents of major countries want selfies with him and he’s much more comfortable – but no longer cares. Meanwhile, then-Chief Operating Officer Sheryl Sandberg, wealthy from her time at Google, becomes a celebrity via her book, Lean In, written with the former TV comedy writer Nell Scovell. Sandberg’s public feminism clashes with her employee’s experience. When Wynn-Williams’s first child is a year old, a fellow female employee congratulates her on keeping the child so well-hidden she didn’t know it existed.

The book provides hysterically surreal examples of American corporatism. She is in the delivery room, feet in stirrups, ordered to push, when a text arrives: can she draft talking points for Davos? (She tries!) For an Asian trip, Zuckerberg wants her to arrange a riot or peace rally so he can appear to be “gently mobbed”. When the company fears “Mark” or “Sheryl” might be arrested if they travel to Korea, managers try to identify a “body” who can be sent in as a canary. Wynn-Williams’s husband has to stop her from going. Elsewhere, she uses her diplomatic training to land Zuckerberg a “longer-than-normal handshake” with Xi Jinping.

So when you get to her failure to get her bosses to beef up the two-person content moderation team for Myanmar’s 60 million people, fix the software so Burmese characters render correctly, and post country-specific policies, it’s obvious what her bosses will decide. The same is true of internal meetings discussing the tools later revealed to let advertisers target depressed teens. Wynn-Williams hopes for a safe way forward, but warns that company executives’ “lethal carelessness” hasn’t changed.

Cultural clash permeates this book. As a New Zealander, she’s acutely conscious of the attitudes she encounters, and especially of the wealth and class disparity that divide the early employees from later hires. As pregnancies bring serious medical problems and a second child, the very American problem of affording health insurance makes offending her bosses ever riskier.

The most important chapters, whose in-the-room tales fill in gaps in books by Frances Haugen, Sheera Frenkel and Cecilia Kang, and Steven Levy, are those in which Wynn-Williams recounts the company’s decision to embrace politics and build its business in China. If, her bosses reason, politicians become dependent on Facebook for electoral success, they will balk at regulating it. Donald Trump’s 2016 election, which Zuckerberg initially denied had been significantly aided by Facebook, awakened these political aspirations. Meanwhile, Zuckerberg leads the company to build a censorship machine to please China. Wynn-Williams abhors all this – and refuses to work on China. Nonetheless, she holds onto the hope that she can change the company from inside.

Apparently having learned little from Internet history, Meta has turned this book into a bestseller by trying to suppress it. Wynn-Williams managed one interview, with Business Insider, before an arbitrator’s injunction stopped her from promoting the book or making any “disparaging, critical or otherwise detrimental comments” related to Meta. This fits the man Wynn-Williams depicts, who hates to lose so much that his employees let him win at board games.

The risks of recklessness

In 1997, when the Internet was young and many fields were still an unbroken green, the United States Institute of Peace convened a conference on virtual diplomacy. In my writeup for the Telegraph, I saw that organizer Bob Schmitt had convened two communities – computer and diplomacy – who were both wondering how they could get the other to collaborate but had no common ground.

On balance, the computer folks, who saw a potential market as well as a chance to do some good, were probably more eager than the diplomats, who favored caution and understood that in their discipline speed was often a bad idea. They were also less attracted than one might think to the notion of virtual meetings, despite the travel those would save. Sometimes, one told me, it’s the random conversations around the water cooler that make plain what’s really happening. Why is Brazil mad? In a virtual meeting, it may be harder to find out that it’s not the negotiations but the fact that their soccer team lost last night.

I thought at the time that the conference would be the first of many to tackle these issues. But as it’s turned out, I’ve never been at an event anything like it…until now, nearly 30 years later. This week, a group of diplomats and human rights advocates met, similarly, to consider how the cyber world is changing diplomacy and international relations.

The timing is unexpectedly fortuitous. This week’s revelation that someone added Atlantic editor-in-chief Jeffrey Goldberg to a Signal chat in which US cabinet officials discussed plans for an imminent military operation in Yemen shows the kinds of problems you get when you rely too much on computer mediation. In the usual setting, a Sensitive Compartmented Information Facility (SCIF), you can see exactly who’s there, and communications to anyone outside that room are entirely blocked. As a security clearance-carrying friend of mine said, if he’d made such a blunder he’d be in prison.

The Signal blunder was raised by almost every speaker. It highlights something diplomats think about a lot: who is or is not in the room. Today, as in 1997, behavioral cues are important; one diplomat estimated that meeting virtually costs you 50% to 60% of the communication you have when meeting face-to-face. There are benefits, too, of course, such as opening side channels to remote others who can advise on specific questions, or the ability to assemble a virtual team a small country could never afford to send in person.

These concerns have not changed since 1997. But it’s clear that today’s diplomats feel they have less choice about what new technology gets deployed and how than they did then, when the most significant new technology preceding the Internet was the global omnipresence of the news network CNN, founded in 1980. Now, much of what control they had then is disappearing, both because human behavior overrides their careful, rulebound, friction-filled diplomatic channels and processes via shadow IT, and because the biggest technology companies own so much of what we call “public” infrastructure.

Another key difference: many people no longer see the need to learn facts; that’s a particular problem for diplomats, who rely on historical data to show the world they aspire to build. And another: today a vastly wider array of actors, from private companies to individuals and groups of individuals, can create world events. And finally: in 1997 multinational companies were already challenging the hegemony of governments, but they were not yet richer and more powerful than countries.

Cue for a horror thought: what if Big Tech, which is increasingly interested in military markets, and whose products are increasingly embedded at the heart of governments, decides that peace is bad for business? Already these companies are allying with politicians to resist human rights principles, most notably privacy.

Which cues another 1997 memory: Nicholas Negroponte absurdly saying that the Internet would bring world peace by breaking down national borders. In 20 years, he said (that would be eight years ago), children would not know what nationalism is. Instead, on top of all today’s wars and internal conflicts, we’re getting virtual infrastructure attacks more powerful than bullets, and proactive agents powered by large language models. And all of it fueled by the performative-outrage style of social media, which is becoming just how people speak, publicly and privately.

All this is more salient when you listen to diplomats and human rights activists as they are the ones who see up close the human lives lost. Meta’s name comes up most often, as in Myanmar and Ethiopia.

The mood was especially touchy because a couple of weeks ago a New Zealand diplomat was recalled after questioning US president Donald Trump’s understanding of history during a public panel in London – held, ironically, at Chatham House under the Chatham House rule.

“You say the wrong thing on the wrong platform at the wrong time, and your career is gone,” one observed. Their people perimeter is gone, as it has been for so many of us for a decade or more. But more than most people, diplomats who don’t have trust have nothing. And so: “We’re in a time when a single message can up-end relationships.”

No surprise, then, that the last words reflected 1997’s conclusion: “Diplomacy is still a contact sport.”

Illustrations: Internet meme rewriting Wikipedia’s Alice and Bob page explaining man-in-the-middle attacks with the names Hegseth, Waltz, and Goldberg, referencing the Signal snafu.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Dorothy Parker was wrong

Goldie Hawn squinted into the lights. “I can’t read that,” she said to her co-presenter. “Cataracts.”

It was the 2025 Academy Awards. She was wearing a pale gold gown, and her hair and makeup did their best to evoke the look she’s had ever since she became a star in the 1960s. She is, in fact, 79. But Hollywood 79. Except for the cataracts. I know people who cheered when she said that bit of honesty about her own aging.

Doubtless soon Hawn will join the probably hundreds of millions who’ve had cataract surgery, and at her next awards outing she’ll be able to read the Teleprompter just fine. Because, let’s face it, although the idea of the surgery is scary and although the tabloids painted Hawn’s “condition” as “tragic”, if you’re going to have something wrong with you at 79, cataracts are the least worst. They’re not life-threatening. There’s a good, thoroughly tested treatment that takes less than half an hour. Recovery is short (a few weeks). Side effects, immediate or ongoing, are rare and generally correctable. Treatment vastly improves your quality of life and keeps you independent. Even delaying treatment is largely benign: the cataract may harden and become more complicated to remove, but doesn’t do permanent damage.

Just don’t see the 1929 short experimental film Un Chien Andalou when you’re 18. That famous opening scene with the razor and the eyeball squicks out *everybody*. Thank you, Luis Bunuel and Salvador Dali.

I have cataracts. But: I also have a superpower. Like lots of people with extreme myopia, even at 71 I can read the smallest paragraph on the Jaeger eye test in medium-low lighting conditions. I have to hold it four and a half inches from my face, but close-up has always been the only truly reliable part of my vision.

Eye doctors have a clear, shared understanding of what constitutes normal vision, which involves not needing glasses to see at a distance and needing reading glasses around the time you turn 40. So when it comes time for cataract surgery they see it as an opportunity to give you the vision that normal people have.

In the entertainment world, this attitude was neatly summed up in 1926 by the famously acerbic New Yorker writer Dorothy Parker: “Men seldom make passes at girls who wear glasses.” It’s nonsense. Women who wear glasses know it’s nonsense. There was even a movie – How to Marry a Millionaire (1953) – which tackled this silliness by having Marilyn Monroe’s Pola wander around bumping into walls and getting onto the wrong planes until she meets Freddie (David Wayne), who tells her to put her glasses on and that he thinks she looks better wearing them. Of course she does. Restoring the ability to see in focus removes the blank cluelessness from her face.

“They should put on your tombstone ‘She loved myopia’,” joked the technician drawing up a specification for the lens they were going to implant. We all laughed. But it’s incorrect, since what I love is not myopia but the intimate feeling of knowing I can read absolutely anything in most lighting conditions.

But kudos: whatever their preferences, they are doing their best to accommodate mine – all credit to the NHS and Moorfields. The first eye has healed quickly, and while the full outcome is still uncertain (it’s too soon) the results look promising.

So, some pointers, culled by asking widely what people wished they’d known beforehand or asked their surgeon.

– Get a diving mask or swimming goggles to wear in the shower because for the first couple of weeks they don’t want all that water (or soap) to get in your eye. (This was the best tip I got, from my local postmaster.)

– A microwaveable heated mask, which I didn’t try, might help if you’re in discomfort (but ask your doctor).

– Plan to feel frustrated for the first week because your body feels fine but you aren’t supposed to do anything strenuous that might raise the pressure in your eye and disrupt its healing. Don’t do sports, don’t lift weights, don’t power walk, don’t bend over with your eyes below your waist, and avoid cooking or anything else that might irritate your eyes and tempt you to scratch or apply pressure. The bright side: you can squat to reach things. And you can walk gently.

– When you ask people what they wish they’d known, many will say “How easy it was” and “I wish I’d done it years earlier”. In your panicked pre-surgery state, this is not helpful. It is true that the operation didn’t hurt (surgeons are attentive to this, because they don’t want you to twitch). It is true that the lights shining on your eye block sight of what they’re doing. I saw a lot of magenta and blue lights. I heard machine sounds, which my surgeon kindly explained as part of fulfilling my request to talk me through it. Some liquid dripped into my hair.

– Take the time you need to prepare, because there’s no undo button.

Think of it as a very scary dental appointment.

Illustrations: Pola (Marilyn Monroe) finding out that glasses can be an asset in How to Marry a Millionaire (1953).

Lost futures

In early December, the Biden administration’s Department of Justice filed its desired remedies, having won its case that Google is a monopoly. Many foresaw a repeat of 2001, when the incoming Bush administration dropped the Clinton DoJ’s plan to break up Microsoft.

Maybe not this time. In its first filing, Trump’s DoJ still wants Google to divest itself of the Chrome browser and intends to bar it from releasing other browsers. The DoJ also wants to impose some restrictions on Android and Google’s AI investments.

At The Register, Thomas Claburn reports that Mozilla is objecting to the DoJ’s desire to bar Google from paying other companies to promote its search engine by default. Those payments, Mozilla president Mark Surman admits to Claburn, keep small independent browsers afloat.

Despite Mozilla’s market shrinkage and current user complaints, it and its fellow minority browsers remain important in keeping the web open and out of full corporate control. It’s definitely counter-productive if the court, in trying to rein in Google’s monopoly, takes away what viability these small players have left. They are us.

***

On the other hand, it’s certainly not healthy for those small independents to depend for their survival on the good will of companies like Google. The Trump administration’s defunding of – among so many things – scientific research is showing just how dangerous it can be.

Within the US itself, the government has announced cuts to indirect funding, which researchers tell me are crippling to universities: $800 million cut in grants to Johns Hopkins, $400 million at Columbia University, and so many more.

But it doesn’t stop in the US or with the cuts to USAID, which have disrupted many types of projects around the world, some of them scientific or medical research. The Trump administration is using its threats to scientific funding across the world to control speech and impose its, um, values. This morning, numerous news sources report that Australian university researchers have been sent questionnaires they must fill out to justify their US-funded grants. Among the questions: their links to China and their compliance with Trump’s gender agenda.

To be fair, using grants and foreign aid to control speech is not a new thing for US administrations. For example, Republican presidents going back to Reagan have denied funding to international groups that advocated abortion rights or provided abortions, limiting what clinicians could say to pregnant patients. (I don’t know if there are Democratic comparables.)

Science is always political to some extent: think of Galileo, condemned for stating that the earth was not the center of the universe. Or take intelligence: in his 1981 book The Mismeasure of Man, Stephen Jay Gould documented a century or more of research by white, male scientists finding that white, male scientists were the smartest things on the planet. Or take Big Tobacco and Big Oil, which spent decades covering up research showing that their products were poisoning us and our planet.

The Trump administration’s effort is, however, a vastly expanded attempt that appears to want to squash anything that disagrees with policy, and it shows the dangers of allowing any one nation to amass too much “soft power”. The consequences can come quickly and stay long. It reminds me of what happened in the UK in the immediate post-EU referendum period, when Britain-based researchers found themselves being dropped from cross-EU projects because they were “too risky”, and many left for jobs in other countries where they could do their work in peace.

The writer Prashant Vaze sometimes imagines a future in which India has become the world’s leading scientific and technical superpower. This imagined future seems more credible by the day.

***

It’s strange to read that the 35-year-old domestic robots pioneer, iRobot, may be dead in a year. It seemed like a sure thing; early robotics researchers say that people were begging for robot vacuum cleaners even in the 1960s, perhaps inspired by Rosie, The Jetsons‘ robot maid.

Many people may have forgotten (or not known) the excitement that attended the first Roombas in 2002. Owners gave them names, took them on vacation, and posted videos. It looked like the start of a huge wave.

I bought a Roomba in 2003, reviewing it so enthusiastically that an emailer complained that I should have said I had been given it by a PR person. For a few happy months it wandered around cleaning.

Then one day it stopped moving and I discovered that long hair paralyzed it. I gave it away and went back to living with moths.

The Roomba now has many competitors, some highly sophisticated, run by apps, and able to map rooms, identify untouched areas, scrub stains, and clean in corners. Even so, domestic robots have not proliferated as imagined 20 – or 12 – years ago. I visit people’s houses, and while I sometimes encounter Alexas or Google Assistants, robot vacuums seem rare.

So much else of the smart home as imagined by companies like Microsoft and IBM remains dormant. It does seem – perhaps a reflection on my social circle – that the “smart home” is just a series of remote-control apps and outsourced services. Meh.

Illustrations: Rosie, the Jetsons‘ XB-500 robot maid, circa 1962.

Unsafe

The riskiest system is the one you *think* you can trust. Take encryption: the least secure encryption is encryption that has unknown flaws, because, in the belief that your communication or data is protected, you feel it’s safe to indulge in what in other contexts would be obviously risky behavior. Think of it like an unseen hole in a condom.

This has always been the most dangerous aspect of the UK government’s insistence that its technical capability notices remain secret. Whoever alerted the Washington Post to the notice Apple received a month ago commanding it to weaken its Advanced Data Protection performed an important public service. Now, Carly Page reports at TechCrunch, based on a blog posting by security expert Alec Muffett, that the UK government is recognizing that principle by quietly removing from its web pages advice to use that same encryption – advice that was directed at people whose communications are at high risk, such as barristers and other legal professionals. Apple has since withdrawn ADP in the UK.

More important long-term, at the Financial Times, Tim Bradshaw and Lucy Fisher report that Apple has appealed the government’s order to the Investigatory Powers Tribunal. This will be, as the FT notes, the first time government powers under the Investigatory Powers Act (2016) to compel the weakening of security features will be tested in court. A ruling that the order was unlawful could be an important milestone in the seemingly interminable fight over encryption.

***

I’ve long had the habit of doing minor corrections on Wikipedia – fixing typos, improving syntax – as I find them in the ordinary course of research. But recently I have had occasion to create a couple of new pages, with the gratefully-received assistance of a highly experienced Wikipedian. At one time, I’m sure this was a matter of typing a little text, garlanding it with a few bits of code, and garnishing it with the odd reference, but standards have been rising all along, and now if you want your newly-created page to stay up it needs a cited reference for every statement of fact, at a minimum of one per sentence. My modest pages had ten to twenty references, some servicing multiple items. Embedding the page in the rest of the encyclopedia matters, too, so you need to link mentions of your subject on related pages to your new one. Even then, some review editor may come along and delete the page if they think the subject is not notable enough or violates someone’s copyright. You can appeal, of course…and fix whatever they’ve said the problem is.

It should be easier!

All of this detailed work is done by volunteers, who discuss the decisions they make in full view on the talk page associated with every content page. Studying the more detailed talk pages is a great way to understand how the encyclopedia, and knowledge in general, is curated.

Granted, Wikipedia is not perfect. Its policy on primary sources can be frustrating, and errors in cited secondary sources can be difficult to correct. The culture can be hostile if you misstep. Its coverage is uneven. But, as Margaret Talbot reports at the New Yorker and Amy Bruckman writes in her 2022 book, Should You Believe Wikipedia?, all those issues are fully documented.

Early on, Wikipedia was often the butt of complaints from people angry that this free encyclopedia made by *amateurs* threatened the sustainability of Encyclopaedia Britannica (which has survived though much changed). Today, it’s under attack by Elon Musk and the Heritage Foundation, as Lila Shroff writes at The Atlantic. The biggest danger isn’t to Wikipedia’s funding; there’s no offer anyone can make that would lead to a sale. The bigger vulnerability is the safety of individual editors. Scold they may, but as a collective they do important work to ensure that facts continue to matter.

***

Firefox users are manifesting more and more unhappiness about the direction Mozilla is taking with Firefox. The open source browser’s historic importance is outsized compared to its worldwide market share, which as of February 2025 is 2.63%, according to Statcounter. A long tail of other browsers is based on it, such as LibreWolf, Waterfox, and the privacy-protecting Tor Browser.

The latest complaint, as Liam Proven and Thomas Claburn write at The Register, is that Mozilla has removed its commitment not to sell user data from Firefox’s terms and conditions and privacy policy. Mozilla responded that the company doesn’t sell user data “in the way that most people think about ‘selling data’”, but needed to change the language because of jurisdictional variations in what the word “sell” means. Still, the promise is gone.

This follows Mozilla’s September 2024 decision, reported by Richard Speed at The Register, to turn on by default a “privacy-preserving feature” to track users that led the NGO noyb to file a complaint with the Austrian data protection authority. And a month ago, Mark Hachman reported at PC World that Mozilla is building access to third-party generative AI chatbots into Firefox, and there are reports that it’s adding “AI-powered tab grouping”.

All of these are basically unwelcome, and of all organizations Mozilla should have been able to foresee that. Go away, AI.

***

Molly White is expertly covering the Trump administration’s proposed “US Crypto Reserve”. It remains only to add Rachel Maddow, who compared it to having a strategic reserve of Beanie Babies.

Illustrations: Beanie Baby pelican.

Optioned

The UK’s public consultation on creating a copyright exception for AI model training closed on Tuesday, and it was profoundly unsatisfying.

Many, many creators and rights holders (who are usually on opposing sides when it comes to contract negotiations) have opposed the government’s proposals. Every national newspaper ran the same Make It Fair front page opposing them; musicians released a silent album. In the Guardian, the peer and independent filmmaker Beeban Kidron calls the consultation “fixed” in favor of the AI companies. Kidron’s resume includes directing Bridget Jones: The Edge of Reason (2004) and the meticulously researched 2013 study of teens online, InRealLife, and she goes on to call the government’s preferred option a “wholesale transfer of wealth from a hugely successful sector that invests hundreds of millions in the UK to a tech industry that extracts profit that is not assured and will accrue largely to the US and indeed China.”

The consultation lists four options: leave the situation as it is; require AI companies to get licenses to use copyrighted work (like everyone else has to); allow AI companies to use copyrighted works however they want; and allow AI companies to use copyrighted works but grant rights holders the right to opt out.

I don’t like any of these options. I do believe that creators will figure out how to use AI tools to produce new and valuable work. I *also* believe that rights holders will go on doing their best to use AI to displace or impoverish creators. That is already happening in journalism and voice acting, and was a factor in the 2023 Hollywood writers’ strike. AI companies have already shown that they won’t necessarily abide by arrangements that lack the force of law. The UK government acknowledged this in its consultation document, saying that “more than 50% of AI companies observe the longstanding Internet convention robots.txt.” So almost half of them *don’t*.
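For readers unfamiliar with the convention: robots.txt is just a plain-text file on a web server listing which crawlers may fetch which paths, and compliance is entirely voluntary – which is the whole point of the government’s statistic. As a minimal sketch (the bot name here is made up for illustration), Python’s standard library shows how a *compliant* crawler would check it:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt barring one (made-up) AI crawler
# while leaving the site open to everyone else.
rules = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved crawler asks before fetching; nothing in the
# protocol stops a non-compliant one from ignoring the file.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The file is advisory only: there is no technical enforcement, which is why a claim that barely half of AI companies honor it matters so much.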

At Pluralistic, Cory Doctorow argued in February 2023 that copyright won’t solve the problems facing creators. His logic is simple: after 40 years of expanding copyright terms (from a maximum of 56 years in 1975 to “author’s life plus 70” now), creators are being paid *less* than they were then. Yes, I know Taylor Swift has broken records for tour revenues and famously took back control of her own work, but millions of others need, as Doctorow writes, structural market changes. Doctorow highlights what happened with sampling: the copyright maximalists won, and now musicians are required to sign away sampling rights to their labels, who pocket the resulting royalties.

For this sort of reason, the status quo, which the consultation calls “option 0”, seems likely to open the way to lots more court cases and conflicting decisions, but provide little benefit to anyone. A licensing regime (“option 1”) will likely go the way of sampling. If you think of AI companies as inevitably giant “pre-monopolized” outfits, as Vladan Joler argued at last year’s Computers, Privacy, and Data Protection conference, “option 2” looks like simply making them richer and more powerful at the expense of everyone else in the world. But so does “option 3”, since that *also* gives AI companies the ability to use anything they want. Large rights holders will opt out and demand licensing fees, which they will keep, and small ones will struggle to exercise their rights.

As Kidron said, the government’s willingness to take chances with the country’s creators’ rights is odd, since intellectual property is a sector in which Britain really *is* a world leader. On the other hand, as Glyn Moody says, all of it together is an anthill compared to the technology sector.

None of these choices is a win for creators or the public. The government’s preferred option 3 seems unlikely to achieve its twin goals of making Britain a world leader in AI and mainlining AI into the veins of the nation, as the government put it last month.

China and the US both have complete technology stacks *and* gigantic piles of data. The UK is likely better placed than many countries to matter in AI development – see for example DeepMind, which was founded here in 2010. On the other hand, also see DeepMind for the probable future: Google bought it in 2014, and now its technology and profits belong to that giant US company.

At Walled Culture, Glyn Moody argued last May that requiring the AI companies to pay copyright industries makes no sense; he regards using creative material for training purposes as “just a matter of analysis” that should not require permission. And, he says correctly, there aren’t enough such materials anyway. Instead, he and Mike Masnick at Techdirt propose that the generative AI companies should pay creators of all types – journalists, musicians, artists, filmmakers, book authors – to provide them with material they can use to train their models, and the material so created should be placed in the public domain. In turn it could become new building blocks the public can use to produce even more new material. As a model for supporting artists, patronage is old.

I like this effort to think differently a lot better than any of the government’s options.

Illustrations: Tuesday’s papers, unprecedentedly united to oppose the government’s copyright plan.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, said remaining member Beth Williams: “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review of the EU-US Privacy Framework under Executive Order 14086 (2022) is ongoing. Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of Section 702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating the terms and conditions, they bring Meta profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the Energy and Commerce Committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country”, and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.


Isolate

Yesterday, the Global Encryption Coalition published a joint letter calling on the UK to rescind its demand that Apple undermine (“backdoor”) the end-to-end encryption on its services. The Internet Society is taking signatures until February 20.

The background: on February 7, Joseph Menn reported at the Washington Post (followed by Dominic Preston at The Verge) that in January the office of the Home Secretary sent Apple a technical capability notice under the Investigatory Powers Act (2016) ordering it to provide access to content that anyone anywhere in the world has uploaded to iCloud and encrypted with Apple’s Advanced Data Protection.

Technical capability notices are supposed to be secret. It’s a criminal offense to reveal that you’ve been sent one. Apple can’t even tell users that their data may be compromised. (This kind of thing is why people publish warrant canaries.) Menn notes that even if Apple withdraws ADP in the UK, British authorities will still demand access to encrypted data everywhere *else*. So it appears that if the Home Office doesn’t back down and Apple is unwilling to cripple its encryption, the company will either have to withdraw ADP across the world or exit the UK market entirely. At his Odds and Ends of History blog, James O’Malley calls the UK’s demand stupid, counter-productive, and unworkable. At TechRadar, Chiara Castro asks who’s next, and quotes Big Brother Watch director Silkie Carlo: “unprecedented for a government in any democracy”.

When the UK first began demanding extraterritorial jurisdiction for its interception rules, most people wondered how the country thought it would be able to impose it. That was 11 years ago; it was one of the new powers codified in the Data Retention and Investigatory Powers Act (2014) and kept in its replacement, the IPA, in 2016.

Governments haven’t changed – they’ve been trying to undermine strong encryption in the hands of the masses since 1991, when Phil Zimmermann launched PGP – but the technology has, as Graham Smith recounted at Ars Technica in 2017. Smartphones are everywhere. People store their whole lives on them, and giant technology companies encrypt both the devices themselves and the cloud backups. Government demands have changed to reflect that, shifting focus from the individual (key escrow, key lengths) to the technology provider (client-side scanning, encrypted messaging – see also the EU – and now cloud storage).

At one time, a government could install a secret wiretap by making a deal with a legacy telco. The Internet’s proliferation of communications providers changed that for a while. During the resulting panic, the US passed the Communications Assistance for Law Enforcement Act (1994), which requires Internet service providers and telecommunications companies to install wiretap-ready equipment – originally for telephone calls, later for broadband and VoIP traffic as well.

This is where the UK government’s refusal to learn from others’ mistakes is staggering. Just four months ago, the US discovered Salt Typhoon, a giant Chinese hack into its core telecommunications networks that was specifically facilitated by… CALEA. To repeat: there is no such thing as a magic hole that only “good guys” can use. If you undermine everyone’s privacy and security to facilitate law enforcement, you will get an insecure world where everyone is vulnerable. The hack has led US authorities to promote encrypted messaging.

Joseph Cox’s recent book, Dark Wire, touches on this. It’s a worked example of what law enforcement internationally can do if given open access to all the messages criminals send across a network when they think they are operating in complete safety. Yes, the results were impressive: hundreds of arrests, dozens of tons of drugs seized, masses of firearms impounded. But, Cox writes, all that success was merely a rounding error in the global drug trade. Universal loss of privacy and security versus a rounding error: it’s the definition of “disproportionate”.

It remains to be seen what Apple decides to do and whether we can trust what the company tells us. At his blog, Alec Muffett is collecting ongoing coverage of events. The Future of Privacy Forum celebrated Safer Internet Day, February 11, with an infographic showing how encryption protects children and teens.

But set aside for a moment all the usual arguments about encryption, which really haven’t changed in over 30 years because mathematical reality hasn’t.

In the wider context, Britain risks making itself a technological backwater. First, there’s the backdoored encryption demand, which threatens every encrypted service. Second, there’s the impact of the onrushing Online Safety Act, which comes into force in March. Ofcom, the regulator charged with enforcing it, is issuing thousands of pages of guidance that make it plain that only large platforms will have the resources to comply. Small sites, whether businesses, volunteer-run Fediverse instances, blogs, established communities, or web boards, will struggle even if Ofcom starts to do a better job of helping them understand their legal obligations. Many will likely either shut down or exit the UK, leaving the British Internet poorer and more isolated as a result. Ofcom seems to see this as success.

It’s not hard to predict the outcome if these laws converge in the worst possible timeline: a second Brexit, this one online.

Illustrations: T-shirt (gift from Jen Persson).


What we talk about when we talk about computers

The climax of Nathan Englander’s very funny play What We Talk About When We Talk About Anne Frank sees the four main characters play a game – the “Anne Frank game” – that two of them invented as children. The play is on at the Marylebone Theatre until February 15.

The plot: two estranged former best friends from a New York yeshiva have arranged a reunion for themselves and their husbands. Debbie (Caroline Catz) has let her religious attachment lapse in the secular environs of Miami, Florida, where her husband, Phil (Joshua Malina), is an attorney. Their college-age son, Trevor (Gabriel Howell), calls the action.

They host Hasidic Shosh (Dorothea Myer-Bennett) and Yuri (Simon Yadoo), formerly Lauren and Mark, whose lives in Israel and traditional black dress and, in Shosh’s case, hair-covering wig, have left them unprepared for the bare arms and legs of Floridians. Having spent her adult life in a cramped apartment with Yuri and their eight daughters, Shosh is astonished at the size of Debbie’s house.

They talk. They share life stories. They eat. And they fight: what is the right way to be Jewish? Trevor asks: given climate change, does it matter?

So, the Anne Frank game: who among your friends would hide you when the Nazis are coming? The rule that you must tell the truth reveals the characters’ moral and emotional cores.

I couldn’t avoid up-ending this question. There are people I trust and who I *think* would hide me, but it would often be better not to ask them. Some have exceptionally vulnerable families who can’t afford additional risk. Some I’m not sure could stand up to intensive questioning. Most have no functional hiding place. My own home offers nowhere that a searcher for stray humans wouldn’t think to look, and no opportunities to create one. With the best will in the world, I couldn’t make anyone safe, though possibly I could make them temporarily safer.

But practical considerations are not the game. The game is to think about whether you would risk your life for someone else, and why or why not. It’s a thought experiment. Debbie calls it “a game of ultimate truth”.

However, the game is also a cheat, in that the characters have full information about all parts of the story. We know the Nazis coming for the Frank family are unquestionably bent on evil, because we know the Franks’ fates when they were eventually found. It may be hard to tell the truth to your fellow players, but the game is easy to think about because it’s replete with moral clarity.

Things are fuzzier in real life, even for comparatively tiny decisions. In 2012, the late film critic Roger Ebert mulled what he would do if he were a Transportation Security Administration agent suddenly required to give intimate patdowns to airline passengers unwilling to go through the scanner. Ebert considered the conflict between moral and personal distaste and TSA officers’ need to keep their reasonably well-paid jobs with health insurance benefits. He concluded that he hoped he’d quit rather than do the patdowns. Today, such qualms are ancient history; both scanners and patdowns have become normalized.

Moral and practical clarity is exactly what’s missing as the Department of Government Efficiency arrives in US government departments and agencies to demand access to their computer systems. Their motives and plans are unclear, as is their authority for the access they’re demanding. The outcome is unknown.

So, instead of a vulnerable 13-year-old girl and her family, what if the thing under threat is a computer? Not the sentient emotional robot/AI of techie fantasy but an ordinary computer system holding boring old databases. Or putting through boring old payments. Or underpinning the boring old air traffic control system. Do you see a computer or the millions of people whose lives depend on it? How much will you risk to protect it? What are you protecting it from? Hinder, help, quit?

Meanwhile, DOGE is demanding that staff allow its young coders to attach unauthorized servers and take control of websites. Add to that mass firings, and a plan to do some sort of inside-government AI startup.

DOGE itself appears to be thinking ahead; it’s told staff to avoid Slack while awaiting a technology that won’t be subject to FOIA requests.

The more you know about computers the scarier this all is. Computer systems of the complexity and accuracy of those the US government has built over decades are not easily understood by incoming non-experts who have apparently been visited by the Knowledge Fairy. After so much time and effort on security and protecting against shadowy hackers, the biggest attack – as Mike Masnick calls it – on government systems is coming from inside the house in full view.

Even if “all” DOGE has is read-only access, as Treasury claims – though Wired and Talking Points Memo have evidence otherwise – those systems hold comprehensive sensitive information on most of the US population. Being able to read – and copy? – is plenty bad enough. In both fiction (Margaret Atwood’s The Handmaid’s Tale) and fact (IBM), computers have been used to select populations to victimize. Americans are about to find out they trusted their government more than they thought.

Illustration: Changing a tube in the early computer ENIAC (via Wikimedia).


The Gulf of Google

In 1945, the then-mayor of New York City, Fiorello La Guardia, signed a bill renaming Sixth Avenue. Eighty years later, even with street signs that include the new name, the vast majority of New Yorkers still say things like, “I’ll meet you at the southwest corner of 51st and Sixth”. You can lead a horse to Avenue of the Americas, but you can’t make him say it.

US president Donald Trump’s order renaming the Gulf of Mexico offers a rarely discussed way to splinter the Internet (at the application layer, anyway; geography matters!), and on Tuesday Google announced it would change the name for US users of its Maps app. As many have noted, this contravenes Google’s 2008 policy on naming bodies of water in Google Earth: “primary local usage”. A day later, reports came that Google had placed the US on its short list of sensitive countries – that is, ones whose rulers dispute the names and ownership of various territories: China, Russia, Israel, Saudi Arabia, Iraq.

Sharpieing a new name on a map is less brutal than invading, but it’s a game anyone can play. Seen on Mastodon: the bay, now labeled “Gulf of Fragile Masculinity”.

***

Ed Zitron has been expecting the generative AI bubble to collapse disastrously. Last week provided an “Is this it?” moment when the Chinese company DeepSeek released reasoning models that outperform the best of the west at a fraction of the cost and computing power. US stock market investors: “Let’s panic!”

The code, though not the training data, is open source, as is the relevant research. In Zitron’s analysis, the biggest loser here is OpenAI, though it didn’t seem like that to investors in other companies, especially Nvidia, whose share price dropped 17% on Tuesday alone. In an entertaining sideshow, OpenAI complains that DeepSeek stole its code – ironic given the history.

On Monday, Jon Stewart quipped that Chinese AI had taken American AI’s job. From there the countdown started until someone invoked national security.

Nvidia’s chips have been the picks and shovels of generative AI, just as they were for cryptocurrency mining. In the latter case, Nvidia’s fortunes waned when cryptocurrency prices crashed, Ethereum, among others, switched to proof of stake, and miners shifted to more efficient, lower-cost application-specific integrated circuits. All of these lowered computational needs. So it’s easy to believe the pattern is repeating with generative AI.

There are several ironies here. The first is that the potential for small language models to outshine large ones has been known since at least 2020, when Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major wrote their stochastic parrots paper. Google soon fired Gebru, who told Bloomberg this week that AI development is being driven by FOMO rather than interesting questions. Second, as an AI researcher friend points out, Hugging Face, which is trying to replicate DeepSeek’s model from scratch, said the same thing two years ago. Imagine if someone had listened.

***

A work commitment forced me to slog through Ross Douthat’s lengthy interview with Marc Andreessen at the New York Times. Tl;dr: Andreessen says Silicon Valley turned right because Democrats broke The Deal under which Silicon Valley supported liberal democracy and the Democrats didn’t regulate them. In his whiny victimhood, Andreessen has no recognition that changes in Silicon Valley’s behavior – and the scale at which it operates – are *why* Democrats’ attitudes changed. If Silicon Valley wants its Deal back, it should stop doing things that are obviously exploitive. Random case in point: Hannah Ziegler reports at the Washington Post that a $1,700 bassinet called a “Snoo” suddenly started demanding $20 per month to keep rocking a baby all night. I mean, for that kind of money I pretty much expect the bassinet to make its own breast milk.

***

Almost exactly eight years ago, Donald Trump celebrated his installation in the US presidency by issuing an executive order that risked up-ending the legal basis for data flows between the EU, which has strict data protection laws, and the US, which doesn’t. This week, he did it again.

In 2017, Executive Order 13768 dominated Computers, Privacy, and Data Protection. The deal in place at the time, Privacy Shield, survived until 2020, when it was struck down in lawyer Max Schrems’s second such case. It was replaced by the Transatlantic Data Privacy Framework, which relies on the five-member Privacy and Civil Liberties Oversight Board to oversee surveillance and, as Politico explains, handle complaints from Europeans about misuse of their data.

This week, Trump rendered the board non-operational by firing its three Democrats, leaving just one Republican member in place.*

At Techdirt, Mike Masnick warns the framework could collapse, costing Facebook, Instagram, WhatsApp, YouTube, exTwitter, and other US-based services (including Truth Social) their European customers. At his NGO, noyb, Schrems himself takes note: “This deal was always built on sand.”

Schrems adds that another Trump Executive Order gives 45 days to review and possibly scrap predecessor Joe Biden’s national security decisions, including some the framework also relies on. Few things ought to scare US – and, in a slew of new complaints, Chinese – businesses more than knowing Schrems is watching.

Illustrations: The Gulf of Mexico (NASA, via Wikimedia).

*Corrected to reflect that the three departing board members are described as Democrats, not Democrat-appointed. In fact, two of them, Ed Felten and Travis LeBlanc, were appointed by Trump in his original term.
