Dorothy Parker was wrong

Goldie Hawn squinted into the lights. “I can’t read that,” she said to her co-presenter. “Cataracts.”

It was the 2025 Academy Awards. She was wearing a pale gold gown, and her hair and makeup did their best to evoke the look she’s had ever since she became a star in the 1960s. She is, in fact, 79. But Hollywood 79. Except for the cataracts. I know people who cheered at that bit of honesty about her own aging.

Doubtless soon Hawn will join the probably hundreds of millions who’ve had cataract surgery, and at her next awards outing she’ll be able to read the Teleprompter just fine. Because, let’s face it, although the idea of the surgery is scary and although the tabloids painted Hawn’s “condition” as “tragic”, if you’re going to have something wrong with you at 79, cataracts are the least worst. They’re not life-threatening. There’s a good, thoroughly tested treatment that takes less than half an hour. Recovery is short (a few weeks). Side effects, immediate or ongoing, are rare and generally correctable. Treatment vastly improves your quality of life and keeps you independent. Even delaying treatment is largely benign: the cataract may harden and become more complicated to remove, but doesn’t do permanent damage.

Just don’t see the 1929 short experimental film Un Chien Andalou when you’re 18. That famous opening scene with the razor and the eyeball squicks out *everybody*. Thank you, Luis Buñuel and Salvador Dalí.

I have cataracts. But: I also have a superpower. Like lots of people with extreme myopia, even at 71 I can read the smallest paragraph on the Jaeger eye test in medium-low lighting conditions. I have to hold it four and a half inches from my face, but close-up has always been the only truly reliable part of my vision.

Eye doctors have a clear, shared understanding of what constitutes normal vision, which involves not needing glasses to see at a distance and needing reading glasses around the time you turn 40. So when it comes time for cataract surgery they see it as an opportunity to give you the vision that normal people have.

In the entertainment world, this attitude was neatly summed up in 1926 by the famously acerbic New Yorker writer Dorothy Parker: “Men seldom make passes at girls who wear glasses.” It’s nonsense. Women who wear glasses know it’s nonsense. There was even a movie – How to Marry a Millionaire (1953) – that tackled this silliness by having Marilyn Monroe’s Pola wander around bumping into walls and boarding the wrong planes until she meets Freddie (David Wayne), who tells her to put her glasses on and that he thinks she looks better wearing them. Of course she does. Restoring her ability to see in focus removes the blank cluelessness from her face.

“They should put on your tombstone ‘She loved myopia’,” joked the technician drawing up a specification for the lens they were going to implant. We all laughed. But it’s incorrect, since what I love is not myopia but the intimate feeling of knowing I can read absolutely anything in most lighting conditions.

But kudos: whatever their preferences, they are doing their best to accommodate mine – all credit to the NHS and Moorfields. The first eye has healed quickly, and while the full outcome is still uncertain (it’s too soon) the results look promising.

So, some pointers, culled by asking widely what people wished they’d known beforehand or asked their surgeon.

– Get a diving mask or swimming goggles to wear in the shower because for the first couple of weeks they don’t want all that water (or soap) to get in your eye. (This was the best tip I got, from my local postmaster.)

– A microwaveable heated mask, which I didn’t try, might help if you’re in discomfort (but ask your doctor).

– Plan to feel frustrated for the first week because your body feels fine but you aren’t supposed to do anything strenuous that might raise the pressure in your eye and disrupt its healing. Don’t do sports, don’t lift weights, don’t power walk, don’t bend over with your eyes below your waist, and avoid cooking or anything else that might irritate your eyes and tempt you to scratch or apply pressure. The bright side: you can squat to reach things. And you can walk gently.

– When you ask people what they wish they’d known, many will say “How easy it was” and “I wish I’d done it years earlier”. In your panicked pre-surgery state, this is not helpful. It is true that the operation didn’t hurt (surgeons are attentive to this, because they don’t want you to twitch). It is true that the lights shining on your eye block sight of what they’re doing. I saw a lot of magenta and blue lights. I heard machine sounds, which my surgeon kindly explained as part of fulfilling my request to talk me through it. Some liquid dripped into my hair.

– Take the time you need to prepare, because there’s no undo button.

Think of it as a very scary dental appointment.

Illustrations: Pola (Marilyn Monroe) finding out that glasses can be an asset in How to Marry a Millionaire (1953).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Lost futures

In early December, the Biden administration’s Department of Justice filed its desired remedies, having won its case that Google is a monopoly. Many foresaw a repeat of 2001, when the incoming Bush administration dropped the Clinton DoJ’s plan to break up Microsoft.

Maybe not this time. In its first filing, Trump’s DoJ still wants Google to divest itself of the Chrome browser and intends to bar it from releasing other browsers. The DoJ also wants to impose some restrictions on Android and Google’s AI investments.

At The Register, Thomas Claburn reports that Mozilla is objecting to the DoJ’s desire to bar Google from paying other companies to promote its search engine by default. Those payments, Mozilla president Mark Surman admits to Claburn, keep small independent browsers afloat.

Despite Mozilla’s market shrinkage and current user complaints, it and its fellow minority browsers remain important in keeping the web open and out of full corporate control. It’s definitely counter-productive if the court, in trying to rein in Google’s monopoly, takes away what viability these small players have left. They are us.

***

On the other hand, it’s certainly not healthy for those small independents to depend for their survival on the good will of companies like Google. The Trump administration’s defunding of – among so many things – scientific research is showing just how dangerous it can be.

Within the US itself, the government has announced cuts to indirect funding, which researchers tell me are crippling to universities: $800 million cut in grants to Johns Hopkins, $400 million at Columbia University, and so many more.

But it doesn’t stop in the US or with the cuts to USAID, which have disrupted many types of projects around the world, some of them scientific or medical research. The Trump administration is using its threats to scientific funding across the world to control speech and impose its, um, values. This morning, numerous news sources report that Australian university researchers have been sent questionnaires they must fill out to justify their US-funded grants. Among the questions: their links to China and their compliance with Trump’s gender agenda.

To be fair, using grants and foreign aid to control speech is not a new thing for US administrations. For example, Republican presidents going back to Reagan have denied funding to international groups that advocated abortion rights or provided abortions, limiting what clinicians could say to pregnant patients. (I don’t know if there are Democratic comparables.)

Science is always political to some extent: think of Galileo, condemned for stating that the earth was not the center of the universe. Or take intelligence: in his 1981 book The Mismeasure of Man, Stephen Jay Gould documented a century or more of research by white, male scientists finding that white, male scientists were the smartest things on the planet. Or take Big Tobacco and Big Oil, which spent decades covering up research showing that their products were poisoning us and our planet.

The Trump administration’s effort is, however, a vastly expanded attempt that appears to want to squash anything that disagrees with policy, and it shows the dangers of allowing any one nation to amass too much “soft power”. The consequences can come quickly and stay long. It reminds me of what happened in the UK in the immediate post-EU referendum period, when Britain-based researchers found themselves being dropped from cross-EU projects because they were “too risky”, and many left for jobs in other countries where they could do their work in peace.

The writer Prashant Vaze sometimes imagines a future in which India has become the world’s leading scientific and technical superpower. This imagined future seems more credible by the day.

***

It’s strange to read that the 35-year-old domestic robots pioneer, iRobot, may be dead in a year. It seemed like a sure thing; early robotics researchers say that people were begging for robot vacuum cleaners even in the 1960s, perhaps inspired by Rosie, The Jetsons’ robot maid.

Many people may have forgotten (or not known) the excitement that attended the first Roombas in 2002. Owners gave them names, took them on vacation, and posted videos. It looked like the start of a huge wave.

I bought a Roomba in 2003, reviewing it so enthusiastically that an email correspondent complained that I should have disclosed it was a gift from a PR person. For a few happy months it wandered around cleaning.

Then one day it stopped moving and I discovered that long hair paralyzed it. I gave it away and went back to living with moths.

The Roomba now has many competitors, some highly sophisticated, run by apps, and able to map rooms, identify untouched areas, scrub stains, and clean in corners. Even so, domestic robots have not proliferated as imagined 20 – or 12 – years ago. I visit people’s houses, and while I sometimes encounter Alexas or Google Assistants, robot vacuums seem rare.

So much else of the smart home as imagined by companies like Microsoft and IBM remains dormant. It does seem – perhaps a reflection of my social circle – that the “smart home” is just a series of remote-control apps and outsourced services. Meh.

Illustrations: Rosie, The Jetsons’ XB-500 robot maid, circa 1962.


Unsafe

The riskiest system is the one you *think* you can trust. Consider encryption: the least secure encryption is encryption with unknown flaws. In the belief that your communication or data is protected, you feel safe indulging in what in other contexts would be obviously risky behavior. Think of it like an unseen hole in a condom.

This has always been the most dangerous aspect of the UK government’s insistence that its technical capability notices remain secret. Whoever alerted the Washington Post to the notice Apple received a month ago commanding it to weaken its Advanced Data Protection performed an important public service. Now, Carly Page reports at TechCrunch, based on a blog posting by security expert Alec Muffett, that the UK government is recognizing that principle by quietly removing from its web pages advice, directed at people whose communications are at high risk, such as barristers and other legal professionals, to use that same encryption. Apple has since withdrawn ADP in the UK.

More important long-term, at the Financial Times, Tim Bradshaw and Lucy Fisher report that Apple has appealed the government’s order to the Investigatory Powers Tribunal. This will be, as the FT notes, the first time government powers under the Investigatory Powers Act (2016) to compel the weakening of security features will be tested in court. A ruling that the order was unlawful could be an important milestone in the seemingly interminable fight over encryption.

***

I’ve long had the habit of making minor corrections on Wikipedia – fixing typos, improving syntax – as I find them in the ordinary course of research. But recently I have had occasion to create a couple of new pages, with the gratefully received assistance of a highly experienced Wikipedian. At one time, I’m sure this was a matter of typing a little text, garlanding it with a few bits of code, and garnishing it with the odd reference, but standards have been rising all along, and now if you want your newly created page to stay up it needs a cited reference for every statement of fact, at a minimum one per sentence. My modest pages had ten to twenty references, some servicing multiple items. Embedding the page matters, too, so you need to link mentions of your subject on related pages to the new one. Even then, some review editor may come along and delete the page if they think the subject is not notable enough or the page violates someone’s copyright. You can appeal, of course…and fix whatever they’ve said the problem is.

It should be easier!

All of this detailed work is done by volunteers, who discuss the decisions they make in full view on the talk page associated with every content page. Studying the more detailed talk pages is a great way to understand how the encyclopedia, and knowledge in general, is curated.

Granted, Wikipedia is not perfect. Its policy on primary sources can be frustrating, and errors in cited secondary sources can be difficult to correct. The culture can be hostile if you misstep. Its coverage is uneven. But, as Margaret Talbot reports at the New Yorker and Amy Bruckman writes in her 2022 book, Should You Believe Wikipedia?, all those issues are fully documented.

Early on, Wikipedia was often the butt of complaints from people angry that this free encyclopedia made by *amateurs* threatened the sustainability of Encyclopaedia Britannica (which has survived though much changed). Today, it’s under attack by Elon Musk and the Heritage Foundation, as Lila Shroff writes at The Atlantic. The biggest danger isn’t to Wikipedia’s funding; there’s no offer anyone can make that would lead to a sale. The bigger vulnerability is the safety of individual editors. Scold they may, but as a collective they do important work to ensure that facts continue to matter.

***

Firefox users are expressing more and more unhappiness about the direction Mozilla is taking with Firefox. The open source browser’s historic importance is outsized compared to its worldwide market share, which as of February 2025 is 2.63%, according to Statcounter. A long tail of other browsers is based on it, such as LibreWolf, Waterfox, and the privacy-protecting Tor Browser.

The latest complaint, as Liam Proven and Thomas Claburn write at The Register, is that Mozilla has removed its commitment not to sell user data from Firefox’s terms and conditions and privacy policy. Mozilla responded that the company doesn’t sell user data “in the way that most people think about ‘selling data’” but needed to change the language because of jurisdictional variations in what the word “sell” means. Still, the promise is gone.

This follows Mozilla’s September 2024 decision, reported by Richard Speed at The Register, to turn on by default a “privacy-preserving feature” that tracks users, which led the NGO noyb to file a complaint with the Austrian data protection authority. And a month ago, Mark Hachman reported at PC World that Mozilla is building access to third-party generative AI chatbots into Firefox, and there are reports that it’s adding “AI-powered” tab grouping.

All of these are basically unwelcome, and of all organizations Mozilla should have been able to foresee that. Go away, AI.

***

Molly White is expertly covering the Trump administration’s proposed “US Crypto Reserve”. It remains only to add Rachel Maddow, who compared it to having a strategic reserve of Beanie Babies.

Illustrations: Beanie Baby pelican.


Optioned

The UK’s public consultation on creating a copyright exception for AI model training closed on Tuesday, and it was profoundly unsatisfying.

Many, many creators and rights holders (who are usually on opposing sides when it comes to contract negotiations) have opposed the government’s proposals. Every national newspaper ran the same Make It Fair front page opposing them; musicians released a silent album. In the Guardian, the peer and independent filmmaker Beeban Kidron calls the consultation “fixed” in favor of the AI companies. Kidron’s resume includes directing Bridget Jones: The Edge of Reason (2004) and the meticulously researched 2013 study of teens online, InRealLife, and she goes on to call the government’s preferred option a “wholesale transfer of wealth from [a] hugely successful sector that invests hundreds of millions in the UK to a tech industry that extracts profit that is not assured and will accrue largely to the US and indeed China.”

The consultation lists four options: leave the situation as it is; require AI companies to get licenses to use copyrighted work (like everyone else has to); allow AI companies to use copyrighted works however they want; and allow AI companies to use copyrighted works but grant rights holders the right to opt out.

I don’t like any of these options. I do believe that creators will figure out how to use AI tools to produce new and valuable work. I *also* believe that rights holders will go on doing their best to use AI to displace or impoverish creators. That is already happening in journalism and voice acting, and was a factor in the 2023 Hollywood writers’ strike. AI companies have already shown that they won’t necessarily abide by arrangements that lack the force of law. The UK government acknowledged this in its consultation document, saying that “more than 50% of AI companies observe the longstanding Internet convention robots.txt.” So almost half of them *don’t*.

At Pluralistic, Cory Doctorow argued in February 2023 that copyright won’t solve the problems facing creators. His logic is simple: after 40 years of expanding copyright terms (from a maximum of 56 years in 1975 to “author’s life plus 70” now), creators are being paid *less* than they were then. Yes, I know Taylor Swift has broken records for tour revenues and famously took back control of her own work. But millions of others need, as Doctorow writes, structural market changes. Doctorow highlights what happened with sampling: the copyright maximalists won, and now musicians are required to sign away sampling rights to their labels, who pocket the resulting royalties.

For this sort of reason, the status quo, which the consultation calls “option 0”, seems likely to open the way to lots more court cases and conflicting decisions, but provide little benefit to anyone. A licensing regime (“option 1”) will likely go the way of sampling. If you think of AI companies as inevitably giant “pre-monopolized” outfits, as Vladan Joler described them at last year’s Computers, Privacy, and Data Protection conference, “option 2” looks like simply making them richer and more powerful at the expense of everyone else in the world. But so does “option 3”, since that *also* gives AI companies the ability to use anything they want. Large rights holders will opt out and demand licensing fees, which they will keep, and small ones will struggle to exercise their rights.

As Kidron said, the government’s willingness to take chances with the country’s creators’ rights is odd, since intellectual property is a sector in which Britain really *is* a world leader. On the other hand, as Glyn Moody says, all of it together is an anthill compared to the technology sector.

None of these choices is a win for creators or the public. The government’s preferred option 3 seems unlikely to achieve its twin goals of making Britain a world leader in AI and mainlining AI into the veins of the nation, as the government put it last month.

China and the US both have complete technology stacks *and* gigantic piles of data. The UK is likely better able to matter in AI development than many countries – see for example DeepMind, which was founded here in 2010. On the other hand, also see DeepMind for the probable future: Google bought it in 2014, and now its technology and profits belong to that giant US company.

At Walled Culture, Glyn Moody argued last May that requiring the AI companies to pay copyright industries makes no sense; he regards using creative material for training purposes as “just a matter of analysis” that should not require permission. And, he says correctly, there aren’t enough such materials anyway. Instead, he and Mike Masnick at Techdirt propose that the generative AI companies should pay creators of all types – journalists, musicians, artists, filmmakers, book authors – to provide them with material they can use to train their models, and the material so created should be placed in the public domain. In turn it could become new building blocks the public can use to produce even more new material. As a model for supporting artists, patronage is old.

I like this effort to think differently a lot better than any of the government’s options.

Illustrations: Tuesday’s papers, unprecedentedly united to oppose the government’s copyright plan.


Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, said remaining member Beth Williams. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review of the EU-US Privacy Framework under Executive Order 14086 (2022) is ongoing. Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating the terms and conditions, they bring Meta profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the energy and commerce committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country”, and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out-innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.


Isolate

Yesterday, the Global Encryption Coalition published a joint letter calling on the UK to rescind its demand that Apple undermine (“backdoor”) the end-to-end encryption on its services. The Internet Society is taking signatures until February 20.

The background: on February 7, Joseph Menn reported at the Washington Post (followed by Dominic Preston at The Verge) that in January the office of the Home Secretary sent Apple a technical capability notice under the Investigatory Powers Act (2016) ordering it to provide access to content that anyone anywhere in the world has uploaded to iCloud and encrypted with Apple’s Advanced Data Protection.

Technical capability notices are supposed to be secret. It’s a criminal offense to reveal that you’ve been sent one. Apple can’t even tell users that their data may be compromised. (This kind of thing is why people publish warrant canaries.) Menn notes that even if Apple withdraws ADP in the UK, British authorities will still demand access to encrypted data everywhere *else*. So it appears that if the Home Office doesn’t back down and Apple is unwilling to cripple its encryption, the company will either have to withdraw ADP across the world or exit the UK market entirely. At his Odds and Ends of History blog, James O’Malley calls the UK’s demand stupid, counter-productive, and unworkable. At TechRadar, Chiara Castro asks who’s next, and quotes Big Brother Watch director Silkie Carlo: “unprecedented for a government in any democracy”.

When the UK first began demanding extraterritorial jurisdiction for its interception rules, most people wondered how the country thought it would be able to impose it. That was 11 years ago; it was one of the new powers codified in the Data Retention and Investigatory Powers Act (2014) and kept in its replacement, the IPA in 2016.

Governments haven’t changed – they’ve been trying to undermine strong encryption in the hands of the masses since 1991, when Phil Zimmermann launched PGP – but the technology has, as Graham Smith recounted at Ars Technica in 2017. Smartphones are everywhere. People store their whole lives on them, and giant technology companies encrypt both the device itself and the cloud backups. Government demands have changed to reflect that, shifting from targeting the individual with key escrow and key length limits to targeting the technology provider with client-side scanning, encrypted messaging (see also the EU), and now cloud storage.

At one time, a government could install a secret wiretap by making a deal with a legacy telco. The Internet’s proliferation of communications providers changed that for a while. During the resulting panic the US passed the Communications Assistance for Law Enforcement Act (1994), which requires Internet service providers and telecommunications companies to install wiretap-ready equipment – originally for telephone calls, later for broadband and VOIP traffic as well.

This is where the UK government’s refusal to learn from others’ mistakes is staggering. Just four months ago, the US discovered Salt Typhoon, a giant Chinese hack into its core telecommunications networks that was specifically facilitated by…by…CALEA. To repeat: there is no such thing as a magic hole that only “good guys” can use. If you undermine everyone’s privacy and security to facilitate law enforcement, you will get an insecure world where everyone is vulnerable. The hack has led US authorities to promote encrypted messaging.

Joseph Cox’s recent book, Dark Wire, touches on this. It’s a worked example of what law enforcement internationally can do if given open access to all the messages criminals send across a network when they think they are operating in complete safety. Yes, the results were impressive: hundreds of arrests, dozens of tons of drugs seized, masses of firearms impounded. But, Cox writes, all that success was merely a rounding error in the global drug trade. Universal loss of privacy and security versus a rounding error: it’s the definition of “disproportionate”.

It remains to be seen what Apple decides to do and whether we can trust what the company tells us. At his blog, Alec Muffett is collecting ongoing coverage of events. The Future of Privacy Forum celebrated Safer Internet Day, February 11, with an infographic showing how encryption protects children and teens.

But set aside for a moment all the usual arguments about encryption, which really haven’t changed in over 30 years because mathematical reality hasn’t.

In the wider context, Britain risks making itself a technological backwater. First, there’s the backdoored encryption demand, which threatens every encrypted service. Second, there’s the impact of the onrushing Online Safety Act, which comes into force in March. Ofcom, the regulator charged with enforcing it, is issuing thousands of pages of guidance that make it plain that only large platforms will have the resources to comply. Small sites, whether businesses, volunteer-run Fediverse instances, blogs, established communities, or web boards, will struggle even if Ofcom starts to do a better job of helping them understand their legal obligations. Many will likely either shut down or exit the UK, leaving the British Internet poorer and more isolated as a result. Ofcom seems to see this as success.

It’s not hard to predict the outcome if these laws converge in the worst possible timeline: a second Brexit, this one online.

Illustrations: T-shirt (gift from Jen Persson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

What we talk about when we talk about computers

The climax of Nathan Englander‘s very funny play What We Talk About When We Talk About Anne Frank sees the four main characters play a game – the “Anne Frank game” – that two of them invented as children. The play is on at the Marylebone Theatre until February 15.

The plot: two estranged former best friends from a New York yeshiva have arranged a reunion for themselves and their husbands. Debbie (Caroline Catz) has let her religious attachment lapse in the secular environs of Miami, Florida, where her husband, Phil (Joshua Malina), is an attorney. Their college-age son, Trevor (Gabriel Howell), calls the action.

They host Hasidic Shosh (Dorothea Myer-Bennett) and Yuri (Simon Yadoo), formerly Lauren and Mark, whose lives in Israel and traditional black dress and, in Shosh’s case, hair-covering wig, have left them unprepared for the bare arms and legs of Floridians. Having spent her adult life in a cramped apartment with Yuri and their eight daughters, Shosh is astonished at the size of Debbie’s house.

They talk. They share life stories. They eat. And they fight: what is the right way to be Jewish? Trevor asks: given climate change, does it matter?

So, the Anne Frank game: who among your friends would hide you when the Nazis are coming? The rule that you must tell the truth reveals the characters’ moral and emotional cores.

I couldn’t avoid up-ending this question. There are people I trust and who I *think* would hide me, but it would often be better not to ask them. Some have exceptionally vulnerable families who can’t afford additional risk. Some I’m not sure could stand up to intensive questioning. Most have no functional hiding place. My own home offers nowhere that a searcher for stray humans wouldn’t think to look, and no opportunities to create one. With the best will in the world, I couldn’t make anyone safe, though possibly I could make them temporarily safer.

But practical considerations are not the game. The game is to think about whether you would risk your life for someone else, and why or why not. It’s a thought experiment. Debbie calls it “a game of ultimate truth”.

However, the game is also a cheat, in that the characters have full information about all parts of the story. We know the Nazis coming for the Frank family are unquestionably bent on evil, because we know the Franks’ fates when they were eventually found. It may be hard to tell the truth to your fellow players, but the game is easy to think about because it’s replete with moral clarity.

Things are fuzzier in real life, even for comparatively tiny decisions. In 2012, the late film critic Roger Ebert mulled what he would do if he were a Transportation Security Administration agent suddenly required to give intimate patdowns to airline passengers unwilling to go through the scanner. Ebert considered the conflict between moral and personal distaste and TSA officers’ need to keep their reasonably well-paid jobs with health insurance benefits. He concluded that he hoped he’d quit rather than do the patdowns. Today, such qualms are ancient history; both scanners and patdowns have become normalized.

Moral and practical clarity is exactly what’s missing as the Department of Government Efficiency arrives in US government departments and agencies to demand access to their computer systems. Their motives and plans are unclear, as is their authority for the access they’re demanding. The outcome is unknown.

So, instead of a vulnerable 13-year-old girl and her family, what if the thing under threat is a computer? Not the sentient emotional robot/AI of techie fantasy but an ordinary computer system holding boring old databases. Or putting through boring old payments. Or underpinning the boring old air traffic control system. Do you see a computer or the millions of people whose lives depend on it? How much will you risk to protect it? What are you protecting it from? Hinder, help, quit?

Meanwhile, DOGE is demanding that staff allow its young coders to attach unauthorized servers and take control of websites. Add to that mass firings and a plan to run some sort of inside-government AI startup.

DOGE itself appears to be thinking ahead; it’s told staff to avoid Slack while awaiting a technology that won’t be subject to FOIA requests.

The more you know about computers the scarier this all is. Computer systems of the complexity and accuracy of those the US government has built over decades are not easily understood by incoming non-experts who have apparently been visited by the Knowledge Fairy. After so much time and effort on security and protecting against shadowy hackers, the biggest attack – as Mike Masnick calls it – on government systems is coming from inside the house in full view.

Even if “all” DOGE has is read-only access, as Treasury claims – though Wired and Talking Points Memo have evidence otherwise – those systems hold comprehensive sensitive information on most of the US population. Being able to read – and copy? – is plenty bad enough. In both fiction (Margaret Atwood’s The Handmaid’s Tale) and fact (IBM), computers have been used to select populations to victimize. Americans are about to find out they trusted their government more than they thought.

Illustration: Changing a tube in the early computer ENIAC (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. Stories about the border wars between cyberspace and real life are posted occasionally during the week at the net.wars Pinboard – or follow on Twitter.

The Gulf of Google

In 1945, the then mayor of New York City, Fiorello La Guardia, signed a bill renaming Sixth Avenue. Eighty years later, even with street signs that include the new name, the vast majority of New Yorkers still say things like, “I’ll meet you at the southwest corner of 51st and Sixth”. You can lead a horse to Avenue of the Americas, but you can’t make him say it.

US president Donald Trump’s order renaming the Gulf of Mexico offers a rarely discussed way to splinter the Internet (at the application layer, anyway; geography matters!), and on Tuesday Google announced it would change the name for US users of its Maps app. As many have noted, this contravenes Google’s 2008 policy on naming bodies of water in Google Earth: “primary local usage”. A day later, reports emerged that Google had placed the US on its short list of sensitive countries – that is, ones whose rulers dispute the names and ownership of various territories: China, Russia, Israel, Saudi Arabia, Iraq.

Sharpieing a new name on a map is less brutal than invading, but it’s a game anyone can play. Seen on Mastodon: the bay, now labeled “Gulf of Fragile Masculinity”.

***

Ed Zitron has been expecting the generative AI bubble to collapse disastrously. Last week provided an “Is this it?” moment when the Chinese company DeepSeek released reasoning models that outperform the best of the west at a fraction of the cost and computing power. US stock market investors: “Let’s panic!”

The code, though not the training data, is open source, as is the relevant research. In Zitron’s analysis, the biggest loser here is OpenAI, though it didn’t seem like that to investors in other companies, especially Nvidia, whose share price dropped 17% on Tuesday alone. In an entertaining sideshow, OpenAI complains that DeepSeek stole its code – ironic given the history.

On Monday, Jon Stewart quipped that Chinese AI had taken American AI’s job. From there the countdown started until someone invoked national security.

Nvidia’s chips have been the picks and shovels of generative AI, just as they were for cryptocurrency mining. In the latter case, Nvidia’s fortunes waned when cryptocurrency prices crashed, Ethereum, among others, switched to proof of stake, and miners shifted to more efficient, lower-cost application-specific integrated circuits. All of these lowered computational needs. So it’s easy to believe the pattern is repeating with generative AI.

There are several ironies here. The first is that the potential for small language models to outshine large ones has been known since at least 2020, when Timnit Gebru, Emily Bender, Margaret Mitchell, and Angelina McMillan-Major wrote their stochastic parrots paper. Google soon fired Gebru, who told Bloomberg this week that AI development is being driven by FOMO rather than interesting questions. Second, as an AI researcher friend points out, Hugging Face, which is trying to replicate DeepSeek’s model from scratch, said the same thing two years ago. Imagine if someone had listened.

***

A work commitment forced me to slog through Ross Douthat’s lengthy interview with Marc Andreessen at the New York Times. Tl;dr: Andreessen says Silicon Valley turned right because Democrats broke The Deal under which Silicon Valley supported liberal democracy and the Democrats didn’t regulate them. In his whiny victimhood, Andreessen has no recognition that changes in Silicon Valley’s behavior – and the scale at which it operates – are *why* Democrats’ attitudes changed. If Silicon Valley wants its Deal back, it should stop doing things that are obviously exploitative. Random case in point: Hannah Ziegler reports at the Washington Post that a $1,700 bassinet called a “Snoo” suddenly started demanding $20 per month to keep rocking a baby all night. I mean, for that kind of money I pretty much expect the bassinet to make its own breast milk.

***

Almost exactly eight years ago, Donald Trump celebrated his installation in the US presidency by issuing an executive order that risked up-ending the legal basis for data flows between the EU, which has strict data protection laws, and the US, which doesn’t. This week, he did it again.

In 2017, Executive Order 13768 dominated Computers, Privacy, and Data Protection. Privacy Shield, the deal in place at the time, survived until 2020, when it was struck down in lawyer Max Schrems’s second such case. It was replaced by the Transatlantic Data Privacy Framework, which relies on the five-member Privacy and Civil Liberties Oversight Board to oversee surveillance and, as Politico explains, handle complaints from Europeans about misuse of their data.

This week, Trump rendered the board non-operational by firing its three Democrats, leaving just one Republican member in place.*

At Techdirt, Mike Masnick warns the framework could collapse, costing Facebook, Instagram, WhatsApp, YouTube, exTwitter, and other US-based services (including Truth Social) their European customers. At his NGO, noyb, Schrems himself takes note: “This deal was always built on sand.”

Schrems adds that another Trump Executive Order gives 45 days to review and possibly scrap predecessor Joe Biden’s national security decisions, including some the framework also relies on. Few things ought to scare US – and, in a slew of new complaints, Chinese – businesses more than knowing Schrems is watching.

Illustrations: The Gulf of Mexico (NASA, via Wikimedia).

*Corrected to reflect that the three departing board members are described as Democrats, not Democrat-appointed. In fact, two of them, Ed Felten and Travis LeBlanc, were appointed by Trump in his original term.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Dark Wire

Dark Wire
by Joseph Cox
PublicAffairs (Hachette Group)
ISBNs: 9781541702691 (hardcover), 9781541702714 (ebook)

One of the basic principles that emerged as soon as encryption software became available to ordinary individuals on home computers was this: everyone should encrypt their email so that the people who really need the protection don’t stick out as targets. At the same time, the authorities were constantly warning that unless encryption were controlled by key escrow, an implanted back door, or restrictions on its strength, it would help hide the activities of drug traffickers, organized crime, pedophiles, and terrorists. The same argument continues today.

Today, billions of people have access to encrypted messaging via WhatsApp, Signal, and other services. Governments still hate it, but they *use* it; the UK government is all over WhatsApp, as multiple public inquiries have shown.

In Dark Wire: The Incredible True Story of the Largest Sting Operation Ever, Joseph Cox, one of the four founders of 404 Media, takes us on a trip through law enforcement’s adventures in encryption, as police try to identify and track down serious criminals making and distributing illegal drugs by the ton.

The story begins with Phantom Secure, a scheme that stripped down Blackberry devices, installed PGP to encrypt emails, and set up systems to ensure the devices could exchange emails only with other Phantom Secure devices. The service became popular among all sorts of celebrities, politicians, and other non-criminals who value privacy – but not *only* them. All perfectly legal.

One of my favorite moments comes early, when a criminal debating whether to trust a new contact decides he can because he has one of these secure Blackberries. The criminal trusted the supply chain; surely no one would have sold him one of these things without thoroughly checking that he wasn’t a cop. Spoiler alert: he was a cop. That sale helped said cop and his colleagues in the United States, Australia, Canada, and the Netherlands infiltrate the network, arrest a bunch of criminals, and shut it down – eventually, after setbacks, and with the non-US forces frustrated and amazed by US Constitutional law limiting what agents were allowed to look at.

Phantom Secure’s closure left a hole in the market while security-conscious criminals scrambled to find alternatives. It was rapidly filled by competitors working with modified phones: Encrochat and Sky ECC. As users migrated to these services and law enforcement worked to infiltrate and shut them down as well, former Phantom Secure salesman “Afgoo” had a bright idea, which he offered to the FBI: why not build their own encrypted app and take over the market?

The result was Anom. From the sounds of it, some of its features were quite cool. For example, the app itself hid behind an innocent-looking calculator, which acted as a password gateway. Type in the right code, and the messaging app appeared. The thing sold itself.

Of course, the FBI insisted on some modifications. Behind the scenes, Anom devices sent copies of every message to the FBI’s servers. Eventually, the floods of data the agencies harvested this way led to 500 arrests on one day alone, and the seizure of hundreds of firearms and dozens of tons of illegal drugs and precursor chemicals.

Some of the techniques the criminals use are fascinating in their own right. One method of in-person authentication involved the unique serial number on a bank note, sent in advance; the mule delivering the money simply had to show they had the matching note – a physical one-time pad. Banks themselves were rarely used. Instead, cash would be stored in safe houses in various countries, and the money would never have to cross borders. So: no records, no transfers to monitor. All of this spilled open for law enforcement because of Anom.

And yet. Cox waits until the end to voice reservations. All those seizures and arrests barely made a dent in the world’s drug trade – a “rounding error”, Cox calls it.

The AI moment

“Why are we still talking about digital transformation?” The speaker was convening a session at last weekend’s UK Govcamp, an event organized by and for civil servants with an interest in digital stuff.

“Because we’ve failed?” someone suggested. These folks are usually *optimists*.

Govcamp is a long-running tradition that began as a guerrilla effort in 2008. At the time, civil servants wanting to harness new technology in the service of government were so thin on the ground they never met until one of them, Jeremy Gould, convened the first Govcamp. These are people who are willing to give up a Saturday in order to do better at their jobs working for us. All hail.

It’s hard to remember now, nearly 15 years on, the excitement in 2010 when David Cameron’s incoming government created the Government Digital Service and embedded it into the Cabinet Office. William Heath immediately ended the Ideal Government blog he’d begun writing in 2004 to press insistently for better use of digital technologies in government. The government had now hired all the people he could have wanted it to, he said, and therefore, “its job is done”.

Some good things followed: tilting government procurement to open the way for smaller British companies, consolidating government publishing, other changes less visible but still important. Some data became open. All this has improved processes like applying for concessionary travel passes and other government documents, and made government publishing vastly more usable. The improvement isn’t universal: my application last year to renew my UK driver’s license was sent back because my signature strayed outside the box provided for it.

That’s just one way the business of government doesn’t feel that different. The whole process of developing legislation – green and white papers, public consultations, debates, and amendments – marches on much as it ever has, though with somewhat wider access because the documents are online. Thoughts about how to make it more participatory were the subject of a teacamp in 2013. Eleven years on, civil society is still reading and responding to government consultations in the time-honored way, and policy is still made by the few for the many.

At Govcamp, the conversation ranged between the realities of their working lives and the difficulties the systems posed for users – that is, the rest of us. “We haven’t removed those little frictions,” one said, evoking the old speed comparisons between Amazon (delivers tomorrow or even today) and the UK government (delivers in weeks, if not months).

“People know what good looks like,” someone else said, in echoing that frustration. That’s 2010-style optimism, from when Amazon product search yielded useful results, search engines weren’t spattered with AI slime and blanketed with ads, today’s algorithms were not yet born, and customer service still had a heartbeat. Here in 2025, we’re all coming up against rampant enshittification, with the result that the next cohort of incoming young civil servants *won’t* know any more what “good” looks like. There will be a whole new layer of necessary education.

Other comments: it’s evolution, not transformation; resistance to change and the requirement to ask permission are embedded throughout the culture; usability is still a problem; trying to change top-down only works in a large organization if it sets up an internal start-up and allows it to cannibalize the existing business; not enough technologists in most departments; the public sector doesn’t have the private sector option of deciding what to ignore; every new government has a new set of priorities. And: the public sector has no competition to push change.

One suggestion was that technological change happens in bursts – punctuated equilibrium. That sort of fits with the history of changing technological trends: computing, the Internet, the web, smartphones, the cloud. Today, that’s “AI”, which prime minister Keir Starmer announced this week he will mainline into the UK’s veins “for everything from spotting potholes to freeing up teachers to teach”.

The person who suggested “punctuated equilibrium” added: “Now is a new moment of change because of AI. It’s a new ‘GDS moment’.” This is plausible in the sense that new paradigms sometimes do bring profound change. Smartphones changed life for homeless people. On the other hand, many new paradigms don’t do much. Think voice: that was going to be a game-changer, and yet after years of loss-making voice assistants, most of us are still typing.

So is AI one of those opportunities? Many brought up generative AI’s vast consumption of energy and water and rampant inaccuracy. Starmer, like Rishi Sunak before him, seems to think AI can make Britain the envy of other major governments.

Complex systems – such as digital governance – don’t easily change the flow of information or, therefore, the flow of power. It can take longer than most civil servants’ careers. Organizations like Mydex, which seeks to up-end today’s systems to put users in control, have been at work for years now. Mydex chair Alan Mitchell is optimistic that the government’s upcoming digital identity framework is a breakthrough. We’ll see.

One attendee captured this: “It doesn’t feel like the question has changed from more efficient bureaucracy to things that change lives.” Said another in response, “The technology is the easy bit.”

Illustrations: Sir Humphrey Appleby (Nigel Hawthorne), Bernard Woolley (Derek Fowlds), and Jim Hacker (Paul Eddington) arguing over cultural change in Yes, Minister.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.