Bedroom eyes

We’ve long known that much of today’s “AI” is humans all the way down. This week underlines the point: in an investigation, Svenska Dagbladet and Göteborgs-Posten learn that Meta’s Ray-Ban smart glasses are capturing intimate details of people’s lives and sending them to Nairobi, Kenya. There, employees at Meta subcontractor Sama label and annotate the data for use in training models. It brings a new meaning to “bedroom eyes”.

This sort of violation is easily imposed on other people without their knowledge or consent. We worry about the police using live facial recognition, but what about being captured by random people on the street? In January’s episode of the TechGrumps podcast, we called the news of Meta’s new product “Return of the Glasshole”.

Two 2018 books, Mary L. Gray and Siddharth Suri’s Ghost Work and Sarah T. Roberts’ Behind the Screen, made it clear that “machine learning” and “AI” depend on poorly paid, unseen laborers. Dataveillance is a stowaway in every “smart” device. But this is a whole new level: the Kenyans report glimpses of bank cards, bedroom intimacy, even bathroom visits. The journalists were able to establish that the glasses’ AI requires a connection to Meta’s servers to answer questions, and there’s no opt-out.

The UK’s Information Commissioner’s Office is investigating, and at Ars Technica Sarah Perez reports that a US lawsuit has been filed.

As the original Swedish report goes on to say, the EU has no adequacy agreement with Kenya. More disturbing is the fact that probably hundreds of people within Meta worked on this without seeing a problem.

In 1974, the Watergate-related revelation that US president Richard Nixon had recorded everything taking place in his office inspired folksinger Bill Steele to write the song The Walls Have Ears (MP3). What struck him particularly was that everyone saw it as unremarkable. “Unfortunately still current,” he commented in his 1977 liner notes. Nearly 50 years later, ditto.

***

A lot of (especially younger) people don’t remember that before 9/11 you could walk into most buildings without showing ID. Many authorities – the EU in particular – have long been unhappy with anonymity online, and one conspiratorial theory about age gating and the digital ID infrastructure being built in many places is that the goal is complete and pervasive identification. In the UK, requiring ID for all Internet access has occasionally popped up as a child safety idea, even though security experts recommend lying about birth dates and other personal data in the interests of self-protection against identity theft.

Now we have generative AI, and along comes a new paper finding that large language models can be used to deanonymize people online at scale by analyzing profiles and conversations. In one exercise, the authors matched Hacker News posts to LinkedIn profiles. In another, they linked users across subReddit communities. In a third, they split Reddit profiles to mimic pseudonymous posting. Their conclusion: pseudonymity doesn’t offer meaningful protection (though I’m not sure how much it ever did), and preventing this type of attack is difficult. They also suggest platforms should reconsider their data access policies in light of these findings.

It’s hard to imagine most platforms will care much; users have long been expected to assess their own risk. Even smaller communities with a more concerned administration will not be in a position to know how many other services their users access, what they post there, or how it can be cross-linked. The difficulty of remaining anonymous online has been growing ever since 2000, when Latanya Sweeney showed it was possible to identify 87% of the population recorded in census data given just Zip code, date of birth, and gender. As psychics know, most people don’t really remember what they’ve said and how it can be linked and exploited by someone who’s paying attention. The paper concludes: we need a new threat model for privacy online.

***

The Internet, famously, was designed to keep communications flowing even if bombs took out parts of the network.

Building it required physical links – undersea cables, fiber connections, data centers, routers. For younger folks who have grown up with wifi and mobile phone connections, that physical layer may be invisible. But it matters no less than it did twenty-five years ago, when experts agreed that ten backhoes (among other things) could do more effective damage than bombs.

This week’s horrible, spreading war in the Middle East has seen the closure of the Strait of Hormuz and the Red Sea to commercial traffic. Indranil Ghosh reports at Rest of World that 17 undersea cables pass through the Red Sea alone, and billions, soon trillions, of dollars in US technology investment depend on fiber optic cables running through war zones. There’s been reporting before now about the links between various Middle Eastern countries and Silicon Valley (see for example the recent book Gilded Rage, by Jacob Silverman), but until now much less about the technological interdependence put in jeopardy by the conflict. Ghosh also reports that drones have struck two Amazon Web Services data centers in the UAE and one in Bahrain.

The issue is not so much direct damage to the cables as the impossibility of repairing them as long as access is closed. The Internet, designed with war in mind, is a product of peace.

Illustrations: Monument to Anonymous, by Meredith Bergmann.

Also this week: At the Plutopia podcast, we interview Kate Devlin, who studies human-AI interaction.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Saving no one

In the early 2010s, after “nano” and before “AI”, 3D printing was the technology that was going to change everything. Then it seemed to go quiet except for guns.

“First we will gain control over the shape of physical things. Then we will gain new levels of control over their composition, the materials they’re made of. Finally, we will gain control over the behavior of physical things,” Hod Lipson and Melba Kurman wrote in their 2013 book, Fabricated. As far as I can tell, we’re still pretty much in the era of making physical things that could be made by traditional methods rather than weird new shapes that could *only* be produced by additive manufacturing. More than 15 years after a fellow technology conference attendee excitedly lectured me that 3D printing was going to change everything, its growth remains largely hidden from most of us.

Until this past week, when I attended an event awash in puzzle makers and discovered that it’s been a godsend to them for making not only prototypes but also small runs of copies or published designs, freeing them from having to find space and capital for the kind of quantities required by traditional production. It’s good to see a formerly hyped technology supporting clever and entertaining human invention.

Exploding egg, anyone?

***

In one of the biggest fines in its history, the UK Information Commissioner’s Office has announced it is fining Reddit £14.5 million for failing to put in place an effective age verification mechanism to block under-13s from using the site under Reddit’s stated terms of service. The story is somewhat confused by timing: the fine is under data protection law and relates to the period before the arrival of the Online Safety Act, but the OSA’s requirement for age verification brought the changes that sparked the fine. Reddit says it will appeal.

In the UK terms and conditions Reddit announced in June 2025, the company says that “by using the services, you state that…you are at least 13 years old”. But Reddit didn’t require proof, and the ICO says that many under-13s use(d) the platform.

In July, when the Online Safety Act came into effect, Reddit added an age gate of 18 for “mature” content. Unlike many other social media sites that are just giant pools of content sorted by curation or algorithm, Reddit is a large set of distinct subReddits. Each of these communities has its own rules, social norms, and, most important, human moderators. Because of this, it’s comparatively easy to mark a particular subReddit as “for adults only”. After the July change, anyone in the UK wishing to access one of those subReddits was asked to submit a selfie or an image of a government-issued ID in order to prove their age.

The ICO’s findings state that Reddit failed to protect under-13s from accessing content that placed them at risk; that it processed under-13s’ data unlawfully (because they are too young to meaningfully consent); and that a simple statement is not a sufficient age verification mechanism (which is made clear in the OSA).

A Reddit spokesperson told the Guardian: “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”

I take their point; I’d rather skip the “mature” content than bear the privacy risk of uploading personal data to whatever third-party company Reddit is using for age verification. Last July, I decided I would just be a child. (Although: my Reddit account dates to 2015, so they could just do the math.)

Turns out, it may have been a wise decision. Reddit, saying it didn’t want to hold users’ personal data, chose the age verification provider Persona.

Persona deserves a look. Last week, Discord announced it would begin treating all users as teens until they’d been verified, also using Persona. The result, as Ashley Belanger reports at Ars Technica, was a user backlash. First, because the last time Discord tried this, its now-former age verification provider’s pile of 70,000 users’ age check information was hacked.

Second, because The Rage reports that a group of security researchers found a Persona front end exposed to the open Internet on a US government server. On examination, that code shows that Persona performs 269 different verification checks and scours the Internet and government sources using your selfie and facial recognition. Discord has now announced it will delay introducing age verification – and won’t be using Persona after an apparently unsatisfactory trial in the UK last year. In a blog posting, Discord says that, like Reddit, it does not want to know its users’ identification details. It is adding more verification options.

If the world had already had a set of established trustworthy companies that specialized in age verification when the OSA came into effect, then it would make sense to turn to them to provide that service. But we aren’t in that situation. Instead, although providers have been working for more than a decade to build such systems, their deployment at scale is new.

Part of keeping children – and the rest of us – safe is protecting security and privacy; child safety campaigners’ refusal to accept this has been an issue for decades. Creating new privacy risks doesn’t keep anyone safer – including children.

Illustrations: Six-panel early 1970s cartoon strip, “What the User Wanted”.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Software is still forever

On October 14, a few months after the tenth anniversary of its launch, Microsoft will end support for Windows 10. That is, Microsoft will no longer issue feature or security updates or provide technical support, and everyone is supposed to either upgrade their computers to Windows 11 or, if Microsoft’s installer deems the hardware inadequate, replace them with newer models. People who “need more time”, in the company’s phrasing, can buy a year’s worth of security updates. Either way, Microsoft profits at our expense.

In 2014, Microsoft similarly end-of-lifed 13-year-old Windows XP. At the time, many were unsympathetic to the complaints, thinking it unreasonable to expect a company to maintain software for that long. Yet it was obvious even then that software lives on, with or without support, for far longer than people expect, and that trashing millions of functional computers was stupidly wasteful. Microsoft is giving Windows 10 a *shorter* life, which is rather obviously the wrong direction for a planet drowning in electronic waste.

XP’s end came at a time when the computer industry was transitioning from adolescence to maturity. As long as personal computing was constrained by the limited capabilities of hardware, and research and development was improving them at a fast pace, a software company like Microsoft could count on frequent new sales. By 2014, that happy time had ended, and although computers continue to add power and speed, it’s not coming back. The same pattern has been repeated with phones, which no longer improve on an 18-month cycle as they did in the 2010s, and with cameras.

For the vast majority, there’s no reason to replace their old machine unless a non-replaceable part is failing – and there should be less of that as manufacturers are forced to embrace repairability. Significantly, there’s less and less difference for many of us if we keep the old hardware and switch to Linux, eliminating Microsoft entirely.

Those fast-moving days were real obsolescence. What we have now is what we used to call “planned obsolescence”. That is, *forced* obsolescence that companies impose on us because it’s convenient and profitable for *them*.

This time round, people are more critical, not least because of the vast amounts of ewaste being generated. The Public Interest Research Group has written an open letter asking people to petition Microsoft to extend free support for Windows 10. As Ed Bott explains at ZDNet, you do have the option of kicking the can down the road by paying for updates for another three years.

The other antisocial side of terminating free security updates is that millions of those still-functional machines will remain in use, and will be increasingly insecure as new vulnerabilities are discovered and left unpatched.

Simultaneously, Windows is enshittifying: it’s harder to run Windows without a Microsoft login, to avoid stupid gewgaws and unwanted news headlines, and to turn off its “Copilot AI”. Tom Warren reports at The Verge that Microsoft wants to turn Copilot into an agent that can book restaurants and control its Edge browser. There are, it appears, ways to defeat all this in Windows 11, but for how long?

In a piece on solar technology, Cory Doctorow outlines the process by which technology companies seize control once they can no longer rely on consumer demand to drive sales. They lock down their technology if they can, lock in customers, add advertising, and block market entry, claiming safety and/or security make it necessary. They write and lobby for legislation that enshrines their advantage. And they use technological changes to render past products obsolete. Many think this is the real story behind the insistence on forcing unwanted “AI” features into everything: it’s the one thing they can do to make their offerings sound new.

Seen in that light, the rush to build “AI” into everything becomes a rush to find a way to force people to buy new stuff. The problem is that – it feels like – most people don’t see much benefit in it, and go around turning off the AI features that are forced on them. Microsoft’s Recall feature, which takes a screen snapshot every few seconds, was so controversial at launch that the company rolled it back – for a while, anyway.

Carelessness about ewaste is everywhere, particularly with respect to the Internet of Things. This week: Logitech’s Pop smart home buttons. At least when Google ended support for older Nest thermostats they could go on working as “dumb” thermostats (which honestly seems like the best kind).

Ewaste is getting a whole lot worse when it desperately needs to be getting a whole lot better.

***

In the ongoing rollout of the Online Safety Act and age verification update, at 404 Media, Joseph Cox reports that Discord has become the first site reporting a hack of age verification data. Hackers have collected data pertaining to 70,000 users, including selfies, identity documents, email addresses, approximate residences, and so on, and are trying to extort Discord, which says the hackers breached one of its third-party vendors that handles age-related appeals. Security practitioners warned about this from the beginning.

In addition, Ofcom has launched a new consultation for the next round of Online Safety Act enforcement. Up next are livestreaming and algorithmic recommendations; the Open Rights Group has an explainer, as does lawyer Graham Smith. The consultation closes on October 20.

Illustrations: One use for old computers – movie stardom, as here in Brazil.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Email to Ofgem

So, the US has claimed victory against the UK.

Regular readers may recall that in February the UK’s Home Office secretly asked Apple to put a backdoor in the Advanced Data Protection encryption it offers as a feature for iCloud users. In March, Apple challenged the order. The US objected to the requirement that the backdoor should apply to all users worldwide. How dare the Home Office demand the ability to spy on Americans?

On Tuesday, US director of national intelligence Tulsi Gabbard announced the UK is dropping its demand for the backdoor in Apple’s encryption “that would have enabled access to the protected encrypted data of American citizens”. The key here is “American citizens”. The announcement – which the Home Office is refusing to comment on – ignores everyone else and also the requirement for secrecy. It’s safe to say that few other countries would succeed in pressuring the UK in this way.

As Bill Goodwin reports at Computer Weekly, the US deal does nothing to change the situation for people in Britain or elsewhere. The Investigatory Powers Act (2016) is unchanged. As Parmy Olson writes at Bloomberg, the Home Office can go on issuing Technical Capability Notices to Apple and other companies demanding information on their users, and the criminalization of disclosure will keep the companies silent. The Home Office can still order technology companies operating in the UK to weaken their security. And we will not know they’ve done it. Surprisingly, support for this point of view comes from the Federal Trade Commission, which has posted a letter to companies deploring foreign anti-encryption policy (ignoring how often undermining encryption has been US policy, too) and foreign censorship of Americans’ speech. This is far from over, even in the US.

Within the UK, the situation remains as dangerously uncertain as ever. With all countries interconnected, the UK’s policy risks the security of everyone everywhere. And, although US media may have forgotten, the US has long spied on its citizens by getting another country to do it.

Apple has remained silent, but so far has not withdrawn its legal challenge. Also continuing is the case filed by Privacy International, Liberty, and two individuals. In a recent update, PI says both legal cases will be heard over seven days in 2026, as much as possible in open court.

***

For non-UK folk: The Office of Gas and Electricity Markets (Ofgem) is the regulator for Britain’s energy market. Its job is to protect consumers.

To Ofgem:

Today’s Guardian (and many others) carries the news that Tesla EMEA has filed an application to supply British homes and businesses with energy.

Please do not approve this application.

I am a journalist who has covered the Internet and computer industries for 35 years. As we all know, Tesla is owned by Elon Musk. Quite apart from his controversial politics and actions within the US government, Elon Musk has shown himself to be an unstable personality who runs his companies recklessly. Many who have Tesla cars love them – but the cars have higher rates of quality control problems than those from other manufacturers, and Musk’s insistence on marketing the “Full Self Drive” feature has cost lives, according to the US National Highway Traffic Safety Administration, which launched yet another investigation into the company just yesterday. In many cases, when individuals have sought data from Tesla to understand why their relatives died in car fires or crashes, the company has refused to help them. During the covid emergency, thousands of Tesla workers got covid because Musk insisted on reopening the Tesla factory. This is not a company people should trust with their homes.

With Starlink, Musk has exercised his considerable global power by turning off communications in Ukraine while it was fighting off Russian attacks. SpaceX launches continue to crash. According to the children’s commissioner’s latest report, far more children encounter pornography online on Musk’s X than on pornography sites, a problem that has gotten far worse since Musk took it over.

More generally, he is an enemy of workers’ rights. Misinformation on X helped fuel the Southport riots, and Musk himself has considered trying to oust Keir Starmer as prime minister.

Many are understandably awed by his technological ideas. But he uses these to garner government subsidies and undermine public infrastructure, which he then is able to wield as a weapon to suit his latest whims.

Musk is already far too powerful in the world. His actions in the White House have shown he is either unable to understand or entirely uninterested in the concerns and challenges that face people living on sums that to him seem negligible. He is even less interested in – and often actively opposes – social justice, fairness, and equity. No amount of separation between him and Tesla EMEA will be sufficient to counter his control of and influence over his company. Tesla’s board, just weeks ago, voted to award him $30 billion in shares to “energise and focus” him.

Please do not grant him a foothold in Britain’s public infrastructure. Whatever his company is planning, it does not have British interests at heart.

Ofgem is accepting public comments on Tesla’s application until close of business on Friday, August 22, 2025.

Illustration: Artist Dominic Wilcox’s Stained Glass Driverless Sleeper Car.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Nephology

For an hour yesterday (June 5, 2025), we were treated to the spectacle of the US House Judiciary Committee, both Republicans and Democrats, listening – really listening, it seemed – to four experts defending strong encryption. The four: technical expert Susan Landau and lawyers Caroline Wilson-Palow, Richard Salgado, and Gregory Nojeim.

The occasion was a hearing on the operation of the Clarifying Lawful Overseas Use of Data Act (2018), better known as the CLOUD Act. It was framed as collecting testimony on “foreign influence on Americans’ data”. More precisely, the inciting incident was a February 2025 Washington Post article revealing that the UK’s Home Office had issued Apple with a secret demand that it provide backdoor law enforcement access to user data stored using the Advanced Data Protection encryption feature it offers for iCloud. This type of demand, issued under S253 of the Investigatory Powers Act (2016), is known as a “technical capability notice”, and disclosing its existence is a crime.

The four were clear, unambiguous, and concise, incorporating the main points made repeatedly over the last 35 years. Backdoors, they all agreed, imperil everyone’s security; there is no such thing as a hole only “good guys” can use. Landau invoked Salt Typhoon and, without ever saying “I warned you at the time”, reminded lawmakers that the holes in the telecommunications infrastructure that they mandated in 1994 became a cybersecurity nightmare in 2024. All four agreed that with so much data being generated by all of us every day, encryption is a matter of both national security and privacy. Referencing the FBI’s frequent claim that its investigations are going dark because of encryption, Nojeim dissented: “This is the golden age of surveillance.”

The lawyers jointly warned that other countries such as Canada and Australia have similar provisions in national legislation that they could similarly invoke. They made sensible suggestions for updating the CLOUD Act to set higher standards for nations signing up to data sharing: set criteria for laws and practices that they must meet; set criteria for what orders can and cannot do; and specify additional elements countries must include. The Act could be amended to include protecting encryption, on which it is currently silent.

The lawmakers reserved particular outrage for the UK’s audacity in demanding that Apple provide that backdoor access for *all* users worldwide. In other words, *Americans*.

Within the UK, a lot has happened since that February article. Privacy advocates and other civil liberties campaigners spoke up in defense of encryption. Apple soon withdrew ADP in the UK. In early March, the UK government and security services removed advice to use Apple encryption from their websites – a responsible move, but indicative of the risks Apple was being told to impose on its users. A closed-to-the-public hearing was scheduled for March 14. Shortly before it, Privacy International, Liberty, and two individual claimants filed a complaint with the Investigatory Powers Tribunal asking for the hearing to be held in public and disputing the lawfulness, necessity, and secrecy of TCNs in general. Separately, Apple appealed against the TCN.

On April 7, the IPT released a public judgment summarizing the more detailed ruling it provided only to the UK government and Apple. Short version: it rejected the government’s claim that disclosing the basic details of the case will harm the public interest. Both this case and Apple’s appeal continue.

As far as the US is concerned, however, that’s all background noise. The UK’s claim to be able to compel the company to provide backdoor access worldwide seems to have taken Congress by surprise, but a day like this has been on its way ever since 2014, when the UK included extraterritorial power in the Data Retention and Investigatory Powers Act (2014). At the time, no one could imagine how they would enforce this novel claim, but it was clearly something other governments were going to want, too.

This Judiciary Committee hearing was therefore a festival of ironies. For one thing, the US’s own current administration is hatching plans to merge government departments’ carefully separated databases into one giant profiling machine for US citizens. Second, the US has always regarded foreigners as less deserving of human rights than its own citizens; the notion that another country similarly privileges itself went down hard.

More germane, subsidiaries of US companies remain subject to the PATRIOT Act, under which, as the late Caspar Bowden pointed out long ago, the US claims the right to compel them to hand over foreign users’ data. The CLOUD Act itself was passed in response to Microsoft’s refusal to violate Irish data protection law by fulfilling a New York district judge’s warrant for data relating to an Irish user. US intelligence access to European users’ data under the PATRIOT Act has been the big sticking point that activist lawyer Max Schrems has used to scuttle a succession of US-EU data sharing arrangements under GDPR. Another may follow soon: in January, the incoming Trump administration fired most of the Privacy and Civil Liberties Oversight Board tasked with protecting Europeans’ rights under the latest such deal.

But, no mind. Feast, for a moment, on the thought of US lawmakers hearing, and possibly willing to believe, that encryption is a necessity that needs protection.

Illustrations: Gregory Nojeim, Richard Salgado, Caroline Wilson-Palow, and Susan Landau facing the Judiciary Committee on June 5, 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Unsafe

The riskiest system is the one you *think* you can trust. Put that in terms of encryption: the least secure encryption is encryption that has unknown flaws. Because, believing your communication or data is protected, you feel it’s safe to indulge in what in other contexts would be obviously risky behavior. Think of it like an unseen hole in a condom.

This has always been the most dangerous aspect of the UK government’s insistence that its technical capability notices remain secret. Whoever alerted the Washington Post to the notice Apple received a month ago, commanding it to weaken its Advanced Data Protection, performed an important public service. Now, Carly Page reports at TechCrunch, based on a blog posting by security expert Alec Muffett, that the UK government is recognizing that principle by quietly removing from its web pages advice – directed at people whose communications are at high risk, such as barristers and other legal professionals – to use that same encryption. Apple has since withdrawn ADP in the UK.

More important long-term, at the Financial Times, Tim Bradshaw and Lucy Fisher report that Apple has appealed the government’s order to the Investigatory Powers Tribunal. This will be, as the FT notes, the first time government powers under the Investigatory Powers Act (2016) to compel the weakening of security features will be tested in court. A ruling that the order was unlawful could be an important milestone in the seemingly interminable fight over encryption.

***

I’ve long had the habit of doing minor corrections on Wikipedia – fixing typos, improving syntax – as I find them in the ordinary course of research. But recently I have had occasion to create a couple of new pages, with the gratefully received assistance of a highly experienced Wikipedian. At one time, I’m sure this was a matter of typing a little text, garlanding it with a few bits of code, and garnishing it with the odd reference, but standards have been rising all along, and now if you want your newly created page to stay up it needs a cited reference for every statement of fact, at a minimum one per sentence. My modest pages had ten to twenty references, some servicing multiple items. Embedding the page in the rest of the encyclopedia matters, too, so you need to link mentions of your subject on related pages to the new one. Even then, some review editor may come along and delete the page if they think the subject is not notable enough or violates someone’s copyright. You can appeal, of course…and fix whatever they’ve said the problem is.

It should be easier!

All of this detailed work is done by volunteers, who discuss the decisions they make in full view on the talk page associated with every content page. Studying the more detailed talk pages is a great way to understand how the encyclopedia, and knowledge in general, is curated.

Granted, Wikipedia is not perfect. Its policy on primary sources can be frustrating, and errors in cited secondary sources can be difficult to correct. The culture can be hostile if you misstep. Its coverage is uneven. But, as Margaret Talbot reports at the New Yorker and Amy Bruckman writes in her 2022 book, Should You Believe Wikipedia?, all those issues are fully documented.

Early on, Wikipedia was often the butt of complaints from people angry that this free encyclopedia made by *amateurs* threatened the sustainability of Encyclopaedia Britannica (which has survived, though much changed). Today, it’s under attack by Elon Musk and the Heritage Foundation, as Lila Shroff writes at The Atlantic. The biggest danger isn’t to Wikipedia’s funding; there’s no offer anyone can make that would lead to a sale. The bigger vulnerability is the safety of individual editors. Scolds they may be, but as a collective they do important work to ensure that facts continue to matter.

***

Firefox users are expressing more and more unhappiness about the direction Mozilla is taking. The open source browser’s historic importance is outsized compared to its worldwide market share, which as of February 2025 is 2.63%, according to Statcounter. A long tail of other browsers is based on it, such as LibreWolf, Waterfox, and the privacy-protecting Tor.

The latest complaint, as Liam Proven and Thomas Claburn write at The Register, is that Mozilla has removed its commitment not to sell user data from Firefox’s terms and conditions and privacy policy. Mozilla responded that the company doesn’t sell user data “in the way that most people think about ‘selling data'” but needed to change the language because of jurisdictional variations in what the word “sell” means. Still, the promise is gone.

This follows Mozilla’s September 2024 decision, reported by Richard Speed at The Register, to turn on by default a “privacy-preserving feature” to track users, which led the NGO noyb to file a complaint with the Austrian data protection authority. And a month ago, Mark Hachman reported at PC World that Mozilla is building access to third-party generative AI chatbots into Firefox, and there are reports that it’s adding “AI-powered tab grouping”.

All of these are basically unwelcome, and of all organizations Mozilla should have been able to foresee that. Go away, AI.

***

Molly White is expertly covering the Trump administration’s proposed “US Crypto Reserve”. It remains only to add Rachel Maddow, who compared it to having a strategic reserve of Beanie Babies.

Illustrations: Beanie baby pelican.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Isolate

Yesterday, the Global Encryption Coalition published a joint letter calling on the UK to rescind its demand that Apple undermine (“backdoor”) the end-to-end encryption on its services. The Internet Society is taking signatures until February 20.

The background: on February 7, Joseph Menn reported at the Washington Post (followed by Dominic Preston at The Verge) that in January the office of the Home Secretary sent Apple a technical capability notice under the Investigatory Powers Act (2016) ordering it to provide access to content that anyone anywhere in the world has uploaded to iCloud and encrypted with Apple’s Advanced Data Protection.

Technical capability notices are supposed to be secret. It’s a criminal offense to reveal that you’ve been sent one. Apple can’t even tell users that their data may be compromised. (This kind of thing is why people publish warrant canaries.) Menn notes that even if Apple withdraws ADP in the UK, British authorities will still demand access to encrypted data everywhere *else*. So it appears that if the Home Office doesn’t back down and Apple is unwilling to cripple its encryption, the company will either have to withdraw ADP across the world or exit the UK market entirely. At his Odds and Ends of History blog, James O’Malley calls the UK’s demand stupid, counter-productive, and unworkable. At TechRadar, Chiara Castro asks who’s next, and quotes Big Brother Watch director Silkie Carlo: “unprecedented for a government in any democracy”.

When the UK first began demanding extraterritorial jurisdiction for its interception rules, most people wondered how the country thought it would be able to impose it. That was 11 years ago; it was one of the new powers codified in the Data Retention and Investigatory Powers Act (2014) and kept in its replacement, the IPA in 2016.

Governments haven’t changed – they’ve been trying to undermine strong encryption in the hands of the masses since 1991, when Phil Zimmermann launched PGP – but the technology has, as Graham Smith recounted at Ars Technica in 2017. Smartphones are everywhere. People store their whole lives on them, and giant technology companies encrypt both the devices themselves and the cloud backups. Government demands have changed to reflect that, from focusing on the individual with key escrow and key lengths to focusing on the technology provider with client-side scanning, encrypted messaging (see also the EU), and now cloud storage.

At one time, a government could install a secret wiretap by making a deal with a legacy telco. The Internet’s proliferation of communications providers changed that for a while. During the resulting panic the US passed the Communications Assistance for Law Enforcement Act (1994), which requires Internet service providers and telecommunications companies to install wiretap-ready equipment – originally for telephone calls, later broadband and VOIP traffic as well.

This is where the UK government’s refusal to learn from others’ mistakes is staggering. Just four months ago, the US discovered Salt Typhoon, a giant Chinese hack into its core telecommunications networks that was specifically facilitated by…by…CALEA. To repeat: there is no such thing as a magic hole that only “good guys” can use. If you undermine everyone’s privacy and security to facilitate law enforcement, you will get an insecure world where everyone is vulnerable. The hack has led US authorities to promote encrypted messaging.

Joseph Cox’s recent book, Dark Wire, touches on this. It’s a worked example of what law enforcement internationally can do if given open access to all the messages criminals send across a network when they think they are operating in complete safety. Yes, the results were impressive: hundreds of arrests, dozens of tons of drugs seized, masses of firearms impounded. But, Cox writes, all that success was merely a rounding error in the global drug trade. Universal loss of privacy and security versus a rounding error: it’s the definition of “disproportionate”.

It remains to be seen what Apple decides to do and whether we can trust what the company tells us. At his blog, Alec Muffett is collecting ongoing coverage of events. The Future of Privacy Forum celebrated Safer Internet Day, February 11, with an infographic showing how encryption protects children and teens.

But set aside for a moment all the usual arguments about encryption, which really haven’t changed in over 30 years because mathematical reality hasn’t.

In the wider context, Britain risks making itself a technological backwater. First, there’s the backdoored encryption demand, which threatens every encrypted service. Second, there’s the impact of the onrushing Online Safety Act, which comes into force in March. Ofcom, the regulator charged with enforcing it, is issuing thousands of pages of guidance that make it plain that only large platforms will have the resources to comply. Small sites, whether businesses, volunteer-run Fediverse instances, blogs, established communities, or web boards, will struggle even if Ofcom starts to do a better job of helping them understand their legal obligations. Many will likely either shut down or exit the UK, leaving the British Internet poorer and more isolated as a result. Ofcom seems to see this as success.

It’s not hard to predict the outcome if these laws converge in the worst possible timeline: a second Brexit, this one online.

Illustrations: T-shirt (gift from Jen Persson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: Dark Wire

Dark Wire
by Joseph Cox
PublicAffairs (Hachette Group)
ISBNs: 9781541702691 (hardcover), 9781541702714 (ebook)

One of the basic principles that emerged as soon as encryption software became available to ordinary individuals on home computers was this: everyone should encrypt their email so the people who really need the protection don’t stick out as targets. At the same time, the authorities were constantly warning that if encryption weren’t controlled by key escrow, an implanted back door, or restrictions on its strength, it would help hide the activities of drug traffickers, organized crime, pedophiles, and terrorists. The same argument continues today.

Today, billions of people have access to encrypted messaging via WhatsApp, Signal, and other services. Governments still hate it, but they *use* it; the UK government is all over WhatsApp, as multiple public inquiries have shown.

In Dark Wire: The Incredible True Story of the Largest Sting Operation Ever, Joseph Cox, one of the four founders of 404 Media, takes us on a trip through law enforcement’s adventures in encryption, as police try to identify and track down serious criminals making and distributing illegal drugs by the ton.

The story begins with Phantom Secure, a scheme that stripped down Blackberry devices, installed PGP to encrypt emails, and added systems to ensure the devices could exchange emails only with other Phantom Secure devices. The service became popular among all sorts of celebrities, politicians, and other non-criminals who value privacy – but not *only* them. All perfectly legal.

One of my favorite moments comes early, when a criminal debating whether to trust a new contact decides he can because the contact has one of these secure Blackberries. The criminal trusted the supply chain; surely no one would have sold the man one of these things without thoroughly checking that he wasn’t a cop. Spoiler alert: he was a cop. That sale helped said cop and his colleagues in the United States, Australia, Canada, and the Netherlands infiltrate the network, arrest a bunch of criminals, and shut it down – eventually, after setbacks, and with the non-US forces frustrated and amazed by US Constitutional law limiting what agents were allowed to look at.

Phantom Secure’s closure made a hole in the market while security-conscious criminals scrambled to find alternatives. It was rapidly filled by competitors working with modified phones: Encrochat and Sky ECC. As users migrated to these services and law enforcement worked to infiltrate and shut them down as well, former Phantom Secure salesman “Afgoo” had a bright idea, which he offered to the FBI: why not build their own encrypted app and take over the market?

The result was Anom. From the sounds of it, some of its features were quite cool. For example, the app itself hid behind an innocent-looking calculator, which acted as a password gateway. Type in the right code, and the messaging app appeared. The thing sold itself.

Of course, the FBI insisted on some modifications. Behind the scenes, Anom devices sent copies of every message to the FBI’s servers. Eventually, the floods of data the agencies harvested this way led to 500 arrests on one day alone, and the seizure of hundreds of firearms and dozens of tons of illegal drugs and precursor chemicals.

Some of the techniques the criminals use are fascinating in their own right. One method of in-person authentication involved using the unique serial number on a bank note, sending it in advance; the mule delivering the money would simply have to show they had the bank note, a physical one-time pad. Banks themselves were rarely used. Instead, cash would be stored in safe houses in various countries and the money would never have to cross borders. So: no records, no transfers to monitor. All of this spilled open for law enforcement because of Anom.

And yet. Cox waits until the end to voice reservations. All those seizures and arrests barely made a dent in the world’s drug trade – a “rounding error”, Cox calls it.

Pass the password

So this week I had to provide a new password for an online banking account. This is always fraught, even if you use a password manager. Half the time whatever you pick fails the site’s (often hidden) requirements – you didn’t stand on your head and spit a nickel, or they don’t allow llamas, or some other damn thing. This time, the specified failures startled me: it was too long, and special characters and spaces were not allowed. This is their *new* site, created in 2022.

They would have done well to wait for the just-released proposed password guidelines from the US National Institute of Standards and Technology. Most of these rules should have been standard long ago – like only requiring users to change passwords if there’s a breach. We can, however, hope they will be adopted now and set a consistent standard. To date, the fact that everyone has a different set of restrictions means that no secure strategy is universally valid – neither the strong passwords software generates nor the three random words the UK’s National Cyber Security Centre has long recommended.

The banking site we began with fails at least four of the nine proposed new rules: the minimum password length (six) is too short – it should be eight, preferably 15; all Unicode characters and printing ASCII characters should be acceptable, including spaces; maximum length should be at least 64 characters (the site says 16); and there should be no composition rules mandating upper and lower case, numerals, and so on (which the site does). At least the site’s rules mean it won’t invisibly truncate the password, leaving you guessing how much of it was actually recorded. On the plus side, a little surprisingly, the site did require me to choose my own password hint question. The fact that most sites use the same limited set of questions opens the way for reused answers across the web, effectively undoing the good of not reusing passwords in the first place.
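Out of curiosity, here is roughly what those proposed rules boil down to in code. This is a minimal sketch of my own, not NIST’s text or the bank’s actual logic; the function name and exact limits are chosen purely for illustration.

```python
# Sketch of a password check along the lines of NIST's proposed rules.
# Illustrative only: the names and exact limits here are assumptions.

import unicodedata

MIN_LENGTH = 8    # proposed minimum; 15 is preferred
MAX_LENGTH = 64   # verifiers should accept at least this many characters

def acceptable_password(password: str) -> tuple[bool, str]:
    # Normalize Unicode so the same visible string is always treated the same way.
    password = unicodedata.normalize("NFKC", password)

    # Length is the only composition rule: no forced mix of cases, digits,
    # or special characters, and spaces are allowed.
    if len(password) < MIN_LENGTH:
        return False, f"must be at least {MIN_LENGTH} characters"
    if len(password) > MAX_LENGTH:
        return False, f"must be at most {MAX_LENGTH} characters"

    # Allow any printing character, including non-ASCII; reject only
    # control and other non-printing characters.
    if any(unicodedata.category(ch).startswith("C") for ch in password):
        return False, "non-printing characters are not allowed"

    # A real verifier would also screen against a blocklist of breached or
    # commonly used passwords here, another of the proposed requirements.
    return True, "ok"

if __name__ == "__main__":
    for candidate in ["horse battery staple correct", "Tr0ub4dor&3", "short"]:
        print(candidate, "->", acceptable_password(candidate))
```

Length and a breached-password blocklist do all the work; there is deliberately no requirement to stand on your head and spit a nickel.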

This is another of those cases where the future was better in the past: for at least 20 years passwords have been about to be superseded by…something – smart cards, dongles, biometrics, picklists and typing patterns, images, and lately passkeys and authenticator apps. All have problems limiting their reach. Single sign-ons are still offered by Facebook, Google, and others, but the privacy risks are (I hope) widely understood. The “this is the next password” crowd have always underestimated the complexity of replacing deeply embedded habits. We all hate passwords – but they are simple to understand and work across multiple devices. And we’re used to them. Encryption didn’t succeed with mainstream users until it became invisibly embedded inside popular services. I’ll guess that password replacements will only succeed if they are similarly invisible. Most are too complicated.

In all these years, despite some improvements, passwords have only gotten more frustrating. Most rules are still devised with desktop/laptop computers in mind – and yield passwords that are impossible to type on phones. No one takes a broad view of the many contexts in which we might have to enter passwords. Outmoded threat models are everywhere. Decades after the cybercafe era, sites, operating systems, and other software are still coded as if shoulder surfing were an important threat model. So we end up with sillinesses like struggling to type masked wifi passwords while sitting in coffee shops where they’re prominently displayed, and masks for one-time codes sent for two-factor authentication. So you fire up a text editor to check your typing and then copy and paste…

Meanwhile, the number of passwords we have to manage continues to escalate. In a recent extended Mastodon thread, digital services expert Jonathan Kamens documented his adventures updating the email addresses attached to 1,200-plus accounts. He explained: using a password manager makes it easy to create and forget new accounts by the hundred.

In a 2017 presentation, Columbia professor Steve Bellovin provides a rethink of all this, much of it drawn from his 2016 book Thinking Security. Most of what we think we know about good password security, he argues, is based on outmoded threat models and assumptions – outmoded because the computing world has changed, but also because attackers have adapted to those security practices. Even the focus on creating *strong* passwords is outdated: password strength is entirely irrelevant to modern phishing attacks, to compromised servers and client hosts, and to attackers who can intercept communications or control a machine at either end (for example, via a keylogger). A lot depends on whether you’re a high-risk individual likely to be targeted personally by a well-resourced attacker, a political target, or, like most of us, a random target for ordinary criminals.

The most important thing, Bellovin says, is not to reuse passwords, so that if a site is cracked the damage is contained to that one site. Securing the email address used to reset passwords is also crucial. To that end, it can be helpful to use a separate and unpublished email address for the most sensitive sites and services – anything to do with money or health, for example.

So NIST’s new rules make some real sense, and if organizations can be persuaded to adopt them consistently all our lives might be a little bit less fraught. NIST is accepting comments through October 7.

Illustrations: XKCD on password strength.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Microsoft can remember it for you wholesale

A new theory: somewhere in the Silicon Valley universe there’s a cadre of techies who have eidetic memories and they’re feeling them start to slip. Panic time.

That’s my best explanation for Microsoft’s latest wheeze, a new feature for its Copilot assistant that will take what’s variously called a “snapshot” or a “screenshot” of your computer (all three monitors?) every five seconds and store it for future reference. Microsoft hasn’t explained much about Recall’s inner technical workings, but according to the announcement, the data will be stored locally and will be searchable via semantic associations and some sort of “AI”. Microsoft also says the data will not be used to train AI models.

The general anger and dismay at this plan brings back, almost nostalgically, memories of the 1990s, when Microsoft was near-universally hated as the evil monopolist dominating computing. In 2008, when Google was ten years old, a BBC presenter asked me if I thought Google would ever be hated as much as Microsoft was (not then, no). In 2012, veteran journalist Charles Arthur published the book Digital Wars about how Microsoft had stagnated and lost its lead. And then suddenly, in the last few years, it’s back on top.

Possibilities occur that Microsoft doesn’t mention. For example: could software be embedded into Windows to draw inferences from the data Recall saves? And could those inferences be forwarded to the company or used to target you with ads? That seems like a far more efficient way to invade users’ privacy than copying the data itself, if that’s what the company ultimately wants to do.

Lots of things on our computers already retain a “memory” of what we’ve been doing. Operating systems generate logs to help debug problems. Word processors retain a changelog, which powers the ability to undo mistakes. Web browsers have user-configurable histories; email software has archives; media players retain playlists. All of those are useful – but part of that usefulness is that they are contextual, limited, and either easily terminated by closing the relevant application or relatively easily edited to remove items that shouldn’t be kept.

It’s hard for almost everyone who isn’t Microsoft to understand the point of keeping everything by default. It seems like a feature only developers could love. I certainly would like Windows to be better at searching for stored files or my (Firefox) browser to be better at reloading that article I was reading yesterday. I have even longed for a personal version of Vannevar Bush’s Memex. As part of that, I might welcome a feature that let me hit a button to record the last five useful minutes of a meeting, or save a social media post to a local archive. But the key to that sort of memory expansion is curation, not remembering everything promiscuously. For most people, selective forgetting is how we survive the torrents of irrelevance hurled at us every day.

What Recall sounds most like is the lifelog science fiction writer Charlie Stross imagined in 2007 might be our future. Plummeting storage costs and expanding capacity, he reasoned, would make it possible to store *everything* in your pocket. Even then, there were (a very few) people doing that sort of thing, most notably Steve Mann, a University of Toronto professor who started wearing devices to comprehensively capture his life as a 1990s graduate student. Over the years, Mann has shrunk his personal gadget array from a laptop and peripherals to glasses and pocket devices. Many more people capture their surroundings now – but they do it on their phones. If Apple or Google were proposing a Recall feature for iOS or Android, the idea would seem a lot less weird.

The real issue is that there are many people who would like to be able to know what someone *else* has been doing on their computer at all times. Helicopter parents. Schools and teachers under government compulsion (see for example Prevent (PDF)). Employers. Border guards. Corporate spies. The Department of Work and Pensions. Authoritarian governments. Law enforcement and security agencies. Criminals. Domestic abusers… So developing any feature like this must include considering how to protect it against these threats. This does not appear to have happened.

Many others have written about the privacy issues in all this – the UK’s Information Commissioner’s Office is already investigating. At The Register, Richard Speed does a particularly good job of looking at some of the fine details. On Mastodon, Kevin Beaumont says inspection of the Copilot+ software suggests that Recall stores the text it extracts from all those snapshots in an easily copiable SQLite database.
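To see why “easily copiable” matters, consider that any code running with ordinary user privileges could, in principle, copy and query such a file using a few lines of standard-library Python. The sketch below is purely hypothetical: the path, table, and column names are stand-ins I’ve invented, since Recall’s real schema isn’t documented here.

```python
# Hypothetical sketch: how trivially a local SQLite store could be copied and
# read by anything running as the logged-in user. Path and schema are invented.

import shutil
import sqlite3
from pathlib import Path

SOURCE = Path.home() / "AppData/Local/HypotheticalRecallStore/snapshots.db"
COPY = Path("copied_snapshots.db")

def dump_snapshot_text(source: Path, copy: Path) -> list[str]:
    # Copy first, so the read works even if the original file is in use.
    shutil.copy2(source, copy)
    rows = []
    with sqlite3.connect(copy) as conn:
        # Hypothetical table holding the text extracted from each snapshot.
        for captured_at, text in conn.execute(
            "SELECT captured_at, extracted_text FROM snapshots ORDER BY captured_at"
        ):
            rows.append(f"{captured_at}: {text}")
    return rows

if __name__ == "__main__":
    if SOURCE.exists():
        for line in dump_snapshot_text(SOURCE, COPY):
            print(line)
```

No exploit, no elevation of privilege: anything that can run as you can read what the database remembers about you.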

But there’s still more. The kind of archive Recall appears to construct can teach an attacker how the target thinks: not just what passwords they choose but how they devise them. Those patterns can be highly valuable. Granted, few targets are worth that level of attention, but it happens, as Peter Davies, a technical director at Thales e-Security, has often warned.

Recall is not the only move – see also flawed-AI-with-everything – that suggests that the computer industry, like some politicians and governments, is badly losing touch with the public. Increasingly, what they want to do seems unrelated to what the rest of us want. If they think things like Recall are a good idea they need to read more Philip K. Dick. And then don’t invent the Torment Nexus.

Illustrations: Arnold Schwarzenegger seeking better memories in the 1990 film Total Recall.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.