The panopticon in your home

In a series of stories, Lisa O’Carroll at the Guardian finds that His Majesty’s Revenue and Customs has had its hand in the cookie jar of airline passenger records. In hot pursuit of its goal of finding £350 million in benefit fraud, it’s been scouring these records to find people who have left the country for more than a month and not returned, and so are no longer eligible.

In one case, a family was turned away at the gate when one of the children had an epileptic seizure; their child benefit was stopped because they had “emigrated”, though they’d never left. A similar accusation was leveled at a woman who booked a flight to Oslo even though she never checked in or flew.

These families can provide documentation proving they remained in the UK, but as one points out, the onus is on them to clean up an error they didn’t make. There are many others; often the family simply traveled out and back by different routes. As of November 1, HMRC had reinstated benefits for 1,979 of the affected families, but it sticks to its belief that the rest have been correctly identified. HMRC also says it will now check its PAYE records first for evidence that someone is still here and working. That would help, but it’s not the only issue.

It’s unclear whether HMRC has the right to use this data in this way. The Guardian reports that the Information Commissioner’s Office, the data protection authority, has contacted HMRC to ask questions.

For privacy advocates, the case is disturbing. It is a clear example of the way data can mislead when it’s moved to a new context. For the people involved, it’s a hostage situation: they had no choice about providing either the data the Home Office siphons from airlines or the financial information held by HMRC, and they have no control over what happens next.

The essayist and former software engineer Ellen Ullman warned 20 years ago that she had never seen an owner of multiple databases who didn’t want to link them together. So this sort of “sharing” is happening all over the place.

In the US, ProPublica reported this week that individual states have begun checking their voter rolls for non-citizens using a system provided by the Department of Homeland Security that incorporates information from the Social Security Administration. Here again, data collected by one agency for one purpose is being shared with another for an entirely different one.

In both cases, data is being used for a purpose that wasn’t envisioned when it was collected. An airline collecting booking data isn’t checking it for errors or omissions that might cost a passenger their benefits. Similarly, the Social Security Administration isn’t normally concerned with whether you’re a citizen for voting purposes, just whether you qualify for one or another program – as it should be. Both changes of use fail to recognize how the impact of errors changes along with them, especially at national scale.

I assume that in this age of AI-for-government-efficiency the goal for the future is to automate these systems even further while pulling in more sources of data.

Privacy advocates are used to encountering pushback that takes this form: “They know everything about me anyway.” I would dispute that. “They” certainly *can* assemble a lot of otherwise uncorrelated data points about you by aggregating the many available sources of data. But until recently, doing that was effortful enough that it didn’t happen unless you were suspected of something. Now, we’re talking about data sharing and mining at scale as a matter of routine.

***

One of the most important lessons learned from 14 years of We, Robot conferences is that when someone shows a video clip of a robot doing something, one should always ask how much it’s been speeded up.

This probably matters less in a home robot doing chores, as long as you don’t have to supervise. Leave a robot to fold laundry, and it can’t possibly matter if it takes all night.

From reports by Erik Kain at Forbes and Nilesh Christopher at the LA Times, it appears that 1X’s new Neo robot is indeed slow, even in its promotional video clips. The company says it has layers of security to prevent it from turning “murderous”, which seems an absurd bit of customer reassurance. However, 1X also calls it “lightweight”. The Neo is five foot six and weighs 66 pounds (30 kilos), which seems quite enough to hurt someone if it falls on them, even with padding. Granting the contributory design issues, Lime bikes weigh 50 pounds and break people’s legs. 1X’s website shows the Neo hugged by an avuncular taller man; imagine it instead with a five-foot 90-year-old woman.

Can we ask about hacking risks? And what happens if, like so many others, 1X shuts it down?

More incredibly, in buying one you must agree to allow a remote human operator to drive the robot, along the way peering into your home. This is close to the original design of the panopticon, which chilled because those under surveillance never knew whether they were being watched.

And it can be yours for the low, low price of $20,000 or $500 a month.

Illustrations: Jeremy Bentham’s original drawing of his design for the panopticon (via Wikimedia).

Also this week:
The Plutopia podcast interviews Sophie Nightingale on her research into deepfakes and the future of disinformation.
TechGrumps 3.33 podcast, The Final Step is Removing the Consumer, discusses AI web browsers, the Amazon outage, the Python Foundation, and DEI.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Magic math balls

So many ironies, so little time. According to the Financial Times (and syndicated at Ars Technica), the US government, which itself has traditionally demanded law enforcement access to encrypted messages and data, is pushing the UK to drop its demand that Apple weaken its encryption. Normally, you want to say, Look here, countries are entitled to have their own laws whether the US likes it or not. But this is not a law we like!

This all began in February, when the Washington Post reported that the UK’s Home Office had issued Apple with a Technical Capability Notice. Issued under the Investigatory Powers Act (2016) and supposed to be kept secret, the TCN demanded that Apple undermine the end-to-end encryption used for iCloud’s Advanced Data Protection feature. Much protest ensued, followed by two legal cases in front of the Investigatory Powers Tribunal, one brought by Apple, the other by Privacy International and Liberty. WhatsApp has joined Apple’s legal challenge.

Meanwhile, Apple withdrew ADP in the UK. Some people argued this didn’t really matter, as few used it, which I’d call a failure of user experience design rather than an indication that people didn’t care about it. More of us saw it as setting a dangerous precedent for both encryption and the use of secret notices undermining cybersecurity.

The secrecy of TCNs is clearly wrong and presents a moral hazard for governments, which may prefer to keep vulnerabilities secret so they can exploit them for surveillance. Hopefully, the Tribunal will eventually agree and force a change in the law. The Foundation for Information Policy Research (obDisclosure: I’m a FIPR board member) has published a statement explaining the issues.

According to the Financial Times, the US government is applying a sufficiently potent threat of tariffs to lead the UK government to mull how to back down. Even without that particular threat, it’s not clear how much the UK can resist. As Angus Hanton documented last year in the book Vassal State, the US has many well-established ways of exerting its influence here. And the vectors are growing; Keir Starmer’s Labour government seems intent on embedding US technology and companies into the heart of government infrastructure despite the obvious and increasing risks of doing so. When I read Hanton’s book earlier this year, I thought remaining in the EU might have provided some protection, but Caroline Donnelly warns at Computer Weekly that the EU, too, is becoming dangerously dependent on US technology, specifically Microsoft.

It’s tempting to blame everything on the present administration, but the reality is that the US has long used trade policy and treaties to push other countries into adopting laws regardless of their citizens’ preferences.

***

As if things couldn’t get any more surreal, this week the Trump administration *also* issued an executive order banning “woke AI” in the federal government. AI models are in future supposed to be “politically neutral”. So, as Kevin Roose writes at the New York Times, the culture wars are coming for AI.

The US president is accusing chatbots of “Marxist lunacy”, while the rest of the world calls them inaccurate, biased toward repeating and expanding historical prejudices, and inconsistent. We hear plenty about chatbots adopting Nazi tropes; I haven’t heard of one promoting workers’ and migrants’ rights.

If we know one thing about AI models, it’s that they’re full of crap all the way down. The big problem is that people are deploying them anyway. At the Canary, Steve Topple reports that the UK’s Department for Work and Pensions admits in a newly-published report that its algorithm for assessing whether benefit claimants might commit fraud is ageist and racist. A helpful executive order would set must-meet standards for *accuracy*. But we do not live in those times.

The Guardian reports that two more Trump EOs expedite building new data centers, promote exports of American AI models, expand the use of AI in the federal government, and intend to solidify US dominance in the field. Oh, and Trump would really like it if people would stop calling it “artificial” and find a new name. Seven years ago, “aspirational intelligence” seemed like a good idea. But that was back when we heard a lot about incorporating ethics. So…”magic math ball”?

These days, development seems to proceed ethics-free. DWP’s report, for example, advocates retraining its flawed algorithm but says continuing to operate it is “reasonable and proportionate”. In 2021, for European Digital Rights Initiative, Agathe Balayn and Seda Gürses found, “Debiasing locates the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design, dominated by commercial actors.” In other words, no matter what you think is “neutral”, training data, model, and algorithms are only as “neutral” as their wider context allows them to be.

Meanwhile, there’s nothing to curb the escalating waste. At 404 Media, Emanuel Maiberg finds that Spotify is publishing AI-generated songs from dead artists without anyone’s permission. On Monday, MSNBC’s Rachel Maddow told viewers that there’s so much “AI slop” about her that they’ve posted Is That Really Rachel? to catalog and debunk the fakes.

As Ed Zitron writes, the opportunity costs are enormous.

In the UK, the US, and many other places, data centers are threatening the water supply.

But sure, let’s make more of that.

Illustrations: Magic 8 ball toy (via frankieleon at Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her website has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Negative externalities

A sheriff’s office in Texas searched a giant nationwide database of license plate numbers captured by automatic cameras to look for a woman they suspected of self-managing an abortion. As Rindala Alajazi writes at EFF, that’s 83,000 cameras in 6,809 networks belonging to Flock Safety, many of them in states where abortion is legal or protected as a fundamental right until viability.

We’ve known something like this was coming ever since 2022, when the US Supreme Court overturned Roe v. Wade and returned the power to regulate abortion to the individual US states. The resulting unevenness made it predictable that the strongest opponents to legal abortion would turn their attention to interstate travel.

The Electronic Frontier Foundation has been warning for some time about Flock’s database of camera-captured license plates. Recently, Jason Koebler reported at 404 Media that US Immigration and Customs Enforcement has been using Flock’s database to find prospects for deportation. Since ICE does not itself have a contract with Flock, it’s been getting local law enforcement to perform searches on its behalf. “Local” refers only to the law enforcement personnel; they have access to camera data that’s shared nationally.

The point is that once the data has been collected it’s very hard to stop mission creep. On its website, Flock says its technology is intended to “solve and eliminate crime” and “protect your community”. That might have worked when we all agreed what was a crime.

***

A new MCTD Cambridge report makes a similar point about menstrual data sold at scale. Now, I’m from the generation that managed fertility with a paper calendar, but time has moved on, and fertility-tracking apps allow a lot more of the self-quantification that can be helpful in many situations. As Stephanie Felsberger writes in introducing the report, menstrual data is highly revealing of all sorts of sensitive information. Privacy International has studied period-tracking apps and found that they’ve improved but still pose serious privacy risks.

On the other hand, I’m not so sure about the MCTD report’s third recommendation – that government build a public tracker app within the NHS. The UK doesn’t have anything like the kind of divisive rhetoric around abortion that the US does, but the fact remains that legal abortion is a 1967 carve-out from an 1861 law. In the UK, procuring an abortion is criminal *except* during the first 24 weeks, or if the mother’s life is in danger, or if the fetus has a serious abnormality. And even then, sign-off is required from two doctors.

Investigations and prosecutions of women under that 1861 law have been rising, as Shanti Das reported at the Guardian in January. Pressure in the other direction from US-based anti-choice groups such as the Alliance for Defending Freedom has also been rising. For years it’s seemed like this was a topic no one really wanted to reopen. Now, health care providers are calling for decriminalization, and, as Hannah Al-Oham reported this week, there are two such proposals currently in front of Parliament.

Also relevant: a month ago, Phoebe Davis reported at the Observer that in January the National Police Chiefs’ Council quietly issued guidance advising officers to search homes for drugs that can cause abortions in cases of stillbirths and to seize and examine devices to check Internet searches, messages, and health apps to “establish a woman’s knowledge and intention in relation to the pregnancy”. There was even advice on how to bypass the requirement for a court order to access women’s medical records.

In this context, it’s not clear to me that a publicly owned app is much safer or more private than a commercial one. What’s needed is thoroughly examinable open source code that keeps all data on the device itself, encrypted, in a segregated storage space over which the user has control. And even then…you know, paper had a lot of benefits.

***

This week the UK Parliament passed the Data (Use and Access) bill, which now needs only royal assent to become law. At its site, the Open Rights Group summarizes the worst provisions, mostly a list of ways the bill weakens citizens’ rights over their data.

Brexit was sold to the public on the basis of taking back national sovereignty. But, as then-MEP Felix Reda said the morning after the vote, national sovereignty is a fantasy in a globalized world. Decisions about data privacy can’t be made imagining they are only about *us*.

As ORG notes, the bill has led European Digital Rights to write to the European Commission asking for a review of the UK’s adequacy status. This decision, granted in 2020, was due to expire in June 2025, but the Commission granted a six-month extension to allow the bill’s passage to complete. In 2019, when the UK was at peak Brexit chaos and it seemed possible that the then-Conservative government would allow the UK to leave the EU with no deal in place, net.wars noted the risk to data flows. The current Labour government, with its AI and tech policy ambitions, ought to be more aware of the catastrophe losing adequacy would present. And yet.

Illustrations: Map from the Center for Reproductive Rights showing the current state of abortion rights across the US.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast and a regular guest on the TechGrumps podcast. Follow on Mastodon or Bluesky.

Nephology

For an hour yesterday (June 5, 2025), we were treated to the spectacle of the US House Judiciary Committee, both Republicans and Democrats, listening – really listening, it seemed – to four experts defending strong encryption. The four: technical expert Susan Landau and lawyers Caroline Wilson-Palow, Richard Salgado, and Gregory Nojeim.

The occasion was a hearing on the operation of the Clarifying Lawful Overseas Use of Data Act (2018), better known as the CLOUD Act. It was framed as collecting testimony on “foreign influence on Americans’ data”. More precisely, the inciting incident was a February 2025 Washington Post article revealing that the UK’s Home Office had issued Apple with a secret demand that it provide backdoor law enforcement access to user data stored using the Advanced Data Protection encryption feature it offers for iCloud. This type of demand, issued under S253 of the Investigatory Powers Act (2016), is known as a “technical capability notice”, and disclosing its existence is a crime.

The four were clear, unambiguous, and concise, incorporating the main points made repeatedly over the last 35 years. Backdoors, they all agreed, imperil everyone’s security; there is no such thing as a hole only “good guys” can use. Landau invoked Salt Typhoon and, without ever saying “I warned you at the time”, reminded lawmakers that the holes in the telecommunications infrastructure that they mandated in 1994 became a cybersecurity nightmare in 2024. All four agreed that with so much data being generated by all of us every day, encryption is a matter of national security as well as privacy. Referencing the FBI’s frequent claim that its investigations are going dark because of encryption, Nojeim dissented: “This is the golden age of surveillance.”

The lawyers jointly warned that other countries such as Canada and Australia have similar provisions in national legislation that they could similarly invoke. They made sensible suggestions for updating the CLOUD Act to set higher standards for nations signing up to data sharing: set criteria for laws and practices that they must meet; set criteria for what orders can and cannot do; and specify additional elements countries must include. The Act could be amended to include protecting encryption, on which it is currently silent.

The lawmakers reserved particular outrage for the UK’s audacity in demanding that Apple provide that backdoor access for *all* users worldwide. In other words, *Americans*.

Within the UK, a lot has happened since that February article. Privacy advocates and other civil liberties campaigners spoke up in defense of encryption. Apple soon withdrew ADP in the UK. In early March, the UK government and security services removed advice to use Apple encryption from their websites – a responsible move, but indicative of the risks Apple was being told to impose on its users. A closed-to-the-public hearing was scheduled for March 14. Shortly before it, Privacy International, Liberty, and two individual claimants filed a complaint with the Investigatory Powers Tribunal asking for the hearing to be held in public and disputing the lawfulness, necessity, and secrecy of TCNs in general. Separately, Apple appealed against the TCN.

On April 7, the IPT released a public judgment summarizing the more detailed ruling it provided only to the UK government and Apple. Short version: it rejected the government’s claim that disclosing the basic details of the case will harm the public interest. Both this case and Apple’s appeal continue.

As far as the US is concerned, however, that’s all background noise. The UK’s claim to be able to compel the company to provide backdoor access worldwide seems to have taken Congress by surprise, but a day like this has been on its way ever since the UK included extraterritorial power in the Data Retention and Investigatory Powers Act (2014). At the time, no one could imagine how the UK would enforce this novel claim, but it was clearly something other governments were going to want, too.

This Judiciary Committee hearing was therefore a festival of ironies. For one thing, the US’s own current administration is hatching plans to merge government departments’ carefully separated databases into one giant profiling machine for US citizens. Second, the US has always regarded foreigners as less deserving of human rights than its own citizens; the notion that another country similarly privileges itself went down hard.

More germane, subsidiaries of US companies remain subject to the PATRIOT Act, under which, as the late Caspar Bowden pointed out long ago, the US claims the right to compel them to hand over foreign users’ data. The CLOUD Act itself was passed in response to Microsoft’s refusal to violate Irish data protection law by fulfilling a New York district judge’s warrant for data relating to an Irish user. US intelligence access to European users’ data under the PATRIOT Act has been the big sticking point that activist lawyer Max Schrems has used to scuttle a succession of US-EU data sharing arrangements under GDPR. Another may follow soon: in January, the incoming Trump administration fired most of the Privacy and Civil Liberties Oversight Board tasked to protect Europeans’ rights under the latest such deal.

But, never mind. Feast, for a moment, on the thought of US lawmakers hearing, and possibly willing to believe, that encryption is a necessity that needs protection.

Illustrations: Gregory Nojeim, Richard Salgado, Caroline Wilson-Palow, and Susan Landau facing the Judiciary Committee on June 5, 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Unsafe

The riskiest system is the one you *think* you can trust. Put it in terms of encryption: the least secure encryption is encryption that has unknown flaws. Because, in the belief that your communication or data is protected, you feel it’s safe to indulge in what in other contexts would be obviously risky behavior. Think of it like an unseen hole in a condom.

This has always been the most dangerous aspect of the UK government’s insistence that its technical capability notices remain secret. Whoever alerted the Washington Post to the notice Apple received a month ago commanding it to weaken its Advanced Data Protection performed an important public service. Now, Carly Page reports at TechCrunch, based on a blog posting by security expert Alec Muffett, that the UK government is recognizing that principle by quietly removing from its web pages advice, directed at people whose communications are at high risk, such as barristers and other legal professionals, to use that same encryption. Apple has since withdrawn ADP in the UK.

More important long-term, at the Financial Times, Tim Bradshaw and Lucy Fisher report that Apple has appealed the government’s order to the Investigatory Powers Tribunal. This will be, as the FT notes, the first time the government’s powers under the Investigatory Powers Act (2016) to compel the weakening of security features are tested in court. A ruling that the order was unlawful could be an important milestone in the seemingly interminable fight over encryption.

***

I’ve long had the habit of doing minor corrections on Wikipedia – fixing typos, improving syntax – as I find them in the ordinary course of research. But recently I have had occasion to create a couple of new pages, with the gratefully-received assistance of a highly experienced Wikipedian. At one time, I’m sure this was a matter of typing a little text, garlanding it with a few bits of code, and garnishing it with the odd reference, but standards have been rising all along, and now if you want your newly-created page to stay up it needs a cited reference for every statement of fact, at a minimum of one per sentence. My modest pages had 10 to 20 references, some servicing multiple items. Embedding the page in the encyclopedia matters, too, so you need to link mentions of the subject on other pages to the new one. Even then, some review editor may come along and delete the page if they think the subject is not notable enough or violates someone’s copyright. You can appeal, of course…and fix whatever they’ve said the problem is.

It should be easier!

All of this detailed work is done by volunteers, who discuss the decisions they make in full view on the talk page associated with every content page. Studying the more detailed talk pages is a great way to understand how the encyclopedia, and knowledge in general, is curated.

Granted, Wikipedia is not perfect. Its policy on primary sources can be frustrating, and errors in cited secondary sources can be difficult to correct. The culture can be hostile if you misstep. Its coverage is uneven. But, as Margaret Talbot reports at the New Yorker and Amy Bruckman writes in her 2022 book, Should You Believe Wikipedia?, all those issues are fully documented.

Early on, Wikipedia was often the butt of complaints from people angry that this free encyclopedia made by *amateurs* threatened the sustainability of Encyclopaedia Britannica (which has survived though much changed). Today, it’s under attack by Elon Musk and the Heritage Foundation, as Lila Shroff writes at The Atlantic. The biggest danger isn’t to Wikipedia’s funding; there’s no offer anyone can make that would lead to a sale. The bigger vulnerability is the safety of individual editors. Scold they may, but as a collective they do important work to ensure that facts continue to matter.

***

Firefox users are manifesting more and more unhappiness about the direction Mozilla is taking with Firefox. The open source browser’s historic importance is outsized compared to its worldwide market share, which as of February 2025 is 2.63%, according to Statcounter. A long tail of other browsers are based on it, such as LibreWolf, Waterfox, and the privacy-protecting Tor.

The latest complaint, as Liam Proven and Thomas Claburn write at The Register, is that Mozilla has removed its commitment not to sell user data from Firefox’s terms and conditions and privacy policy. Mozilla responded that the company doesn’t sell user data “in the way that most people think about ‘selling data'” but needed to change the language because of jurisdictional variations in what the word “sell” means. Still, the promise is gone.

This follows Mozilla’s September 2024 decision, reported by Richard Speed at The Register, to turn on by default a “privacy-preserving feature” to track users, which led the NGO noyb to file a complaint with the Austrian data protection authority. And a month ago, Mark Hachman reported at PC World that Mozilla is building access to third-party generative AI chatbots into Firefox, and there are reports that it’s adding “AI-powered tab grouping”.

All of these are basically unwelcome, and of all organizations Mozilla should have been able to foresee that. Go away, AI.

***

Molly White is expertly covering the Trump administration’s proposed “US Crypto Reserve”. It remains only to add Rachel Maddow, who compared it to having a strategic reserve of Beanie Babies.

Illustrations: Beanie baby pelican.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Cognitive dissonance

The annual State of the Net, in Washington, DC, always attracts politically diverse viewpoints. This year was especially divided.

Three elements stood out: the divergence between the only remaining member of the Privacy and Civil Liberties Oversight Board (PCLOB) and a recently-fired colleague; a contentious panel on content moderation; and the “yay, American innovation!” approach to regulation.

As noted previously, on January 29 the days-old Trump administration fired PCLOB members Travis LeBlanc, Ed Felten, and chair Sharon Bradford Franklin; the remaining seat was already empty.

Not to worry, remaining member Beth Williams said. “We are open for business. Our work conducting important independent oversight of the intelligence community has not ended just because we’re currently sub-quorum.” Flying solo, she can greenlight publication, direct work, and review new procedures and policies; she can’t start new projects. A review is ongoing of the EU-US Privacy Framework under Executive Order 14086 (2022). Williams seemed more interested in restricting government censorship and abuse of financial data in the name of combating domestic terrorism.

Soon afterwards, LeBlanc, whose firing has him considering “legal options”, told Brian Fung that the outcome of next year’s reauthorization of Section 702, which covers foreign surveillance programs, keeps him awake at night. Earlier, Williams noted that she and Richard E. DeZinno, who left in 2023, wrote a “minority report” recommending “major” structural change within the FBI to prevent weaponization of S702.

LeBlanc is also concerned that agencies at the border are coordinating with the FBI to surveil US persons as well as migrants. More broadly, he said, gutting the PCLOB costs it independence, expertise, trustworthiness, and credibility and limits public options for redress. He thinks the EU-US data privacy framework could indeed be at risk.

A friend called the panel on content moderation “surreal” in its divisions. Yael Eisenstat and Joel Thayer tried valiantly to disentangle questions of accountability and transparency from free speech. To little avail: Jacob Mchangama and Ari Cohn kept tangling them back up again.

This largely reflects Congressional debates. As in the UK, there is bipartisan concern about child safety – see also the proposed Kids Online Safety Act – but Republicans also separately push hard on “free speech”, claiming that conservative voices are being disproportionately silenced. Meanwhile, organizations that study online speech patterns and could perhaps establish whether that’s true are being attacked and silenced.

Eisenstat tried to draw boundaries between speech and companies’ actions. She can still find on Facebook the same Telegram ads containing illegal child sexual abuse material that she found when Telegram CEO Pavel Durov was arrested. Despite violating the terms and conditions, they bring Meta profits. “How is that a free speech debate as opposed to a company responsibility debate?”

Thayer seconded her: “What speech interests do these companies have other than to collect data and keep you on their platforms?”

By contrast, Mchangama complained that overblocking – that is, restricting legal speech – is seen across EU countries. “The better solution is to empower users.” Cohn also disliked the UK and European push to hold platforms responsible for fulfilling their own terms and conditions. “When you get to whether platforms are living up to their content moderation standards, that puts the government and courts in the position of having to second-guess platforms’ editorial decisions.”

But Cohn was talking legal content; Eisenstat was talking illegal activity: “We’re talking about distribution mechanisms.” In the end, she said, “We are a democracy, and part of that is having the right to understand how companies affect our health and lives.” Instead, these debates persist because we lack factual knowledge of what goes on inside. If we can’t figure out accountability for these platforms, “This will be the only industry above the law while becoming the richest companies in the world.”

Twenty-five years after data protection became a fundamental right in Europe, the DC crowd still seem to see it as a regulation in search of a deal. Representative Kat Cammack (R-FL), who described herself as the “designated IT person” on the energy and commerce committee, was particularly excited that policy surrounding emerging technologies could be industry-driven, because “Congress is *old*!” and DC is designed to move slowly. “There will always be concerns about data and privacy, but we can navigate that. We can’t deter innovation and expect to flourish.”

Others also expressed enthusiasm for “the great opportunities in front of our country”, and compared the EU’s Digital Markets Act to a toll plaza congesting I-95. Samir Jain, on the AI governance panel, suggested the EU may be “reconsidering its approach”. US senator Marsha Blackburn (R-TN) highlighted China’s threat to US cybersecurity without noting the US’s own goal, CALEA.

On that same AI panel, Olivia Zhu, the Assistant Director for AI Policy for the White House Office of Science and Technology Policy, seemed more realistic: “Companies operate globally, and have to do so under the EU AI Act. The reality is they are racing to comply with [it]. Disengaging from that risks a cacophony of regulations worldwide.”

Shortly before, Johnny Ryan, a Senior Fellow at the Irish Council for Civil Liberties, posted: “EU Commission has dumped the AI Liability Directive. Presumably for ‘innovation’. But China, which has the toughest AI law in the world, is out innovating everyone.”

Illustrations: Kat Cammack (R-FL) at State of the Net 2025.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Isolate

Yesterday, the Global Encryption Coalition published a joint letter calling on the UK to rescind its demand that Apple undermine (“backdoor”) the end-to-end encryption on its services. The Internet Society is taking signatures until February 20.

The background: on February 7, Joseph Menn reported at the Washington Post (followed by Dominic Preston at The Verge) that in January the office of the Home Secretary sent Apple a technical capability notice under the Investigatory Powers Act (2016) ordering it to provide access to content that anyone anywhere in the world has uploaded to iCloud and encrypted with Apple’s Advanced Data Protection.

Technical capability notices are supposed to be secret. It’s a criminal offense to reveal that you’ve been sent one. Apple can’t even tell users that their data may be compromised. (This kind of thing is why people publish warrant canaries.) Menn notes that even if Apple withdraws ADP in the UK, British authorities will still demand access to encrypted data everywhere *else*. So it appears that if the Home Office doesn’t back down and Apple is unwilling to cripple its encryption, the company will either have to withdraw ADP across the world or exit the UK market entirely. At his Odds and Ends of History blog, James O’Malley calls the UK’s demand stupid, counter-productive, and unworkable. At TechRadar, Chiara Castro asks who’s next, and quotes Big Brother Watch director Silkie Carlo: “unprecedented for a government in any democracy”.

When the UK first began demanding extraterritorial jurisdiction for its interception rules, most people wondered how the country thought it would be able to impose it. That was 11 years ago; it was one of the new powers codified in the Data Retention and Investigatory Powers Act (2014) and kept in its replacement, the Investigatory Powers Act, in 2016.

Governments haven’t changed – they’ve been trying to undermine strong encryption in the hands of the masses since 1991, when Phil Zimmermann launched PGP – but the technology has, as Graham Smith recounted at Ars Technica in 2017. Smartphones are everywhere. People store their whole lives on them, and giant technology companies encrypt both the device itself and the cloud backups. Government demands have changed to reflect that, from focusing on the individual with key escrow and key lengths to focusing on the technology provider with client-side scanning, encrypted messaging (see also the EU), and now cloud storage.

At one time, a government could install a secret wiretap by making a deal with a legacy telco. The Internet’s proliferation of communications providers changed that for a while. During the resulting panic the US passed the Communications Assistance for Law Enforcement Act (1994), which requires Internet service providers and telecommunications companies to install wiretap-ready equipment – originally for telephone calls, later broadband and VOIP traffic as well.

This is where the UK government’s refusal to learn from others’ mistakes is staggering. Just four months ago, the US discovered Salt Typhoon, a giant Chinese hack into its core telecommunications networks that was specifically facilitated by…by…CALEA. To repeat: there is no such thing as a magic hole that only “good guys” can use. If you undermine everyone’s privacy and security to facilitate law enforcement, you will get an insecure world where everyone is vulnerable. The hack has led US authorities to promote encrypted messaging.

Joseph Cox’s recent book, Dark Wire, touches on this. It’s a worked example of what law enforcement internationally can do if given open access to all the messages criminals send across a network when they think they are operating in complete safety. Yes, the results were impressive: hundreds of arrests, dozens of tons of drugs seized, masses of firearms impounded. But, Cox writes, all that success was merely a rounding error in the global drug trade. Universal loss of privacy and security versus a rounding error: it’s the definition of “disproportionate”.

It remains to be seen what Apple decides to do and whether we can trust what the company tells us. At his blog, Alec Muffett is collecting ongoing coverage of events. The Future of Privacy Forum celebrated Safer Internet Day, February 11, with an infographic showing how encryption protects children and teens.

But set aside for a moment all the usual arguments about encryption, which really haven’t changed in over 30 years because mathematical reality hasn’t.

In the wider context, Britain risks making itself a technological backwater. First, there’s the backdoored encryption demand, which threatens every encrypted service. Second, there’s the impact of the onrushing Online Safety Act, which comes into force in March. Ofcom, the regulator charged with enforcing it, is issuing thousands of pages of guidance that make it plain that only large platforms will have the resources to comply. Small sites, whether businesses, volunteer-run Fediverse instances, blogs, established communities, or web boards, will struggle even if Ofcom starts to do a better job of helping them understand their legal obligations. Many will likely either shut down or exit the UK, leaving the British Internet poorer and more isolated as a result. Ofcom seems to see this as success.

It’s not hard to predict the outcome if these laws converge in the worst possible timeline: a second Brexit, this one online.

Illustrations: T-shirt (gift from Jen Persson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Blown

“This is a public place. Everyone has the right to be left in peace,” Jane (Vanessa Redgrave) tells Thomas (David Hemmings), whom she’s just spotted photographing her with her lover in the 1966 film Blow-Up, by Michelangelo Antonioni. The movie, set in London, proceeds as a mystery in which Thomas’s only tangible evidence is a grainy, blown-up shot of a blob that may be a murdered body.

Today, Thomas would probably be wielding a latest-model smartphone instead of a single lens reflex film camera. He would not bother to hide behind a tree. And Jane would probably never notice, much less challenge Thomas to explain his clearly-not-illegal, though creepy, behavior. Phones and cameras are everywhere. If you want to meet a lover and be sure no one’s photographing you, you don’t go to a public park, even one as empty as the film finds Maryon Park. Today’s 20-somethings grew up with that reality, and learned early to agree some gatherings are no-photography zones.

Even in the 1960s individuals had cameras, but taking high-quality images at a distance was the province of a small minority of experts; Antonioni’s photographer was a professional with his own darkroom and enlarging equipment. The first CCTV cameras went up in the 1960s; their proliferation became a public policy issue in the 1980s, and was propagandized as “for your safety” without much thought in the post-9/11 2000s. In the late 2010s, CCTV surveillance became democratized: my neighbor’s Ring camera means no one can leave an anonymous gift on their doorstep – or (without my consent) mine.

I suspect one reason we became largely complacent about ubiquitous cameras is that the images mostly remained unidentifiable, or at least unidentified. Facial recognition – especially the live variant police seem to feel they have the right to set up at will – is changing all that. Which all leads to this week, when Joseph Cox at 404 Media reports ($) (and Ars Technica summarizes) that two Harvard students have mashed up a pair of unremarkable $300 Meta Ray-Bans with the reverse image search service Pimeyes and a large language model to produce I-XRAY, an app that identifies in near-real time most of the people they pass on the street, including their name, home address, and phone number.

The students – AnhPhu Nguyen and Caine Ardayfio – are smart enough to realize the implications, imagining for Cox the scenario of a random male spotting a young woman and following her home. This news is breaking the same week that the San Francisco Standard and others are reporting that two men in San Francisco stood in front of a driverless Waymo taxi to block it from proceeding while demanding that the female passenger inside give them her phone number (we used to give such males the local phone number for time and temperature).

Nguyen and Ardayfio aren’t releasing the code they’ve written, but what two people can do, others with fewer ethics can recreate independently, as 30 years of Black Hat and Def Con have proved. This is a new level of democratized surveillance. Today, giant databases like Clearview AI are largely only accessible to governments and law enforcement. But the data in them has been scraped from the web, like LLMs’ training data, and merged with commercial sources.

This latest prospective threat to privacy has been created by the marriage of three technologies that were developed separately by different actors without regard to one another and, more important, without imagining how one might magnify the privacy risks of the others. A connected car with cameras could also run I-XRAY.

The San Francisco story is a good argument against allowing cars on the roads without steering wheels, pedals, and other controls or *something* to allow a passenger to take charge to protect their own safety. In Manhattan cars waiting at certain traffic lights often used to be approached by people who would wash the windshield and demand payment. Experienced drivers knew to hang back at red lights so they could roll forward past the oncoming would-be washer. How would you do this in a driverless car with no controls?

We’ve long known that people will prank autonomous cars. Coverage focused on the safety of the *cars* and the people and vehicles surrounding them, not the passengers. Calling a remote technical support line for help is never going to get a good enough response.

What ties these two cases together – besides (potentially) providing new ways to harass women – is the collision between new technologies and human nature. Plus, the merger of three decades’ worth of piled-up data and software that can make things happen in the physical world.

Arguably, we should have seen this coming, but the manufacturers of new technology have never been good at predicting what weird things their users will find to do with it. This mattered less when the worst outcome was using spreadsheet software to write letters. Today, that sort of imaginative failure is happening at scale in software that controls physical objects and penetrates the physical world. The risks are vastly greater and far more unsettling. It’s not that we can’t see the forest for the trees; it’s that we can’t see the potential for trees to aggregate into a forest.

Illustrations: Jane (Vanessa Redgrave) and her lover, being photographed by Thomas (David Hemmings) in Michelangelo Antonioni’s 1966 film, Blow-Up.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

This perfect day

To anyone remembering the excitement over DNA testing just a few years ago, this week’s news about 23andMe comes as a surprise. At CNN, Allison Morrow reports that all seven board members have resigned to protest CEO Anne Wojcicki’s plan to take the company private by buying up all the shares she doesn’t already own at 40 cents each (closing price yesterday was $0.3301). The board wanted her to find a buyer offering a better price.

In January, Rolfe Winkler reported at the Wall Street Journal ($) that 23andMe is likely to run out of cash by next year. Its market cap has dropped from $6 billion to under $200 million. He and Morrow catalogue the company’s problems: it’s never made a profit nor had a sustainable business model.

The reasons are fairly simple: few repeat customers. With DNA testing, as Winkler writes, “Customers only need to take the test once, and few test-takers get life-altering health results.” 23andMe’s mooted revolution in health care instead was a fad. Now, the company is pivoting to sell subscriptions to weight loss drugs.

This strikes me as an extraordinarily dangerous moment: the struggling company’s sole unique asset is a pile of more than 10 million DNA samples whose owners have agreed they can be used for research. Many were alarmed when, in December 2023, hackers broke into 1.7 million accounts and gained access to 6.9 million customer profiles. The company said the hacked data did not include DNA records but did include family trees and other links.

We don’t think of 23andMe as a social network. But the same affordances that enabled Cambridge Analytica to leverage a relatively small number of user profiles to create a mass of data derived from a much larger number of their Friends worked on 23andMe. Given the way genetics works, this risk should have been obvious.

In 2004, the year of Facebook’s birth, the Australian privacy campaigner Roger Clarke warned in Very Black “Little Black Books” that social networks had no business model other than to abuse their users’ data. 23andMe’s terms and conditions promise to protect user privacy. But in a sale what happens to the data?

The same might be asked about the data that would accrue from Oracle cofounder Larry Ellison‘s surveillance-embracing proposals this week. Inevitably, commentators invoked George Orwell’s 1984. At Business Insider, Kenneth Niemeyer was first to report: “[Ellison] said AI will usher in a new era of surveillance that he gleefully said will ensure ‘citizens will be on their best behavior.'”

The all-AI-surveillance all-the-time idea could only be embraced “gleefully” by someone who doesn’t believe it will affect him.

Niemeyer:

“Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

“We’re going to have supervision,” Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”

Ellison is twenty-six years behind science fiction author David Brin, who proposed radical transparency in his 1998 non-fiction outing, The Transparent Society. But Brin saw reciprocity as an essential feature, believing it would protect privacy by making surveillance visible. Ellison is claiming that *inscrutable* surveillance will guarantee good behavior.

At 404 Media, Jason Koebler debunks Ellison point by point. Research and other evidence show that securing schools is unlikely to make them safer; body cameras don’t appear to improve police behavior; and all the technologies Ellison talks about have problems with accuracy and false positives. Indeed, the mayor of Chicago wants to end the city’s contract with ShotSpotter (now SoundThinking), saying it’s expensive and doesn’t cut crime; some research says it slows police 911 response. Worth noting Simon Spichak at Brain Facts, who finds that with AI tools humans make worse decisions. So…not a good idea for police.

More disturbing is Koebler’s main point: most of the technology Ellison calls “future” is already here and failing to lower crime rates or solve its causes – while being very expensive. Ellison is already out of date.

The book Ellison’s fantasy evokes for me is the less-known This Perfect Day, by Ira Levin, written in 1970. The novel’s world is run by a massive computer (“Unicomp”) that decides all aspects of individuals’ lives: their job, spouse, how many children they can have. Enforcing all this are human counselors and permanent electronic bracelets individuals touch to ubiquitous scanners for permission.

Homogeneity rules: everyone is mixed race, there are only four boys’ and girls’ names, they eat “totalcakes”, drink cokes, wear identical clothing. For the rest, regularly administered drugs keep everyone healthy and docile. “Fight” is an abominable curse word. The controlled world over which Unicomp presides is therefore almost entirely benign: there is no war, crime, and little disease. It rains only at night.

Naturally, the novel’s hero rebels, joins a group of outcasts (“the Incurables”), and finds his way to the secret underground luxury bunker where a few “Programmers” help Unicomp’s inventor, Wei Li Chun, run the world to his specification. So to me, Ellison’s plan is all about installing himself as world ruler. Which, I mean, who could object except other billionaires?

Illustrations: The CCTV camera on George Orwell’s Portobello Road house.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Boxed up

If the actions of the owners of streaming services are creating the perfect conditions for the return of piracy, it’s equally true that the adtech industry’s decisions continue to encourage installing ad blockers as a matter of self-defense. This is overall a bad thing, since most of us can’t afford to pay for everything we want to read online.

This week, Google abruptly aborted a change it’s been working on for four years: it will abandon its plan to replace third-party cookies with new technology it called Privacy Sandbox. From the sounds of it, Google will continue working on the Sandbox, but will retain third-party cookies alongside it. The privacy consequences of this are…muddy.

To recap: there are two kinds of cookies, which are small files websites place on your computer, distinguished by their source and use. Sites use first-party cookies to give their pages the equivalent of memory. They’re how the site remembers which items you’ve put in your cart, or that you’ve logged in to your account. These are the “essential cookies” that some consent banners mention, and without them you couldn’t use the web interactively.

Third-party cookies are trackers. Once a company deposits one of these things on your computer, it can use it to follow along as you browse the web, collecting data about you and your habits the whole time. To capture the ickiness of this, Demos researcher Carl Miller has suggested renaming them slime trails. Third-party cookies are why the same ads seem to follow you around the web. They are also why people in the UK and Europe see so many cookie consent banners: the EU’s General Data Protection Regulation requires all websites to obtain informed consent before dropping them on our machines. Ad blockers help here. They won’t stop you from seeing the banners, but they can save you the time you’d have to spend adjusting settings on the many sites that make it hard to say no.

The big technology companies are well aware that people hate both ads and being tracked in order to serve ads. In 2020, Apple announced that its Safari web browser would block third-party cookies by default, continuing work it started in 2017. This was one of several privacy-protecting moves the company made; in 2021, it began requiring iPhone apps to offer users the opportunity to opt out of tracking for advertising purposes at installation. In 2022, Meta estimated Apple’s move would cost it $10 billion that year.

If the cookie seemed doomed at that point, it seemed even more so when Google announced it was working on new technology that would do away with third-party cookies in its dominant Chrome browser. Like Apple, however, Google proposed to give users greater control only over the privacy invasions of third parties without in any way disturbing Google’s own ability to track users. Privacy advocates quickly recognized this.

At Ars Technica, Ron Amadeo describes the Sandbox’s inner workings. Briefly, it derives a list of advertising topics from the websites users visit, and shares those with web pages when they ask. This is what you turn on when you say yes to Chrome’s “ad privacy feature”. Back when it was announced, EFF’s Bennett Cyphers was deeply unimpressed: instead of new tracking versus old tracking, he asked, why can’t we have *no* tracking? Just a few days ago, EFF followed up with the news that its Privacy Badger browser add-on now opts users out of the Privacy Sandbox (EFF has also published manual instructions).

Google intended to make this shift in stages, beginning the process of turning off third-party cookies in January 2024 and finishing the job in the second half of 2024. Now, when the day of completion should be rapidly approaching, the company has said it’s over – that is, it no longer plans to turn off third-party cookies. As Thomas Claburn writes at The Register, implementing the new technology still requires a lot of work from a lot of companies besides Google. The technology will remain in the browser – and users will “get” to choose which kind of tracking they prefer; Kevin Purdy reports at Ars Technica that the company is calling this a “new experience”.

At The Drum, Kendra Barnett reports that the UK’s Information Commissioner’s Office is unhappy about Google’s decision. Even though it had also identified possible vulnerabilities in the Sandbox’s design, the ICO had welcomed the plan to block third-party cookies.

I’d love to believe that Google’s announcement might have been helped by the fact that Sandbox is already the subject of legal action. Last month the privacy-protecting NGO noyb complained to the Austrian data protection authority, arguing that Sandbox tracking still requires user consent. Real consent, not obfuscated “ad privacy feature” stuff, as Richard Speed explains at The Register. But far more likely it’s money. At the Press Gazette, Jim Edwards reports that Sandbox could cost publishers 60% of their revenue “from programmatically sold ads”. Note, however, that the figure is courtesy of adtech company Criteo, likely a loser under Sandbox.

The question is what comes next. As Cyphers said, we deserve real choices: *whether* we are tracked, not just who gets to do it. Our lives should not be the leverage big technology companies use to enhance their already dominant position.

Illustrations: A sandbox (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.