Lawfaring

Fining companies who have spare billions down the backs of their couches is pointless, but what about threatening their executives with prosecution? In a scathing ruling (PDF), US District Judge Yvonne Gonzales Rogers finds that Apple’s vice-president of finance, Alex Roman, “lied outright under oath” and that CEO Tim Cook “chose poorly” in failing to follow her injunction in Epic Games v. Apple. She asks the US Attorney for the Northern District of California to investigate whether criminal contempt proceedings are appropriate. “This is an injunction, not a negotiation.”

As noted here last week, Google lost the similar Epic Games v. Google last year. In both cases, Epic Games complained that the punishing commissions both companies require of the makers of apps downloaded from their app stores were anti-competitive. This is the same issue that last week led the European Commission to announce fines and restrictions against Apple under the Digital Markets Act. These rulings could, as Matt Stoller suggests, change the entire app economy.

Apple has said it strongly disagrees with the decision and will appeal – but it is complying.

At TechRadar, Lance Ulanoff sounds concerned about the impact on privacy and security as Apple is forced to open up its app store. This argument reminds me of a Bell Telephone engineer who confiscated a 30-foot cord from Woolworth’s that I’d plugged in, saying it endangered the telephone network. Apple certainly has the right to market its app store with promises of better service. But it doesn’t have the right to defy the court to extend its monopoly, as Mike Masnick spells out at Techdirt.

Masnick notes the absurdity of the whole thing. Apple had mostly won the case, and could have made the few small changes the ruling ordered and gone about its business. Instead, its executives lied and obfuscated for a few years of profits, and here we are. Although: Apple would still have lost in Europe.

A Perplexity search for the last S&P 500 CEO to be jailed for criminal contempt finds Kevin Trudeau. Trudeau used late-night infomercials and books to sell what Wikipedia calls “unsubstantiated health, diet, and financial advice”. He was convicted of criminal contempt in 2013, sentenced to ten years in prison, and served eight. Trudeau and the Federal Trade Commission formally settled the fines and remaining restrictions in 2024.

The last time the CEO of a major US company was sent to prison for criminal contempt? It appears, never. On the rare occasions when CEOs have gone to prison at all, it’s typically been for financial fraud or insider trading. Think WorldCom’s Bernie Ebbers. Not sure this is the kind of innovation Apple wants to be known for.

***

Reuters reports that 23andMe has, after pressure from many US states, agreed to allow a court-appointed consumer protection ombudsman to ensure that customers’ genetic data is protected. In March, it filed for bankruptcy protection, fulfilling last September’s predictions that it would soon run out of money.

The issue is that the DNA 23andMe has collected from its 15 million customers is its only real asset. Also relevant: the October 2023 cyberattack, which, Cambridge Analytica-like, leveraged hacking into 14,000 accounts to access ancestry data relating to approximately 6.9 million customers. The breach sparked a class action suit accusing the company of inadequate security under the Health Insurance Portability and Accountability Act (1996). It was settled last year for $30 million – a settlement whose value is now uncertain.

Case after case has shown us that no matter what promises buyers and sellers make at the time of a sale, those promises generally don’t stick afterwards. In this case, every user’s account of necessity exposes information about all their relatives. And who knows where it will end up and for how long the new owner can be blocked from exploiting it?

***

There’s no particular relationship between the 23andMe bankruptcy and the US government. But they make each other scarier: at 404 Media, Joseph Cox reported two weeks ago that Palantir is merging data from a wide variety of US departments and agencies to create a “master database” to help US Immigration and Customs Enforcement target and locate prospective deportees. The sources include the Internal Revenue Service, Health and Human Services, the Department of Labor, and Housing and Urban Development; the “ATrac” tool being built already has data from the Social Security Administration and US Citizenship and Immigration Services, as well as law enforcement agencies such as the FBI, the Bureau of Alcohol, Tobacco, Firearms and Explosives, and the U.S. Marshals Service.

As the software engineer and essayist Ellen Ullman wrote in 1996 in her book Close to the Machine, databases “infect” their owners with the desire to link them together and find out things they never previously felt they needed to know. The information in these government databases was largely given out of necessity to obtain services we all pay for. In countries with data protection laws, the change of use Cox outlines would require new consent. The US has no such privacy laws, and even if it did it’s not clear this government would care.

“Never volunteer information” used to be a commonly heard mantra, typically in relation to law enforcement and immigration authorities. No one lives that way now.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Three times a monopolist

It’s multiply official: Google is a monopoly.

The latest such ruling is a decision handed down on April 17 by Judge Leonie Brinkema in United States of America v. Google LLC, a 2023 case that focuses on Google’s control over both the software publishers use to manage online ads and the exchanges where those same ads are bought and sold. In August 2024, Judge Amit P. Mehta also ruled Google was a monopoly; that United States of America v. Google LLC, filed in 2020, focused on Google’s payments to mobile phone companies, wireless carriers, and browser companies to promote its search engine. Before *that*, in 2023 a jury found in Epic Games v. Google that Google violated antitrust laws with respect to the Play Store and Judge James Donato ordered it to allow alternative app stores on Android devices by November 1, 2024. Appeals are proceeding.

Google has more trouble to look forward to. At The Overspill, veteran journalist Charles Arthur is a member of the class representative bringing a UK case against Google. The AdTechClaim case seeks £13.6 billion in damages, claiming that Google’s adtech system has diverted revenues that otherwise would have accrued to UK-based website and app publishers. Reuters reported last week on the filing of a second UK challenge, a £5 billion suit representing thousands of businesses that claim Google manipulated the search ecosystem to block out rivals and force advertisers to rely on its platform. Finally, the Competition and Markets Authority is conducting its own investigation into the company’s search and advertising practices.

It is hard to believe that all of this will go away leaving Google intact, despite the company’s resistance to each one. We know from past experience that fines change nothing; only structural remedies will.

The US findings against Google seem to have taken some commentators by surprise, perhaps because they assumed the Trump administration would have a dampening effect. Trump, however, seems more exercised about the EU’s and UK’s mounting regulatory actions. Just this week the European Commission fined Apple €500 million and Meta €200 million, the first fines under the Digital Markets Act, and ordered them to open up user choice within 60 days. The White House has called some of these recent fines a new form of economic blackmail.

I’ve observed before that antitrust cases are often well behind the times, partly because these cases take so long to litigate. It wasn’t until 2024 that Google lost its 2017 appeal to the European Court of Justice in the Foundem search case and was ordered to pay a €2.4 billion fine. That case was first brought in 2009.

In 2014, I imagined that Google’s recently-concluded purchase of Nest smart thermostats might form the basis of an antitrust suit in 2024. Obviously, that didn’t happen; I wish instead the UK government had blocked Google’s acquisition of DeepMind. Partly because perhaps the pre-monopolization of AI could have been avoided. And partly because I’ve been reading Angus Hanton’s recent book, Vassal State, and keeping DeepMind would have hugely benefited Britain.

Unfortunately, forcing Google to divest DeepMind is not on anyone’s post-trial list of possible remedies. In October, the Department of Justice filed papers listing a series of possibilities for the search engine case. The most-discussed of these was ordering Google to divest Chrome. In a sensible world, however, one must hope remedies will be found that fit the differing problems these cases were brought to address.

At Big, Matt Stoller suggests that the latest judgment increases the likelihood that Google will be broken up, the first such order since AT&T in 1984. The DoJ, now under Trump’s control, could withdraw, but, Stoller points out, the list of plaintiffs includes several state attorneys general, and the DoJ can’t dictate what they do.

Trying to figure out what remedies would make real change is a difficult game, as the folks at the April 20 This Week In Tech podcast say. This is unlike the issue around Google’s and Apple’s app stores covered by the European Commission’s fines, where it’s comparatively straightforward to link opening up their systems to alternatives, and changing their revenue structure, to ensuring that app makers and publishers get a fairer percentage.

Breaking up the company to separate Chrome, search, adtech, and Android would disable the company’s ability to use those segments as levers. In such a situation Google and/or its parent, Alphabet, could not, as now, use them in combination to maintain its ongoing data collection and build a durable advantage in training sophisticated models to underpin automated services. But would forcing the company to divest those segments create competition in any of them? Each would likely remain dominant in its field.

Yet something must be done. Even though Microsoft was not in the end broken up in 2001 when the incoming Bush administration settled the case, the experience of being investigated and found guilty of monopolistic behavior changed the company. None of today’s technology companies are likely to follow suit unless they’re forced; these companies are too big, too powerful, too rich, and too arrogant. If Google is not forced to change its structure or its business model, all of them will be emboldened to behave in even worse ways. As unimaginable as that seems.

Illustrations: “The kind of anti-trust legislation we need”, by J. S. Pughe (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Predatory inclusion

The recent past is a foreign country; they view the world differently there.

At last week’s We Robot conference on technology, policy, and law, the indefatigably detail-oriented Sue Glueck was the first to call out a reference to the propagation of transparency and accountability by the “US and its allies” as newly out of date. From where we were sitting in Windsor, Ontario, its conjoined fraternal twin, Detroit, Michigan, was clearly visible just across the river. But: recent events.

As Ottawa law professor Teresa Scassa put it, “Before our very ugly breakup with the United States…” Citing Anu Bradford, she went on, “Canada was trying to straddle these two [US and EU] digital empires.” Canada’s human rights and privacy traditions seem closer to those of the EU, even though shared geography means the US and Canada are superficially more similar.

We’ve all long accepted that the “technology is neutral” claim of the 1990s is nonsense – see, just this week, Luke O’Brien’s study at Mother Jones of the far-right origins of the web-scraping facial recognition company Clearview AI. The paper Glueck called out, co-authored in 2024 by Woody Hartzog, wants US lawmakers to take a tougher approach to regulating AI and ban entirely some systems that are fundamentally unfair. Facial recognition, for example, is known to be inaccurate and biased, but improving its accuracy raises new dangers of targeting and weaponization, a reality Cynthia Khoo called “predatory inclusion”. If he were writing this paper now, Hartzog said, he would acknowledge that it’s become clear that some governments, not just Silicon Valley, see AI as a tool to destroy institutions. I don’t *think* he was looking at the American flags across the water.

Later, Khoo pointed to her paper on current negotiations between the US and Canada to develop a bilateral law enforcement data-sharing agreement under the US CLOUD Act. The result could allow US police to surveil Canadians at home, undermining the country’s constitutional human rights and privacy laws.

In her paper, Clare Huntington proposed deriving approaches to human relationships with robots from family law. It can, she argued, provide analogies to harms such as emotional abuse, isolation, addiction, invasion of privacy, and algorithmic discrimination. In response, Kate Darling, who has long studied human responses to robots, raised an additional factor exacerbating the power imbalance in such cases: companies, “because people think they’re talking to a chatbot when they’re really talking to a company.” That extreme power imbalance is what matters when trying to mitigate risk (see also Sarah Wynn-Williams’ recent book and Congressional testimony on Facebook’s use of data to target vulnerable teens).

In many cases, however, we are not agents deciding to have relationships with robots but what AJung Moon called “incops”, or “incidentally co-present”. In the case of the Estonian Starship delivery robots you can find in cities from San Francisco to Milton Keynes, that broad category includes human drivers, pedestrians, and cyclists who share their spaces. In a study, Adeline Schneider found that white men tended to be more concerned about damage to the robot, whereas others worried more about communication, the data they captured, safety, and security. Delivery robots are, however, typically designed with only direct users in mind, not others who may have to interact with them.

These are all social problems, not technological ones, as conference chair Kristen Thomasen observed. Carys Craig later modified that observation: technology “has compounded the problems”.

This is the perennial We Robot question: what makes robots special? What qualities require new laws? Just as we asked about the Internet in 1995, when are robots just new tools for old jobs, and when do they bring entirely new problems? In addition, who is responsible in such cases? This was asked in a discussion of Beatrice Panattoni’s paper on Italian proposals to impose harsher penalties for crime committed with AI or facilitated by robots. The pre-conference workshop raised the same question. We already know the answer: everyone will try to blame someone or everyone else. But in formulating a legal response, will we tinker around the edges or fundamentally question the criminal justice system? Andrew Selbst helpfully summed up: “A law focusing on specific harms impedes a structural view.”

At We Robot 2012, it was novel to push lawyers and engineers to think jointly about policy and robots. Now, as more disciplines join the conversation, familiar Internet problems surface in new forms. Human-robot interaction is a four-dimensional version of human-computer interaction; I got flashbacks to old hacking debates when Elizabeth Joh wondered in response to Panattoni’s paper if transforming a robot into a criminal should be punished; and a discussion of the use of images of medicalized children for decades in fundraising invoked publicity rights and tricky issues of consent.

Also consent-related: lawyers are starting to use generative AI to draft contracts, a step that Katie Szilagyi and Marina Pavlović suggested further diminishes the bargaining power already lost to “clickwrap”. Automation may remove what remains of our ability to object, extending beyond the terms and conditions imposed on us by sites and services into more specialized agreements. Consent traditionally depends on a now-absent “meeting of minds”.

The arc of We Robot began with enthusiasm for robots, which waned as big data and generative AI became players. Now, robots/AI are appearing as something being done to us.

Illustrations: Detroit, seen across the river from Windsor, Ontario with a Canadian Coast Guard boat in the foreground.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Catoptromancy

It’s a commonly held belief that technology moves fast, and law slowly. This week’s We Robot workshop day gave the opposite impression: these lawyers are moving ahead, while the technology underlying robots is moving slower than we think.

A mainstay of this conference over the years has been Bill Smart‘s and Cindy Grimm‘s demonstrations of the limitations of the technologies that make up robots. This year, that gambit was taken up by Jason Millar and AJung Moon. Their demonstration “robot” comprised six people – one brain, four sensors, and one color sensor. Ordering it to find the purple shirt quickly showed that robot programming isn’t getting any easier. The human “sensors” can receive useful information only as far as their outstretched fingertips, and even then the signal they receive is minimal.

“Many of my students program their robots into a ditch and can’t understand why,” Moon said. It’s the required specificity. For one thing, a color sensor doesn’t see color; it sends a stream of numeric values. It’s all 1s and 0s and tiny engineering decisions whose existence is never registered at the policy level but make all the difference. One of her students, for example, struggled with a robot that kept missing the croissant it was supposed to pick up by 30 centimeters. The explanation turned out to be that the sensor was so slow that the robot was moving a half-second too early, based on historical information. They had to insert a pause before the robot could get it right.
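To make the latency point concrete, here is a minimal sketch in Python of the kind of fix Moon describes – not her students’ actual code; the sensor API, the robot object, and the timing values are all invented for illustration. The idea is simply to refuse to act on a reading older than some freshness threshold:

```python
import time

MAX_STALENESS = 0.1   # seconds; act only on readings newer than this (arbitrary value)

def fresh_reading(sensor):
    """Wait until the sensor reports a value recent enough to act on."""
    while True:
        value, captured_at = sensor.read()   # hypothetical API: returns (value, capture time)
        if time.time() - captured_at <= MAX_STALENESS:
            return value
        time.sleep(0.05)                     # the "pause": let a newer frame arrive

def grasp(robot, sensor):
    target = fresh_reading(sensor)           # without this wait, the robot acts on stale data
    robot.move_to(target)                    # hypothetical actuator call
```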

So much of the way we talk about robots and AI misrepresents those inner workings. A robot can’t “smell honey”; it merely has a sensor that’s sensitive to some chemicals and not others. It can’t “see purple” if its sensors are the usual red, green, blue. Even green may not be identifiable to an RGB sensor if the lighting is such that reflections make a shiny green surface look white. Faster and more diverse sensors won’t change the underlying physics. How many lawmakers understand this?
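In the same spirit, “seeing purple” reduces to a threshold someone has to pick over three numbers. A sketch, with cutoffs chosen arbitrarily, just to show the kind of engineering decision that never surfaces at the policy level:

```python
def looks_purple(r: int, g: int, b: int) -> bool:
    """Classify an RGB reading (0-255 per channel) as 'purple'.

    The sensor only reports three numbers; whether they count as purple
    depends entirely on these arbitrary cutoffs. Specular reflection can
    push all three channels toward 255, which is why a shiny green
    surface can read as 'white'.
    """
    return r > 100 and b > 100 and g < 80
```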

Related: what does it mean to be a robot? Most people attach greater intelligence to things that can move autonomously. But a modern washing machine is smarter than a Roomba, while an iPhone is smarter than either but can’t affect the physical world, as Smart observed at the very first We Robot, in 2012.

This year we are in Canada – to be precise, in Windsor, Ontario, looking across the river to Detroit, Michigan. Canadian law, like the country itself, is a mosaic: common law (inherited from Britain), civil law (inherited from France), and myriad systems of indigenous peoples’ law. Much of the time, said Suzie Dunn, new technology doesn’t require new law so much as reinterpretation and, especially, enforcement of existing law.

“Often you can find criminal law that already applies to digital spaces, but you need to educate the legal system how to apply it,” she said. Analogous: in the late 1990s, editors of the technology section at the Daily Telegraph had a deal-breaking question: “Would this still be a story if it were about the telephone instead of the Internet?”

We can ask that same question about proposed new law. Dunn and Katie Szilagyi asked what robots and AI change that requires a change of approach. They set us to consider scenarios to study this question: an autonomous vehicle kills a cyclist; an autonomous visa system denies entry to a refugee after facial recognition software identifies her in images of an LGBTQ protest that was illegal in her home country, flagging her as having committed a crime there. In the first case, it’s obvious that all parties will try to blame someone – or everyone – else, probably, as Madeleine Clare Elish suggested in 2016, on the human driver, who becomes the “moral crumple zone”. The second is the kind of case the EU’s AI Act sought to handle by giving individuals the right to meaningful information about the automated decision made about them.

Nadja Pelkey, a curator at Art Windsor-Essex, provided a discussion of AI in a seemingly incompatible context. Citing Georges Bataille, who in 1929 saw museums as mirrors, she invoked the word “catoptromancy”, the use of mirrors in mystical divination. Social and political structures are among the forces that can distort the reflection. So are the many proliferating AI tools such as “personalized experiences” and other types of automation, which she called “adolescent technologies without legal or ethical frameworks in place”.

Where she sees opportunities for AI is in what she called the “invisible archives”. These include much administrative information, material that isn’t digitized, ephemera such as exhibition posters, and publications. She favors small tools and small private models used ethically so they preserve the rights of artists and cultural contexts, and especially consent. In a schematic she outlined a system that can’t be scraped, that allows data to be withdrawn as well as added, and that enables curiosity and exploration. It’s hard to imagine anything less like the “AI” being promulgated by giant companies. *That* type of AI was excoriated in a final panel on technofascism and extractive capitalism.

It’s only later I remember that Pelkey also said that catoptromancy mirrors were first made of polished obsidian.

In other words, black mirrors.

Illustrations: Divination mirror made of polished obsidian by artisans of the Aztec Empire of Mesoamerica between the 15th and 16th centuries (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A short history of We Robot 2012-

On the eve of We Robot 2025, here are links to my summaries of previous years. 2014 is missing; I didn’t make it that year for family reasons. There was no conference in 2024 in order to move the event back to its original April schedule (covid caused its move to September in 2020). These are my personal impressions; nothing I say here should be taken as representing the conference, its founders, its speakers, or their institutions.

We Robot was co-founded by Michael Froomkin, Ryan Calo, and Ian Kerr to bring together lawyers and engineers to think early about the coming conflicts in robots, law, and policy.

2024 No conference.

2023 The end of cool. After struggling to design a drone delivery service that had any benefits over today’s cycling couriers, we find ourselves less impressed by a robot that can do somersaults but can’t do anything useful.

2022 Insert a human. “Robots” are now “sociotechnical systems”.

Workshop day Coding ethics. The conference struggles to design an ethical robot.

2021 Plausible diversions. How will robots reshape human space?

Workshop day Is the juice worth the squeeze?. We think about how to regulate delivery robots, which will likely have no user-serviceable parts. Title from Woody Hartzog.

2020 (virtual) The zero on the phone. AI exploitation becomes much more visible.

2019 Math, monsters, and metaphors. The trolley problem is dissected; the true danger is less robots than the “pile of math that does some stuff”.

Workshop day The Algernon problem. New participants remind us that robots/AI are carrying out the commands of distant owners.

2018 Deception. The conference tries to tease out what makes robots different and revisits Madeleine Clare Elish’s moral crumple zones after the first pedestrian death by self-driving car.

Workshop day Late, noisy, and wrong. Engineers Bill Smart and Cindy Grimm explain why sensors never capture what you think they’re capturing and how AI systems use their data.

2017 Have robot, will legislate. Discussion of risks this year focused on the intermediate situation, when automation and human norms clash.

2016 Humans all the way down. Madeleine Clare Elish introduces “moral crumple zones”.

Workshop day: The lab and the world. Bill Smart uses conference attendees in formation to show why building a robot is difficult.

2015 Multiplicity. A robot pet dog begs its owner for an upgraded service subscription.

2014 Missed conference

2013 Cautiously apocalyptic. Diversity of approaches to regulation will be needed to handle the diversity of robots.

2012 A really fancy hammer with a gun. Unsentimental engineer Bill Smart provided the title.

wg

Lost futures

In early December, the Biden administration’s Department of Justice filed its desired remedies, having won its case that Google is a monopoly. Many foresaw a repeat of 2001, when the incoming Bush administration dropped the Clinton DoJ’s plan to break up Microsoft.

Maybe not this time. In its first filing, Trump’s DoJ still wants Google to divest itself of the Chrome browser and intends to bar it from releasing other browsers. The DoJ also wants to impose some restrictions on Android and Google’s AI investments.

At The Register, Thomas Claburn reports that Mozilla is objecting to the DoJ’s desire to bar Google from paying other companies to promote its search engine by default. Those payments, Mozilla president Mark Surman admits to Claburn, keep small independent browsers afloat.

Despite Mozilla’s market shrinkage and current user complaints, it and its fellow minority browsers remain important in keeping the web open and out of full corporate control. It’s definitely counter-productive if the court, in trying to rein in Google’s monopoly, takes away what viability these small players have left. They are us.

***

On the other hand, it’s certainly not healthy for those small independents to depend for their survival on the good will of companies like Google. The Trump administration’s defunding of – among so many things – scientific research is showing just how dangerous it can be.

Within the US itself, the government has announced cuts to indirect funding, which researchers tell me are crippling to universities; $800 million in grants cut at Johns Hopkins, $400 million at Columbia University, and so many more.

But it doesn’t stop in the US or with the cuts to USAID, which have disrupted many types of projects around the world, some of them scientific or medical research. The Trump administration is using its threats to scientific funding across the world to control speech and impose its, um, values. This morning, numerous news sources report that Australian university researchers have been sent questionnaires they must fill out to justify their US-funded grants. Among the questions: their links to China and their compliance with Trump’s gender agenda.

To be fair, using grants and foreign aid to control speech is not a new thing for US administrations. For example, Republican presidents going back to Reagan have denied funding to international groups that advocated abortion rights or provided abortions, limiting what clinicians could say to pregnant patients. (I don’t know if there are Democratic comparables.)

Science is always political to some extent: think of Galileo, condemned for stating that the earth was not the center of the universe. Or take intelligence: in his 1981 book The Mismeasure of Man, Stephen Jay Gould documented a century or more of research by white, male scientists finding that white, male scientists were the smartest things on the planet. Or say it in Big Tobacco and Big Oil, which spent decades covering up research showing that their products were poisoning us and our planet.

The Trump administration’s effort is, however, a vastly expanded attempt that appears to want to squash anything that disagrees with policy, and it shows the dangers of allowing any one nation to amass too much “soft power”. The consequences can come quickly and stay long. It reminds me of what happened in the UK in the immediate post-EU referendum period, when Britain-based researchers found themselves being dropped from cross-EU projects because they were “too risky”, and many left for jobs in other countries where they could do their work in peace.

The writer Prashant Vaze sometimes imagines a future in which India has become the world’s leading scientific and technical superpower. This imagined future seems more credible by the day.

***

It’s strange to read that the 35-year-old domestic robots pioneer, iRobot, may be dead in a year. It seemed like a sure thing; early robotics researchers say that people were begging for robot vacuum cleaners even in the 1960s, perhaps inspired by Rosie, The Jetsons‘ robot maid.

Many people may have forgotten (or not known) the excitement that attended the first Roombas in 2002. Owners gave them names, took them on vacation, and posted videos. It looked like the start of a huge wave.

I bought a Roomba in 2003, reviewing it so enthusiastically that an emailer complained that I should have said I had been given it by a PR person. For a few happy months it wandered around cleaning.

Then one day it stopped moving and I discovered that long hair paralyzed it. I gave it away and went back to living with moths.

The Roomba now has many competitors, some highly sophisticated, run by apps, and able to map rooms, identify untouched areas, scrub stains, and clean in corners. Even so, domestic robots have not proliferated as imagined 20 – or 12 – years ago. I visit people’s houses, and while I sometimes encounter Alexas or Google Assistants, robot vacuums seem rare.

So much else of the smart home as imagined by companies like Microsoft and IBM remains dormant. It does seem like – perhaps a reflection on my social circle – the “smart home” is just a series of remote-control apps and outsourced services. Meh.

Illustrations: Rosie, the Jetsons‘ XB-500 robot maid, circa 1962.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Unsafe

The riskiest system is the one you *think* you can trust. Say it in encryption: the least secure encryption is encryption that has unknown flaws. Because, in the belief that your communication or data is protected, you feel it’s safe to indulge in what in other contexts would be obviously risky behavior. Think of it like an unseen hole in a condom.

This has always been the most dangerous aspect of the UK government’s insistence that its technical capability notices remain secret. Whoever alerted the Washington Post to the notice Apple received a month ago commanding it to weaken its Advanced Data Protection performed an important public service. Now, Carly Page reports at TechCrunch, based on a blog posting by security expert Alec Muffett, that the UK government is recognizing that principle by quietly removing from its web pages advice to use that same encryption, advice that was directed at people whose communications are at high risk, such as barristers and other legal professionals. Apple has since withdrawn ADP in the UK.

More important long-term, at the Financial Times, Tim Bradshaw and Lucy Fisher report that Apple has appealed the government’s order to the Investigatory Powers Tribunal. This will be, as the FT notes, the first time the government’s powers under the Investigatory Powers Act (2016) to compel the weakening of security features have been tested in court. A ruling that the order was unlawful could be an important milestone in the seemingly interminable fight over encryption.

***

I’ve long had the habit of doing minor corrections on Wikipedia – fixing typos, improving syntax – as I find them in the ordinary course of research. But recently I have had occasion to create a couple of new pages, with the gratefully-received assistance of a highly experienced Wikipedian. At one time, I’m sure this was a matter of typing a little text, garlanding it with a few bits of code, and garnishing it with the odd reference, but standards have been rising all along, and now if you want your newly-created page to stay up it needs a cited reference for every statement of fact, at a minimum one per sentence. My modest pages had ten to 20 references, some servicing multiple items. Embedding the page in the encyclopedia matters, too, so you need to go through the pages that mention your subject and link those mentions to the new page. Even then, some review editor may come along and delete the page if they think the subject is not notable enough or violates someone’s copyright. You can appeal, of course…and fix whatever they’ve said the problem is.

It should be easier!

All of this detailed work is done by volunteers, who discuss the decisions they make in full view on the talk page associated with every content page. Studying the more detailed talk pages is a great way to understand how the encyclopedia, and knowledge in general, is curated.

Granted, Wikipedia is not perfect. Its policy on primary sources can be frustrating, and errors in cited secondary sources can be difficult to correct. The culture can be hostile if you misstep. Its coverage is uneven. But, as Margaret Talbot reports at the New Yorker and Amy Bruckman writes in her 2022 book, Should You Believe Wikipedia?, all those issues are fully documented.

Early on, Wikipedia was often the butt of complaints from people angry that this free encyclopedia made by *amateurs* threatened the sustainability of Encyclopaedia Britannica (which has survived though much changed). Today, it’s under attack by Elon Musk and the Heritage Foundation, as Lila Shroff writes at The Atlantic. The biggest danger isn’t to Wikipedia’s funding; there’s no offer anyone can make that would lead to a sale. The bigger vulnerability is the safety of individual editors. Scold they may, but as a collective they do important work to ensure that facts continue to matter.

***

Firefox users are manifesting more and more unhappiness about the direction Mozilla is taking with Firefox. The open source browser’s historic importance is outsized compared to its worldwide market share, which as of February 2025 is 2.63%, according to Statcounter. A long tail of other browsers is based on it, such as LibreWolf, Waterfox, and the privacy-protecting Tor Browser.

The latest complaint, as Liam Proven and Thomas Claburn write at The Register, is that Mozilla has removed its commitment not to sell user data from Firefox’s terms and conditions and privacy policy. Mozilla responded that the company doesn’t sell user data “in the way that most people think about ‘selling data’” but needed to change the language because of jurisdictional variations in what the word “sell” means. Still, the promise is gone.

This follows Mozilla’s September 2024 decision, reported by Richard Speed at The Register, to turn on by default a “privacy-preserving feature” to track users, which led the NGO noyb to file a complaint with the Austrian data protection authority. And a month ago, Mark Hachman reported at PC World that Mozilla is building access to third-party generative AI chatbots into Firefox, and there are reports that it’s adding “AI-powered tab grouping”.

All of these are basically unwelcome, and of all organizations Mozilla should have been able to foresee that. Go away, AI.

***

Molly White is expertly covering the Trump administration’s proposed “US Crypto Reserve”. It remains only to add Rachel Maddow, who compared it to having a strategic reserve of Beanie Babies.

Illustrations: Beanie Baby pelican.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Optioned

The UK’s public consultation on creating a copyright exception for AI model training closed on Tuesday, and it was profoundly unsatisfying.

Many, many creators and rights holders (who are usually on opposing sides when it comes to contract negotiations) have opposed the government’s proposals. Every national newspaper ran the same Make It Fair front page opposing them; musicians released a silent album. In the Guardian, the peer and independent filmmaker Beeban Kidron calls the consultation “fixed” in favor of the AI companies. Kidron’s resume includes directing Bridget Jones: The Edge of Reason (2004) and the meticulously researched 2013 study of teens online, InRealLife, and she goes on to call the government’s preferred option a “wholesale transfer of wealth from hugely successful sector that invests hundreds of millions in the UK to a tech industry that extracts profit that is not assured and will accrue largely to the US and indeed China.”

The consultation lists four options: leave the situation as it is; require AI companies to get licenses to use copyrighted work (like everyone else has to); allow AI companies to use copyrighted works however they want; and allow AI companies to use copyrighted works but grant rights holders the right to opt out.

I don’t like any of these options. I do believe that creators will figure out how to use AI tools to produce new and valuable work. I *also* believe that rights holders will go on doing their best to use AI to displace or impoverish creators. That is already happening in journalism and voice acting, and was a factor in the 2023 Hollywood writers’ strike. AI companies have already shown that they won’t necessarily abide by arrangements that lack the force of law. The UK government acknowledged this in its consultation document, saying that “more than 50% of AI companies observe the longstanding Internet convention robots.txt.” So almost half of them *don’t*.

At Pluralistic, Cory Doctorow argued in February 2023 that copyright won’t solve the problems facing creators. His logic is simple: after 40 years of expanding copyright terms (from a maximum of 56 years in 1975 to “author’s life plus 70” now), creators are being paid *less* than they were then. Yes, I know Taylor Swift has broken records for tour revenues and famously took back control of her own work. But millions of others need, as Doctorow writes, structural market changes. Doctorow highlights what happened with sampling: the copyright maximalists won, and now musicians are required to sign away sampling rights to their labels, who pocket the resulting royalties.

For this sort of reason, the status quo, which the consultation calls “option 0”, seems likely to open the way to lots more court cases and conflicting decisions, but provide little benefit to anyone. A licensing regime (“option 1”) will likely go the way of sampling. If you think of AI companies as inevitably giant “pre-monopolized” outfits, as Vladan Joler did at last year’s Computers, Privacy, and Data Protection conference, “option 2” looks like simply making them richer and more powerful at the expense of everyone else in the world. But so does “option 3”, since that *also* gives AI companies the ability to use anything they want. Large rights holders will opt out and demand licensing fees, which they will keep, and small ones will struggle to exercise their rights.

As Kidron said, the government’s willingness to take chances with the country’s creators’ rights is odd, since intellectual property is a sector in which Britain really *is* a world leader. On the other hand, as Glyn Moody says, all of it together is an anthill compared to the technology sector.

None of these choices is a win for creators or the public. The government’s preferred option 3 seems unlikely to achieve its twin goals of making Britain a world leader in AI and mainlining AI into the veins of the nation, as the government put it last month.

China and the US both have complete technology stacks *and* gigantic piles of data. The UK is likely better able to matter in AI development than many countries – see for example DeepMind, which was founded here in 2010. On the other hand, also see DeepMind for the probable future: Google bought it in 2014, and now its technology and profits belong to that giant US company.

At Walled Culture, Glyn Moody argued last May that requiring the AI companies to pay copyright industries makes no sense; he regards using creative material for training purposes as “just a matter of analysis” that should not require permission. And, he says correctly, there aren’t enough such materials anyway. Instead, he and Mike Masnick at Techdirt propose that the generative AI companies should pay creators of all types – journalists, musicians, artists, filmmakers, book authors – to provide them with material they can use to train their models, and the material so created should be placed in the public domain. In turn it could become new building blocks the public can use to produce even more new material. As a model for supporting artists, patronage is old.

I like this effort to think differently a lot better than any of the government’s options.

Illustrations: Tuesday’s papers, unprecedentedly united to oppose the government’s copyright plan.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Isolate

Yesterday, the Global Encryption Coalition published a joint letter calling on the UK to rescind its demand that Apple undermine (“backdoor”) the end-to-end encryption on its services. The Internet Society is taking signatures until February 20.

The background: on February 7, Joseph Menn reported at the Washington Post (followed by Dominic Preston at The Verge) that in January the office of the Home Secretary sent Apple a technical capability notice under the Investigatory Powers Act (2016) ordering it to provide access to content that anyone anywhere in the world has uploaded to iCloud and encrypted with Apple’s Advanced Data Protection.

Technical capability notices are supposed to be secret. It’s a criminal offense to reveal that you’ve been sent one. Apple can’t even tell users that their data may be compromised. (This kind of thing is why people publish warrant canaries.) Menn notes that even if Apple withdraws ADP in the UK, British authorities will still demand access to encrypted data everywhere *else*. So it appears that if the Home Office doesn’t back down and Apple is unwilling to cripple its encryption, the company will either have to withdraw ADP across the world or exit the UK market entirely. At his Odds and Ends of History blog, James O’Malley calls the UK’s demand stupid, counter-productive, and unworkable. At TechRadar, Chiara Castro asks who’s next, and quotes Big Brother Watch director Silkie Carlo: “unprecedented for a government in any democracy”.

When the UK first began demanding extraterritorial jurisdiction for its interception rules, most people wondered how the country thought it would be able to impose it. That was 11 years ago; it was one of the new powers codified in the Data Retention and Investigatory Powers Act (2014) and kept in its replacement, the IPA in 2016.

Governments haven’t changed – they’ve been trying to undermine strong encryption in the hands of the masses since 1991, when Phil Zimmermann launched PGP – but the technology has, as Graham Smith recounted at Ars Technica in 2017. Smartphones are everywhere. People store their whole lives on them, and giant technology companies encrypt both the device itself and the cloud backups. Government demands have changed to reflect that, from focusing on the individual with key escrow and key lengths to focusing on the technology provider with client-side scanning, encrypted messaging (see also the EU), and now cloud storage.

At one time, a government could install a secret wiretap by making a deal with a legacy telco. The Internet’s proliferation of communications providers changed that for a while. During the resulting panic the US passed the Communications Assistance for Law Enforcement Act (1994), which requires Internet service providers and telecommunications companies to install wiretap-ready equipment – originally for telephone calls, later broadband and VOIP traffic as well.

This is where the UK government’s refusal to learn from others’ mistakes is staggering. Just four months ago, the US discovered Salt Typhoon, a giant Chinese hack into its core telecommunications networks that was specifically facilitated by…by…CALEA. To repeat: there is no such thing as a magic hole that only “good guys” can use. If you undermine everyone’s privacy and security to facilitate law enforcement, you will get an insecure world where everyone is vulnerable. The hack has led US authorities to promote encrypted messaging.

Joseph Cox’s recent book, Dark Wire, touches on this. It’s a worked example of what law enforcement internationally can do if given open access to all the messages criminals send across a network when they think they are operating in complete safety. Yes, the results were impressive: hundreds of arrests, dozens of tons of drugs seized, masses of firearms impounded. But, Cox writes, all that success was merely a rounding error in the global drug trade. Universal loss of privacy and security versus a rounding error: it’s the definition of “disproportionate”.

It remains to be seen what Apple decides to do and whether we can trust what the company tells us. At his blog, Alec Muffett is collecting ongoing coverage of events. The Future of Privacy Forum celebrated Safer Internet Day, February 11, with an infographic showing how encryption protects children and teens.

But set aside for a moment all the usual arguments about encryption, which really haven’t changed in over 30 years because mathematical reality hasn’t.

In the wider context, Britain risks making itself a technological backwater. First, there’s the backdoored encryption demand, which threatens every encrypted service. Second, there’s the impact of the onrushing Online Safety Act, which comes into force in March. Ofcom, the regulator charged with enforcing it, is issuing thousands of pages of guidance that make it plain that only large platforms will have the resources to comply. Small sites, whether businesses, volunteer-run Fediverse instances, blogs, established communities, or web boards, will struggle even if Ofcom starts to do a better job of helping them understand their legal obligations. Many will likely either shut down or exit the UK, leaving the British Internet poorer and more isolated as a result. Ofcom seems to see this as success.

It’s not hard to predict the outcome if these laws converge in the worst possible timeline: a second Brexit, this one online.

Illustrations: T-shirt (gift from Jen Persson).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Banning TikTok

Two days from now, TikTok may go dark in the US. Nine months ago, in April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act, banning TikTok if its Chinese owner, ByteDance, has not removed itself from ownership by January 19, 2025.

Last Friday, January 10, the US Supreme Court heard three hours of arguments in consolidated challenges filed by TikTok and a group of TikTok users: TikTok, Inc. v. Garland and Firebaugh v. Garland. Too late?

As a precaution, Kinling Lo and Viola Zhou report at Rest of World, at least some of TikTok’s 170 million American users are building community arks elsewhere – the platform Xiaohongshu (“RedNote”), for one. This is not the smartest choice; it, too, is Chinese and could become subject to the same national security concerns, like the other Chinese apps Makena Kelly reports at Wired are scooping up new US users. Ashley Belanger reports at Ars Technica that rumors say the Chinese are thinking of segregating these American newcomers.

“The Internet interprets censorship as damage, and routes around it,” EFF founder and activist John Gilmore told Time Magazine in 1993. He meant Usenet, which could get messages past individual server bans, but it’s really more a statement about Internet *users*, who will rebel against bans. That for sure has not changed despite the more concentrated control of the app ecosystem. People will access each other by any means necessary. Even *voice* calls.

PAFACA bans apps from four “foreign adversaries to the United States” – China, Russia, North Korea, and Iran. That being the case, Xiaohongshu/RedNote is not a safe haven. The law just hasn’t noticed this hitherto unknown platform yet.

The law’s passage in April 2024 was followed in early May by TikTok’s legal challenge. Because of the imminent sell-by deadline, the case was fast-tracked, and landed in the US Court of Appeals for the District of Columbia Circuit, which ruled in early December. That court upheld the law and rejected both TikTok’s constitutional challenge and its request for an injunction staying enforcement until the constitutional claims could be fully reviewed by the Supreme Court. TikTok appealed that decision, and so last week here we were. This case is separate from Free Speech Coalition v. Paxton, which SCOTUS heard *this* week and which challenges Texas’s 2023 age verification law (H.B. 1181); that case could have even further-reaching Internet effects.

Here it gets silly. Incoming president Donald Trump, who originally initiated the ban but was blocked by the courts on constitutional grounds, filed an amicus brief arguing that any ban should be delayed until after he’s taken office on Monday because he can negotiate a deal. NBC News reports that the outgoing Biden administration is *also* trying to stop the ban and, per Sky News, won’t enforce it if it takes effect.

Previously, both guys wanted a ban, but I guess now they’ve noticed that, as Mike Masnick says at Techdirt, it makes them look out of touch to nearly half the US population. In other words, they moved from “Oh my God! The kids are using *TikTok*!” to “Oh, my God! The kids are *using* TikTok!”

The court transcript shows that TikTok’s lawyers made three main arguments. One: TikTok is now incorporated in the US, and the law is “a burden on TikTok’s speech”. Two: PAFACA is content-based, in that it selects types of content to which it applies (user-generated) and ignores others (reviews). Three: the US government has “no valid interest in preventing foreign propaganda”. Therefore, the government could find less restrictive alternatives, such as banning the company from sharing sensitive data. In answer to questions, TikTok’s lawyers claimed that the US’s history of banning foreign ownership of broadcast media is not relevant because it was due to bandwidth scarcity. The government’s lawyers countered with national security: the Chinese government could manipulate TikTok’s content and use the data it collects for espionage.

Again: the Chinese can *buy* piles of US data just like anyone else. TikTok does what Silicon Valley does. Pass data privacy laws!

Experts try to read the court. Amy Howe at SCOTUSblog says the justices seemed divided, but overall likely to issue a swift decision. At This Week in Google and Techdirt, Cathy Gellis says the proceedings have left her “cautiously optimistic” that the court will not undermine the First Amendment, a feeling seemingly echoed by some of the panel of experts who liveblogged the proceedings.

The US government appears to have tied itself up in knots: SCOTUS may uphold a Congressionally-legislated ban neither old nor new administration now wants, that half the population resents, and that won’t solve the US’s pervasive privacy problems. Lost on most Americans is the irony that the rest of the world has complained for years that under the PATRIOT Act foreign subsidiaries of US companies are required to send data to US intelligence. This is why Max Schrems keeps winning cases under GDPR.

So, to wrap up: the ban doesn’t solve the problem it purports to solve, and it’s not the least restrictive possibility. On the other hand, national security? The only winner may be, as Jason Koebler writes at 404Media, Mark Zuckerberg.

Illustrations: Logo of Douyin, ByteDance’s Chinese version of TikTok.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.