Privacy technologies typically fail for one of two reasons: 1) they’re too complicated and/or expensive to find widespread adoption among users; 2) sites and services ignore, undermine, or bypass them in order to preserve their business models. The first category includes numerous encryption-related attempts to secure communications; they failed repeatedly in the marketplace, usually because the resulting products were too technically difficult for most users to adopt in any numbers. In the end, encrypted messaging didn’t really take off until WhatsApp built it into its service.
This week saw a category two failure: Mozilla announced it is removing the Do Not Track option from Firefox’s privacy settings. DNT is simple enough to use (just a checkbox in the browser’s settings), but it falls on the wrong side of modern business models, and, other than in California, the US has no supporting legislation to make it enforceable. Granted, Firefox is a minority browser now, but the moment feels significant for this 13-year-old technology.
As Kevin Purdy explains at Ars Technica, DNT began as an FTC proposal, based on work by Christopher Soghoian and Sid Stamm, that aimed to create a mechanism for the web similar to the “Do Not Call” list for telephone networks.
The world in which DNT seemed a hopeful possibility seems almost quaint now: then, one could still imagine that websites might voluntarily respect the signal web browsers sent indicating users’ preferences. Do Not Call, by contrast, was established by US federal legislation. Despite various efforts, the US failed to pass legislation underpinning DNT, and it never became a web standard. The closest it has come to the latter is Section 2.12 of the W3C’s Ethical Web Principles, which says, “People must be able to change web pages according to their needs.” Can I say I *need* to not be tracked?
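For what it’s worth, the mechanism itself was almost absurdly simple: a browser with the setting enabled attached a `DNT: 1` header to every HTTP request, and acting on it was left entirely to the receiving site. A minimal sketch of what voluntary compliance looked like server-side (the Flask app and `render_page` helper are illustrative assumptions, not any real site’s code):

```python
from flask import Flask, request

app = Flask(__name__)

def render_page(tracking: bool) -> str:
    # Stand-in for real page rendering; a compliant site would omit
    # its tracking scripts when tracking=False.
    return "page without trackers" if not tracking else "page with trackers"

@app.route("/")
def index():
    # Browsers with DNT enabled sent "DNT: 1" with every request;
    # honoring it was purely voluntary on the site's part.
    if request.headers.get("DNT") == "1":
        return render_page(tracking=False)
    return render_page(tracking=True)

if __name__ == "__main__":
    app.run()
```

That voluntariness, of course, is the whole story: nothing anywhere obliged the site to take the `if` branch.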
Even at the time it seemed doubtful that web companies would comply. But DNT also suffered from unfortunate timing: it arrived just as the twin onslaught of smartphones and social media was changing the ethos that built the open web. Since then, as Cory Doctorow wrote earlier this year, the incentives have aligned to push web browsers to become faithless user agents, and conventions mean less and less.
Ultimately, DNT only ever worked insofar as users could trust websites to honor their preference. As it’s become clear they can’t, ad blockers have proliferated, depriving sites of the ad revenue they need to survive. Had DNT been successful, perhaps we’d all have been better off.
***
Also on the way out this week are Cruise’s San Francisco robotaxis. My last visit to San Francisco, about a year ago, was the first time I saw these in person. Most of the ones I saw were empty Waymos, perhaps in transit to a passenger, perhaps just pointlessly clogging the streets. Around then, a Cruise robotaxi ran over a pedestrian who’d already been hit by another car, then dragged her 20 feet. California regulators promptly suspended Cruise’s license to operate. Technology critic Paris Marx thought the incident would likely be Cruise’s “death knell”. And so it’s proving. The announcement from GM, which acquired Cruise in 2016 for $1 billion, leaves just Waymo standing in the US self-driving taxi business, though Tesla says it will enter the market late next year.
I always associate robotaxis with Vernor Vinge’s 2006 novel Rainbows End. In it, Vinge imagined a future in which robotaxis arrived within minutes of being hailed and replaced both public transport and private car ownership. By 2012 or so, his fictional imagining had become real-life projection, and many were predicting that our streets would imminently be filled with self-driving cars, taxis or not. In 2017, the conversation was all about what ethics to program into them and about reclaiming urban space. Now, that imagined future seems to be receding, as skeptics predicted it would.
***
American journalism has long operated under the presumption that the stories it produces should be “neutral”. Now, at the LA Times, CEO Patrick Soon-Shiong thinks he can enforce this neutrality by running an AI-based “bias meter” over the paper’s stories. If you remember, in the late stages of the US presidential election, Soon-Shiong blocked the paper from endorsing Kamala Harris. Reports say that the bias meter, due out next month, is meant to identify any bias the story’s source has and then deliver “both sides” of that story.
This is absurd. Few news stories have just two competing sides. A biased source can’t be countered by rewriting the story unless you include more sources and points of view, which means additional research. Most important, AI can’t think.
But readers can. And so what this story says is that Soon-Shiong doesn’t trust either the journalists who work for him or the paper’s readers to draw the conclusions he wants. If he knew more about journalism, he’d know that readers generally don’t adopt opinions just because someone tells them to. The far greater power, I recall reading years ago, lies in determining what readers *think about* by deciding what topics are important enough to cover. There’s bias there, too, but Soon-Shiong’s meter won’t show it.
Illustrations: Dominic Wilcox’s concept driverless sleeper car, 2014.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.