Information wants to be surveilled

It has long been the case that the person sitting next to you on an airplane may have paid a very different price for their seat than you did. For two reasons. First, airlines don’t price seats – they price itineraries. A seat from London to New York may be priced very differently depending on whether it’s one-way, one-half of a round trip, or one stage in a larger, multi-stage journey. Second, however, airlines are sophisticated about maximizing the value of your seat, responding to patterns of peak travel (Thanksgiving, August), when you buy it, and other factors. Despite their complexity, those prices are supposed to be based on published tariffs that anyone can, in theory, calculate for themselves and come up with the same answer.

In 2012, travel and data privacy expert Edward Hasbrouck explained all this as part of documenting and opposing the airlines’ desire to move to personalized pricing. Instead of acting as common carriers and publishing tariffs that apply across the board, with personalized pricing the airlines would use the information they have about you to charge what you can and would be willing to pay. That year, the International Air Transport Association called it “New Distribution Capability”.

This is a much scarier proposition now than it was then; companies have a lot more data they can exploit. In 2012, they might simply have known your flying habits and credit score, while balancing their desire to get the most they can for the seat against the time they have left to sell it. Uber uses similar tactics; it raises the price of a ride – “surge pricing”, or “dynamic pricing” – when demand is high.
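Uber’s actual pricing system is proprietary, but the basic surge idea the column describes – raise prices when demand outstrips supply – can be sketched in a few lines. This is purely illustrative; the function names, the cap, and the numbers are invented for the example:

```python
# Toy surge-pricing sketch. This is NOT any company's real algorithm
# (those are proprietary); it only illustrates the demand/supply idea.

def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 3.0) -> float:
    """Scale the fare with the demand/supply ratio, floored at 1x and capped."""
    if available_drivers <= 0:
        return cap
    ratio = ride_requests / available_drivers
    return min(cap, max(1.0, ratio))

def ride_price(base_fare: float, requests: int, drivers: int) -> float:
    """Base fare times the current surge multiplier."""
    return round(base_fare * surge_multiplier(requests, drivers), 2)

print(ride_price(10.0, 50, 100))   # quiet period: no surge, 10.0
print(ride_price(10.0, 200, 80))   # peak demand: 2.5x surge, 25.0
```

The point of the sketch is that surge pricing responds to aggregate conditions everyone can observe; personalized pricing, by contrast, would key the multiplier to data about you.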

Today, an airline might know – or be able to find out – that you are racing against time to see a beloved relative before they die. And why should it stop with airlines? Exploitative possibilities abound.

Uber has already been accused – on disputed evidence – of charging more when riders’ phone batteries are low. Retailers collect shopping histories through loyalty cards and apps; customers who opt out often already pay more.

Some types of price discrimination have long been illegal under consumer protection law. On February 12, US senators Ben Ray Luján (D-NM) and Jeff Merkley (D-OR) introduced the Stop Price Gouging in Grocery Stores Act of 2026 to block the practice in grocery stores. Other efforts are also underway in the US. This week’s news is calling this “surveillance pricing” or “predatory pricing”, terms that accurately reflect its underpinning in data collection and surveillance capitalism.

This is not really a story about specific technologies, though “AI” inevitably features; the issue, as Hasbrouck writes, is opacity and discrimination.

The technological pieces are in place to make this all much worse. Not just online, where prices are easily generated on the fly, but also in real-world retailers via wireless connections and electronic shelf tags, which already exist in some stores. A May 2025 study finds that these tags are so far not used to implement surge pricing but to update sale prices and offer discounts – but for how long?

In a May 2025 paper in the International Journal of Research in Marketing, researchers examine the rise of algorithmic pricing – Uber, landlords, retailers – generally and personalized pricing in particular. The authors note that the latter requires market power, and buyers must have limited ability to exploit price differences. And also: it’s profitable (duh). The authors go on to discuss the role of privacy, data protection, consumer laws, and backlash in curbing unfairness. At Big, which focuses on consolidation and market power, Matt Stoller warns of the potential power of Google’s plan to run pricing strategies for advertisers.
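Why personalized pricing is so profitable – and why the paper’s conditions (market power, no resale between buyers) matter – can be shown with a toy calculation. The willingness-to-pay figures below are invented for illustration, and this is a textbook economics sketch, not any company’s actual system:

```python
# Toy price-discrimination sketch (illustrative only; hypothetical numbers).
# Assumes the seller can estimate each buyer's willingness to pay (WTP)
# from collected data, and that buyers cannot resell to one another.

buyers_wtp = [40.0, 75.0, 120.0, 300.0]  # hypothetical WTP estimates

def uniform_revenue(price: float, wtps: list[float]) -> float:
    """Everyone sees the same posted price; only those with WTP >= price buy."""
    return sum(price for w in wtps if w >= price)

def personalized_revenue(wtps: list[float], discount: float = 0.95) -> float:
    """Charge each buyer just under their individually estimated WTP."""
    return sum(w * discount for w in wtps)

# Best the seller can do with one posted price (trying each WTP as the price):
best_uniform = max(uniform_revenue(p, buyers_wtp) for p in buyers_wtp)
print(best_uniform)                      # 300.0
print(personalized_revenue(buyers_wtp))  # 508.25
```

In this toy market, a single posted price tops out at 300, while charging each buyer 95% of their estimated willingness to pay yields 508.25 – which is why, as the paper notes, the only real brakes are privacy law, consumer protection, and backlash.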

Hasbrouck returned to the subject last year while recapping the latest season of The Amazing Race to explain the airlines’ use of systems that deliver increasingly opaque and unpredictable pricing and the lack of enforcement enabling it.

In a pair of posts, the ACLU’s Jay Stanley follows Hasbrouck’s logic to new levels, laying out a possible future these developments may bring. The desire to wring every last possible bit of profit out of us – we’re talking everyone who sells goods or services now, not just airlines – in Stanley’s view will lead to collecting more and more detailed data. The digital identification infrastructure being built into airports – as planned here in 2013 and shown arriving in 2022 – will ensure there is no countermeasure we can take to escape monitoring and data collection. In Stanley’s projection, stores will be able to demand a digital identification sign-in as a condition of entry.

Again, a bit of this is here already. In the UK, Facewatch, which we first encountered in 2013, is used by some major retailers to identify and bar shoppers who have previously been caught shoplifting or being violent. Recently, the system misidentified a shopper, and staff ejected the wrong person, who then found it difficult to prove the mistake.

In other words, all these technologies, wrapped up together, could enable a world not much different from that imagined by Ira Levin in his 1970 book This Perfect Day, where everything anyone wanted to do required permission from a centralized system. Although: the key to making that work was drugging the entire population.

Illustrations: Burmese python in Florida, 2011 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: More Than a Glitch

More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
By Meredith Broussard
MIT Press
ISBN: 978-0-262-04765-4

At the beginning of the 1985 movie Brazil, a family’s life is ruined when a fly gets stuck in a typewriter key so that the wrong man is carted away to prison. It’s a visual play on “computer bug”, so named after a moth got trapped in a computer at Harvard.

NYU associate professor Meredith Broussard, in her recent book More Than a Glitch, would call both the fly and the moth a “glitch”. In the movie, the error is catastrophic for Buttle-not-Tuttle and his family, but it’s a single, ephemeral mistake that can be prevented with insecticide and cross-checking. A “bug” is more complex and more significant: it’s “substantial”, “a more serious matter that makes software fail”. It “deserves attention”. It’s the difference between the lone rotten apple in a bushel full of good ones and a barrel that causes all the apples put in it to rot.

This distinction is Broussard’s prelude to her fundamental argument that the lack of fairness in computer systems is persistent, endemic, and structural. In the book, she examines numerous computer systems that are already out in the world causing trouble. After explaining the fundamentals of machine bias, she goes through a variety of sectors and applications to examine failures of fairness in each one. In education, proctoring software penalizes darker-skinned students by failing to identify them accurately, and algorithms used to estimate scores on tests canceled during the pandemic penalized exceptional students from unexpected backgrounds. In health, long-practiced “race correction” that derives from slavery preferences white patients for everything from painkillers to kidney transplants – and gets embedded into new computer systems built to replicate existing practice. If computer developers don’t understand the way in which the world is prejudiced – and they don’t – how can the systems they create be more neutral than the precursors they replace? Broussard delves inside each system to show why, not just how, it doesn’t work as intended.

In other cases Broussard highlights, part of the problem is rigid inflexibility in back-end systems that need to exchange data. There’s little benefit in having 58 gender options if the underlying database only supports two choices. At a doctor’s office, Broussard is told she can only check one box for race; she prefers to check both “black” and “white” because in medical settings it may affect her treatment. The digital world remains only partially accessible. And, as Broussard discovered when she was diagnosed with breast cancer, even supposed AI successes like reading radiology films are overhyped. This section calls back to her 2018 book, Artificial Unintelligence, which did a good job of both explaining how machine learning and “AI” computer systems work and why a lot of the things the industry says work…really don’t (see also self-driving cars).

Broussard concludes by advocating for public interest technology and a rethink. New technology imitates the world it comes from; computers “predict the status quo”. Making change requires engineering technology so that it performs differently. It’s a tall order, and Broussard knows that. But wasn’t that the whole promise the technology founders made? That they could change the world to empower the rest of us?