Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, eager to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “The Brussels Effect”, derived from the California Effect, in which compliance with regulation voluntarily shifts toward the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, that means three things: that companies will comply with the most stringent rules to ensure they are universally compliant, that they want to compete in the EU, and that they need to. If you want your rules to dominate, it seems like a sound strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well-documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide information to help public utilities plan, despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

It was much more alarming to see Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016, Bayer bought Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power based on existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Admiring the problem

In one sense, the EU’s barely dry AI Act and the other complex legislation – the Digital Markets Act, Digital Services Act, GDPR, and so on – are a triumph. Flawed they may be, but they’re a genuine attempt to protect citizens’ human rights against a technology that is being birthed with numerous trigger warnings. The AI-with-everything program at this year’s Computers, Privacy, and Data Protection reflected that sense of accomplishment – but also the frustration that comes with knowing that all legislation is flawed, all technology companies try to game the system, and gaps will widen.

CPDP has had these moments before: new legislation always comes with a large dollop of frustration over the opportunities that were missed and the knowledge that newer technologies are already rushing forwards. AI, and the AI Act, more or less swallowed this year’s conference as people considered what it says, how it will play internationally, and the necessary details of implementation and enforcement. Two years ago at this event, inadequate enforcement of GDPR was a big topic.

The most interesting future gaps that emerged this year: monopoly power, quantum sensing, and spatial computing.

For at least 20 years we’ve been hearing about quantum computing’s potential threat to public key encryption – that day of doom has been ten years away as long as I can remember, just as the Singularity is always 30 years away. In the panel on quantum sensing, Chris Hoofnagle argued that, as he and Simson Garfinkel recently wrote at Lawfare and in their new book, quantum cryptanalysis is overhyped as a threat (although there are many opportunities for quantum computing in chemistry and materials science). However, quantum sensing is here now, works (the same fragility that makes qubits hard to compute with makes them exquisitely sensitive detectors), and is cheap. There is plenty of privacy threat here to go around: quantum sensing will benefit entirely different classes of intelligence, particularly remote, undetectable surveillance.

Hoofnagle and Garfinkel are calling this MASINT, for measurement and signature intelligence, and believe that it will become very difficult to hide things, even at a national level. In Hoofnagle’s example, a quantum sensor-equipped drone could fly over the homes of parolees to scan for guns.

Quantum sensing and spatial computing have this in common: they both enable unprecedented passive data collection. VR headsets, for example, collect all sorts of biomechanical data that can be mined more easily for personal information than people expect.

Barring change, all that data will be collected by today’s already-powerful entities.

The deeper level on which all this legislation fails particularly exercised Cristina Caffarra, the co-founder of the Centre for Economic Policy Research. In the panel on AI and monopoly, she argued that all this legislation is basically nibbling around the edges because it does not touch the real, fundamental problem: the power being amassed by the handful of companies who own the infrastructure.

“It’s economics 101. You can have as much downstream competition as you like but you will never disperse the power upstream.” The reports and other material generated by government agencies like the UK’s Competition and Markets Authority are, she says, just “admiring the problem”.

A day earlier, the Novi Sad professor Vladan Joler had already pointed out the fundamental problem: at the dawn of the Internet anyone could start with nothing and build something; what we’re calling “AI” requires billions in investment, so comes pre-monopolized. Many people dismiss Europe for not having its own homegrown Big Tech, but that overlooks open technologies: the Raspberry Pi, Linux, and the web itself, which all have European origins.

In 2010, the now-departing MP Robert Halfon (Con-Harlow) said at an event on reining in technology companies that only a company the size of Google – not even a government – could create Street View. Legend has it that open source geeks heard that as a challenge, and so we have OpenStreetMap. Caffarra’s fiery anger raises the question: at what point do the infrastructure providers become so entrenched that they could choke off an open source competitor at birth? Caffarra wants to build a digital public interest infrastructure using the gaps where Big Tech doesn’t yet have that control.

The Dutch GroenLinks MEP Kim van Sparrentak offered an explanation for why the AI Act doesn’t address market concentration: “They still dream of a European champion who will rule the world.” An analogy springs to mind: people who vote for tax cuts for billionaires because one day that might be *them*. Meanwhile, the UK’s Competition and Markets Authority finds nothing to investigate in Microsoft’s partnership with the French AI startup Mistral.

Van Sparrentak thinks one way out is through public procurement: adopt goals of privacy and sustainability, and support European companies. It makes sense; as the AI Now Institute’s Amba Kak noted, at the moment almost everything anyone does digitally has to go through the systems of at least one Big Tech company.

As Sebastiano Toffaletti, head of the secretariat of the European SME Alliance, put it, “Even if you had all the money in the world, these guys still have more data than you. If you don’t, and can’t, solve it, you won’t have anyone to challenge these companies.”

Illustrations: Vladan Joler shows Anatomy of an AI System, a map he devised with Kate Crawford of the human labor, data, and planetary resources that are extracted to make “AI”.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.