Sovereign

On May 19, a group of technologists, researchers, economists, and scientists published an open letter calling on British prime minister Keir Starmer to prioritize the development of “sovereign advanced AI capabilities through British startups and industry”. I am one of the many signatories. Britain’s best shot at the kind of private AI research lab under discussion was DeepMind, sold to Google in 2014; the country now has nothing that’s domestically owned.

Those with long memories know that LEO was the first computer used for a business application – running the Lyons tea rooms. In the 1980s, Britain led personal computing.

But the bigger point is less about AI specifically and more about information technology generally. At a panel at Computers, Privacy, and Data Protection in 2022, the former MEP Jan Philipp Albrecht, who was the European Parliament’s rapporteur for the General Data Protection Regulation, outlined his work building up cloud providers and local hardware as the Minister for Energy, Agriculture, the Environment, Nature and Digitalization of Schleswig-Holstein. As he explained, the public sector loses a great deal when it takes the seemingly easier path of buying proprietary software and services. Among the lost opportunities: building capacity and sovereignty. While his organization used services from all over the world, it set its own standards, one of which was that everything must be open source.

As the events of recent years are making clear, proprietary software fails if you can’t trust the country it’s made in, since you can’t wholly audit what it does. Even more important, once a company is bedded in, it can be very hard to excise it if you want to change supplier. That “customer lock-in” is, of course, a long-running business strategy, and it doesn’t only apply to IT. If we’re going to spend large sums of money on IT, there’s some logic to investing it in building up local capacity; one of the original goals in setting up the Government Digital Service was shifting to smaller, local suppliers instead of automatically turning to the largest and most expensive international ones.

The letter calls relying on US technology companies and services a “national security risk”. Elsewhere, I have argued that we must find ways to build trusted systems out of untrusted components, but the problem here is more complex because of the sensitivity of government data. Both the US and China can compel access to data stored by their companies, and the US in particular does not grant foreigners even the few privacy rights it grants its citizens.

It’s also long past time for countries to stop thinking in terms of “winning the AI race”. AI is an umbrella term with no single meaning. It would be better to think in terms of the many applications of AI, and to try to build things that matter.

***

As predicted here two years ago, AI models are starting to collapse, Steven J. Vaughan-Nichols writes at The Register.

The basic idea is that as the web becomes polluted with synthetically generated data, the quality of the data used to train the large language models degrades, so the models themselves become less useful. Even without that, the AI-with-everything approach many search engines are taking is poisoning their usefulness. Model collapse just makes it worse.

We would point out to everyone frantically adding “AI” to their services that the historical precedents are not on their side. In the late 1990s, every site felt it had to be a portal, so they all had search, and weather, and news headlines, and all sorts of crap that made it hard to find the search results. The result? Google disrupted all that with a clean, white page with no clutter (those were the days). Users all switched. Yahoo is the most obvious survivor from that period, and I think it’s because it does have some things – notably financial data – that it does extremely well.

It would be more satisfying to be smug about this, but the big issue is that companies keep on spraying toxic pollution over services we all need to be able to use. How bad does it have to get before they stop?

***

At Privacy Law Scholars this week, in a discussion of modern corporate oligarchs and their fantasies of global domination, an attendee asked if any of us had read the terms of service for Starlink. She wanted to draw our attention to the following passage, under “Governing Law”:

For Services provided to, on, or in orbit around the planet Earth or the Moon, this Agreement and any disputes between us arising out of or related to this Agreement, including disputes regarding arbitrability (“Disputes”) will be governed by and construed in accordance with the laws of the State of Texas in the United States. For Services provided on Mars, or in transit to Mars via Starship or other spacecraft, the parties recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities. Accordingly, Disputes will be settled through self-governing principles, established in good faith, at the time of Martian settlement.

Reminder: Starlink has contracts worth billions of dollars to provide Internet infrastructure in more than 100 countries.

So who’s signing this?

Illustrations: The Martian (Ray Walston) in the 1963-1966 TV series My Favorite Martian.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Dangerous corner

This year’s Computers, Privacy, and Data Protection conference arrived at a crossroads moment. The European Commission, wanting to compete to “win the AI race”, is pursuing an agenda of simplification. Based on a recent report by former European Central Bank president Mario Draghi, it’s looking to streamline or roll back some of the regulation the EU is famous for.

Cue discussion of “the Brussels Effect”, derived from the California Effect, in which compliance with regulation voluntarily shifts towards the strictest regime. As Mireille Hildebrandt explained in her opening keynote, this phenomenon requires certain conditions. In the case of data protection legislation, it depends on companies complying with the most stringent rules so as to be universally compliant, and on their wanting and needing to compete in the EU. If you want your rules to dominate, it seems like a strategy. Except: China’s in-progress data protection regime may well be the strongest when it’s complete, but in that very different culture it will include no protection against the government. So maybe not a winning game?

Hildebrandt went on to prove with near-mathematical precision that an artificial general intelligence can never be compatible with the General Data Protection Regulation – AGI is “based on an incoherent conceptualization” and can’t be tested.

“Systems built with the goal of performing any task under any circumstances are fundamentally unsafe,” she said. “They cannot be designed for safety using fundamental engineering principles.”

AGI failing to meet existing legal restrictions seems minor in one way, since AGI doesn’t exist now, and probably never will. But as Hildebrandt noted, huge money is being poured into it nonetheless, and the spreading impact of that is unavoidable even if it fails.

The money also makes politicians take the idea seriously, which is the likely source of the EU’s talk of “simplification” instead of fundamental rights. Many fear that forthcoming simplification packages will reopen GDPR with a view to weakening the core principles of data minimization and purpose limitation. As one conference attendee asked, “Simplification for whom?”

In a panel on conflicting trends in AI governance, Shazeda Ahmed agreed: “There is no scientific basis around the idea of sentient AI, but it’s really influential in policy conversations. It takes advantage of fear and privileges technical knowledge.”

AI is having another impact technology companies may not have noticed yet: it is aligning the interests of the environmental movement and the privacy field.

Sustainability and privacy have often been played off against each other. Years ago, for example, there were fears that councils might inspect household garbage for elements that could have been recycled. Smart meters may or may not reduce electricity usage, but definitely pose privacy risks. Similarly, many proponents of smart cities stress the sustainability benefits but overlook the privacy impact of the ubiquitous sensors.

The threat generative AI poses to sustainability is well documented by now. The threat the world’s burgeoning data centers pose to the transition to renewables is less often clearly stated, and it’s worse than we might think. Claude Turmes, for example, highlighted the need to impose standards for data centers. Where an individual is financially incentivized to charge their electric vehicle at night and help even out the load on the grid, the owners of data centers don’t care. They just want the power they need – even if that means firing up coal plants to get it. Absent standards, he said, “There will be a whole generation of data centers that…use fossil gas and destroy the climate agenda.” Small nuclear power reactors, which many are suggesting, won’t be available for years. Worse, he said, the data centers refuse to provide the information that would help public utilities plan, despite their huge consumption.

Even more alarming was the panel on the conversion of the food commons into data spaces. So far, most of what I had heard about agricultural data revolved around precision agriculture and its impact on farm workers, as explored in work (PDF) by Karen Levy, Solon Barocas, and Alexandra Mateescu. That was plenty disturbing, covering the loss of autonomy as sensors collect massive amounts of fine-grained information, everything from soil moisture to the distribution of seeds and fertilizer.

More alarming still was watching Monja Sauvagerd connect up in detail the large companies that are consolidating our food supply into a handful of platforms. Chinese government-owned Sinochem owns Syngenta; John Deere expanded by buying the machine learning company Blue River; and in 2016, Bayer agreed to buy Monsanto.

“They’re blurring the lines between seeds, agrichemicals, biotechnology, and digital agriculture,” Sauvagerd said. So: a handful of firms in charge of our food supply are building power on top of existing concentration. And selling them cloud and computing infrastructure services is the array of big technology platforms that are already dangerously monopolistic. In this case, “privacy”, which has always seemed abstract, becomes a factor in deciding the future of our most profoundly physical system. What rights should farmers have to the data their farms generate?

In her speech, Hildebrandt called the goals of TESCREAL – transhumanism, extropianism, singularitarianism, cosmism, rationalist ideology, effective altruism, and long-termism – “paradise engineering”. She proposed three questions for assessing new technologies: What will it solve? What won’t it solve? What new problems will it create? We could add a fourth: while they’re engineering paradise, how do we live?

Illustrations: Brussels’ old railway hub, next to its former communications hub, the Maison de la Poste, now a conference center.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: The Promise and Peril of CRISPR

The Promise and Peril of CRISPR
Edited by Neal Baer
Johns Hopkins University Press
ISBN: 978-1-4214-4930-2

It’s an interesting question: why are there so many articles in which eminent scientists fear an artificial general superintelligence (which is pure fantasy for the foreseeable future)…and so few that are alarmed by human gene editing tools, which are already arriving? The pre-birth genetic selection in the 1997 movie Gattaca is closer to reality than an AGI that decides to kill us all and turn us into paperclips.

In The Promise and Peril of CRISPR, Neal Baer collects a series of essays considering the ethical dilemmas posed by a technology that could be used to eliminate whole classes of disease and disability. The promise is important: gene editing offers the possibility of curing chronic, painful, debilitating congenital conditions. But for everything there may be a price. A recent episode of HBO Max’s TV series The Pitt showed the pain that accompanies sickle cell anemia. But the same gene that causes it protects carriers against malaria, an evolutionary advantage in some parts of the world. There may be many more such tradeoffs whose benefits are unknown to us.

Baer started with a medical degree, but quickly found work as a TV writer. He is best known for his work on the first seven years of ER and seasons two through twelve of Law and Order: Special Victims Unit. He has continued his medical career as an academic pediatrician, writes extensively, and works with many health care-related organizations.

Most books on new technologies like CRISPR (for clustered regularly interspaced short palindromic repeats) are either all hype or all panic. In pulling together the collection of essays that make up The Promise and Peril of CRISPR, Baer has brought in voices rarely heard in discussions of new technologies. Ethan Weiss tells the story of his daughter, who was born with albinism, which has more difficult consequences than simple lack of pigmentation. Had the technology been available, he writes, they might have opted to correct the faulty gene that causes it; lacking that, they discovered new richness in life that they would never wish to give up. In another essay, Florence Ashley explores the potential impact from several directions on trans people, who might benefit from being able to alter their bodies through genetics rather than frequent medical interventions such as hormones. And in a third, Krystal Tsosie considers the impact on indigenous peoples, warning against allowing corporate ownership of DNA.

Other essays consider the implications for conditions such as cystic fibrosis (Sandra Sufian) and deafness (Carol Padden and Jacqueline Humphries), and for international human rights. One use Baer omits, despite its status as intermittent media fodder since techniques for gene editing were first developed, is performance enhancement in sports. There is so far no imaginable way to test athletes for it. And anyone who’s watched junior sports knows there are definitely parents crazy enough to adopt any technology that will improve their kids’ abilities. Baer was smart to skip this; it will be a long time before CRISPR is cheap enough and advanced enough to be accessible for that sort of thing.

In one essay, molecular biologist Ellen Jorgensen discusses a class she co-designed to teach CRISPR to anyone who cared to learn. At the time, the media were focused on its dangers, and she believed that teaching it would help alleviate public fear. Most uses, she writes, are benign. If previous scientific advances are any guide, everything will depend on who wields it and for what purpose.

Hallucinations

It makes obvious sense that the people most personally affected by a crime should have the right to present their views in court. Last week in Arizona, Stacey Wales, the sister of Chris Pelkey, who was killed in a road rage shooting in 2021, delegated her victim impact statement, which offered forgiveness, to an artificially generated video likeness of Pelkey. According to Cy Neff at the Guardian, the judge praised this use of AI and said he felt the forgiveness was “genuine”. It is unknown whether it affected the sentence he handed down.

It feels instinctively wrong to use a synthesized likeness this way to represent living relatives, who could have written any script they chose – even, had they so desired, one presenting this reportedly peaceful religious man’s views as a fierce desire for vengeance. *Of course* seeing it acted out by a movie-like AI simulation of the deceased victim packs emotional punch. But that doesn’t make it *true* or, as Wales calls it at the YouTube video link above, “his own impact statement”. It remains the thoughts of his family and friends, culled from their possibly imperfect memories of things Pelkey said during his lifetime, and if it’s going to be presented in a court, it ought to be presented by the people who wrote the script.

This is especially true because humans are so susceptible to forming relationships with *anything*, whether it’s a volleyball that reminds you of home, as in the 2000 movie Cast Away, or a chatbot that appears to answer your questions, as in 1966’s ELIZA or today’s ChatGPT.

There is a lot of that about. Recently, Miles Klee reported at Rolling Stone that numerous individuals are losing loved ones to “spiritual fantasies” engendered by intensive and deepening interaction with chatbots. This is reminiscent of Ouija boards, which seem to respond to people’s questions but in reality react to small muscle movements in the operators’ hands.

Ouija boards “lie” because their operators unconsciously guide them to spell out words via the ideomotor effect. Those small, unnoticed muscle movements are also, more impressively, responsible for table tilting. The operators add to the illusion by interpreting the meaning of whatever the Ouija board spells out.

Chatbots “hallucinate” because the underlying large language models, based on math and statistics, predict the most likely next words and phrases with no understanding of meaning. But a conundrum is developing: as the large language models underlying chatbots improve, the bots are becoming *more*, not less, prone to deliver untruths.
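For the curious, here is a deliberately tiny sketch – mine, purely illustrative, a toy bigram model rather than anything resembling a production LLM – of the underlying trick: count which word tends to follow which, then emit the statistically likeliest continuation, with no notion of truth anywhere in the loop.

from collections import Counter, defaultdict

# "Training data": a morsel of text standing in for the whole web.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=5):
    """Greedily emit the most frequent continuation at each step."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # likeliest next word
    return " ".join(words)

print(generate("the"))  # fluent-looking output; zero understanding

Scale the same idea up by billions of parameters and you get fluency so good it is easy to mistake for comprehension – which is exactly the trap.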

At The Register, Thomas Claburn reports that researchers at Carnegie Mellon, the University of Michigan, and the Allen Institute for AI find that AI models will “lie” in order to meet the goals set for them. In the example in their paper, a chatbot instructed to sell a new painkiller that the company knows is more addictive than its predecessor will deny its addictiveness in the interests of making the sale. This is where who owns the technology and sets its parameters becomes crucial.

This result shouldn’t be too surprising. In her 2019 book, You Look Like a Thing and I Love You, Janelle Shane highlighted AIs’ tendency to come up with “short-cuts” that defy human expectations and limitations to achieve the goals set for them. No one has yet reported that a chatbot has been intentionally programmed to lead its users from simple scheduling to a belief that they are talking to a god – or are one themselves, as Klee reports. This seems more like operator error, as unconscious as the ideomotor effect.

OpenAI reported at the end of April that it was rolling back GPT-4o to an earlier version because the chatbot had become too “sycophantic”. The chatbot’s tendency to flatter its users apparently derived from the company’s attempt to make it “feel more intuitive”.

It’s less clear why Elon Musk’s Grok has been shoehorning rants alleging white genocide in South Africa into every answer it gives to every question, no matter how unrelated, as Kyle Orland reports at Ars Technica.

Meanwhile, at the New York Times Cade Metz and Karen Weise find that AI hallucinations are getting worse as the bots become more powerful. They give examples, but we all have our own: irrelevant search results, flat-out wrong information, made-up legal citations. Metz and Weise say “it’s not entirely clear why”, but note that the reasoning systems that DeepSeek so explosively introduced in February are more prone to errors, and that those errors compound the more time they spend stepping through a problem. That seems logical, just as a tiny error in an early step can completely derail a mathematical proof.
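The arithmetic is easy to sketch. Assume – purely for illustration, since real reasoning steps are not independent – that each step is correct with probability p; then an n-step chain is correct with probability p**n:

# Hypothetical numbers, not measurements: if each reasoning step is
# independently correct with probability p, an n-step chain of reasoning
# is right only with probability p**n.
for p in (0.99, 0.95, 0.90):
    for n in (5, 20, 50):
        print(f"p={p:.2f}, n={n:2d}: chain accuracy ~ {p ** n:.2f}")

Even a 95%-reliable step repeated 20 times yields a chain that is right barely a third of the time. The independence assumption is crude, but the direction of the effect is the point.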

This all being the case, it would be nice if people would pause to rethink how they use this technology. At Lawfare, Cullen O’Keefe and Ketan Ramakrishnan are already warning about the next stage, agentic AI, which is being touted as a way to automate law enforcement. Lacking fear of punishment, AIs don’t have the motivations humans do to follow the law (nor can a mistargeted individual reason with them). Therefore, they must be instructed to follow the law, with all the problems of translating human legal code into binary code that implies.

I so miss the days when you could chat online with a machine and know that, underneath it all, there was really just a human playing pranks.

Illustrations: “Mystic Tray” Ouija board (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The Skype of it all

This week, Microsoft shuttered Skype. For a lot of people, it’s a sad but nostalgic moment. Sad, because for older Internet users it brings back memories of the connections it facilitated; nostalgic because hardly anyone seemed to be using it any more. As Chris Stokel-Walker wrote at Wired in 2021, somehow when covid arrived and poured accelerant on remote communications, everyone turned to Zoom instead. Stokel-Walker blamed the Microsoft team for lacking focus on the bit that mattered most: keeping the video link up and stable. Zoom had better video, true, but also far better usability in terms of getting people to calls.

Skype’s service – technically VoIP, for Voice over Internet Protocol – was pioneering in its time, and arguably peaked around 2010. Like CompuServe before it and Twitter since, there was a period when everyone had their Skype ID on their business cards. In 2005, when eBay bought it for $2.6 billion, it was being widely copied. In 2009, when eBay sold it to an investor group, it was valued at $2.75 billion.

In 2011, Microsoft bought it for $8.5 billion in cash, to general puzzlement as to *why* and why for *so much*. I had thought eBay would somehow embed it into its transaction infrastructure as it had PayPal, which it bought in 2002 for $1.5 billion (and spun off as a public company in 2014). Similarly, Wired talked of Microsoft embedding it into its Xbox Live network. Instead, the company fiddled with the app through the general shift from desktop to mobile. Ironic, given that Skype was a *phone* app; that it struggled, as Facebook did, to make the change is kind of embarrassing.

Forgotten in all this is the fact that although Skype was the first VoIP application to gain mainstream acceptance, it was not the first to connect phone calls over the Internet. That was the long-forgotten Free World Dial-Up project, pioneered by Jeff Pulver. On the ground, I imagined Free World Dial-Up as looking something like the switchboard and radio phone Radar O’Reilly (Gary Burghoff) used in the TV series M*A*S*H (1972-1983) to patch through phone calls transmitted via radio networks. As Pulver described it, calls were sent across the Internet between servers, each connected to a box that patched the calls into the local phone system.

Rereading my notes from my 1995 interview with Pulver, when he was just getting his service up and running, it’s astonishing to remember how many hurdles there were for his prototype VoIP project to overcome – and this was all being done by volunteers. In many countries outside North America, charges for local phone calls made it financially risky to run a server. Some countries had prohibitive licensing regulations that made it illegal to offer such a service if you weren’t a telephone company. The hardware and software were readily available but had to be bought and required tinkering to set up. Plus, few outside the business world had continuous high-speed connections; most of us were using modems to dial up a service provider.

Small surprise that those early calls were not great. A Chicago recipient of a test call said she’d had better connections over the traditional phone network to Harare. Network lag made it more like a store-and-forward audio clipping service than a phone call. This didn’t matter as much to people with a history in ham radio, like Pulver himself; they were used to the cognitive effort to understand despite static and dropouts.

On the other hand, international calling was so wildly expensive at the time that, despite all this, FWD opened up calling for half a million people.

FWD was the experiment that proved the demand and the potential. Soon, numerous companies were setting up to offer VoIP services via desktop applications of varying quality and usability. It was into this hodge-podge that Skype was launched in 2003 from Estonia. For a time, it kept getting better: it began with free calling between Skype users and paid calls to phone lines, and moved on to offering local phone numbers around the world, as Google Voice does now.

Around the early 2000s it was popular to predict that VoIP services would kill off telephone companies. This was a moment when network neutrality, now under threat, was crucial; had telcos been allowed to discriminate against VoIP traffic, we’d all still be paying through the nose for international calling and probably wouldn’t have had video calling during the covid lockdowns.

Instead, the telcos themselves have become VoIP companies. In 2007, BT was the first to announce it was converting its entire network to IP. That process is supposed to complete this year. My landline is already a VoIP line. (Downside: no electricity, no telecommunications.)

Pulver, I find, is still pushing away at the boundaries of telecommunications. His website these days is full of virtualized conversations (vCons) and Supply Chain Integrity, Transparency, and Trust (SCITT), which he explains here (PDF). The first is a proposed IETF standard for AI-enhanced digital records of conversations. The second is a proposed IETF framework that intends to define “a set of interoperable building blocks that will allow implementers to build integrity and accountability into software supply chain systems to help assure trustworthy operation”. This is the sort of thing that may make a big difference to companies while being invisible and/or frustrating to most of us.

As for Skype, it will fade from human memory. If it ever comes up, we’ll struggle to explain what it was to a generation who have no idea that calling across the world was ever difficult and expensive.

Illustrations: Radar O’Reilly (Gary Burghoff) in the TV series M*A*S*H with his radio telephone setup.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Lawfaring

Fining companies that have spare billions down the backs of their couches is pointless, but what about threatening their executives with prosecution? In a scathing ruling (PDF), US District Judge Yvonne Gonzalez Rogers finds that Apple’s vice-president of finance, Alex Roman, “lied outright under oath” and that CEO Tim Cook “chose poorly” in failing to follow her injunction in Epic Games v. Apple. She asks the US Attorney for the Northern District of California to investigate whether criminal contempt proceedings are appropriate. “This is an injunction, not a negotiation.”

As noted here last week, last year Google lost the similar Epic Games v. Google. In both cases, Epic Games complained that the punishing commissions both companies require of the makers of apps downloaded from their app stores were anti-competitive. This is the same issue that last week led the European Commission to announce fines and restrictions against Apple under the Digital Markets Act. These rulings could, as Matt Stoller suggests, change the entire app economy.

Apple has said it strongly disagrees with the decision and will appeal – but it is complying.

At TechRadar, Lance Ulanoff sounds concerned about the impact on privacy and security as Apple is forced to open up its app store. This argument reminds me of the Bell Telephone engineer who confiscated a 30-foot cord from Woolworth’s that I’d plugged in, saying it endangered the telephone network. Apple certainly has the right to market its app store with promises of better service. But it doesn’t have the right to defy the court to extend its monopoly, as Mike Masnick spells out at Techdirt.

Masnick notes the absurdity of the whole thing. Apple had mostly won the case, and could have made the few small changes the ruling ordered and gone about its business. Instead, its executives lied and obfuscated for a few years of profits, and here we are. Although: Apple would still have lost in Europe.

A Perplexity search for the last S&P 500 CEO to be jailed for criminal contempt finds Kevin Trudeau. Trudeau used late-night infomercials and books to sell what Wikipedia calls “unsubstantiated health, diet, and financial advice”. He was sentenced to ten years in prison in 2013, and served eight. Trudeau and the Federal Trade Commission formally settled the fines and remaining restrictions in 2024.

The last time the CEO of a major US company was sent to prison for criminal contempt? It appears, never. For the rare CEOs who have gone to prison, it’s typically been for financial fraud or insider trading. Think WorldCom’s Bernie Ebbers. Not sure this is the kind of innovation Apple wants to be known for.

***

Reuters reports that 23andMe has, after pressure from many US states, agreed to allow a court-appointed consumer protection ombudsman to ensure that customers’ genetic data is protected. In March, it filed for bankruptcy protection, fulfilling last September’s predictions that it would soon run out of money.

The issue is that the DNA 23andMe has collected from its 15 million customers is its only real asset. Also relevant: the October 2023 cyberattack, which, Cambridge Analytica-like, leveraged hacking into 14,000 accounts to access ancestry data relating to approximately 6.9 million customers. The breach sparked a class action suit accusing the company of inadequate security under the Health Insurance Portability and Accountability Act (1996). It was settled last year for $30 million – a settlement whose value is now uncertain.

Case after case has shown us that no matter what promises buyers and sellers make at the time of a sale, they generally don’t stick afterwards. In this case, every user’s account of necessity exposes information about all their relatives. And who knows where it will end up and for how long the new owner can be blocked from exploiting it?

***

There’s no particular relationship between the 23andMe bankruptcy and the US government. But they make each other scarier: at 404 Media, Joseph Cox reported two weeks ago that Palantir is merging data from a wide variety of US departments and agencies to create a “master database” to help US Immigration and Customs Enforcement target and locate prospective deportees. The sources include the Internal Revenue Service, Health and Human Services, the Department of Labor, and Housing and Urban Development; the “ATrac” tool being built already has data from the Social Security Administration and US Citizenship and Immigration Services, as well as law enforcement agencies such as the FBI, the Bureau of Alcohol, Tobacco, Firearms and Explosives, and the U.S. Marshals Service.

As the software engineer and essayist Ellen Ullman wrote in her 1997 book Close to the Machine, databases “infect” their owners with the desire to link them together and find out things they never previously felt they needed to know. The information in these government databases was largely given out of necessity, to obtain services we all pay for. In countries with data protection laws, the change of use Cox outlines would require new consent. The US has no such privacy laws, and even if it did it’s not clear this government would care.

“Never volunteer information” used to be a commonly heard mantra, typically in relation to law enforcement and immigration authorities. No one lives that way now.

Illustrations: DNA strands (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.