Anachronistics

“In my mind, computers and the Internet arrived at the same time,” my twenty-something companion said, delivering an entire mindset education in one sentence.

Just a minute or two earlier, she had asked in some surprise, “Did bulletin board systems predate the Internet?” Well, yes: BBSs were a software package running on a single back room computer with a modem users dialed into, whereas the Internet is this giant sprawling mess of millions of computers connected together…simple first, complex later.

Her confusion is understandable: from her perspective, computers and the Internet did arrive at the same time, since her first conscious encounters with them were simultaneous.

But still, speaking as someone who first programmed a computer (a mainframe, with punch cards) in 1972 as a student, who got her first personal computer in 1982, who got online by modem in 1991 and by broadband in 1999, and to whom the sequence of events is therefore memorable: wow.

A 25-year-old today was born in 1999 (the year I got broadband). Her counterpart 15 years hence (born in 2014, the year a smartphone replaced my personal digital assistant) may think smartphones and the Internet were simultaneous. And sometime around 2045 *her* counterpart, born in 2020 (two years before ChatGPT was released), might think generative text and image systems were contemporaneous with the first computers.

I think this confusion must have something to do with the speed of change in a relatively narrow sector. I’m sure that even though they all entered my life simultaneously, by the time I was 25 I knew that radio preceded TV (because my parents grew up with radio), bicycles preceded cars, and that handwritten manuscripts predated printed books (because medieval manuscripts). But those transitions played out over multiple lifetimes, if not centuries, and all those memories were personal. Few of us reminisce about the mainframes of the 1960s because most of us didn’t have access to them.

And yet, misunderstanding the timelines of those earlier technologies probably mattered less than misunderstanding the sequence of events in information technology does now. Jumbling the arrival dates of the pieces of information technology means failing to understand dependencies. What currently passes for “AI” could not exist without the ability to train models on the giant piles of data that the Internet and the web made possible, and that took 20 years to build. Neural networks pioneer Geoff Hinton was developing the key ideas behind today’s deep learning systems as long ago as the 1980s, but it took until the last decade for them to become workable, because it took that long to build sufficiently powerful computers and to amass enough training data. How do you understand the ongoing battle between those who wish to protect privacy via data protection laws and those who want data to flow freely without hindrance if you do not understand what those masses of data are important for?

This isn’t the only such issue. A surprising number of people who should know better seem to believe that the solution to all our ills with social media is to destroy Section 230, apparently reasoning that if S230 allowed Big Tech to get big, it must be wrong. The reality is that S230 also allows small sites to exist, and it provides the legal framework that makes content moderation possible. Improve it by all means, but understand its true purpose first.

Reviewing movies and futurist projections such as Vannevar Bush’s 1945 essay As We May Think (PDF) and Alan Turing’s 1950 paper, Computing Machinery and Intelligence (PDF), doesn’t really help, because so many ideas arrive long before they’re feasible. The crew in the original 1966 Star Trek series (to say nothing of secret agent Maxwell Smart in 1965) were talking over wireless personal communicators. A decade earlier, Arthur C. Clarke (in The Nine Billion Names of God) and Isaac Asimov (in The Last Question) were putting computers – albeit analog ones – in their stories. Asimov in particular imagined a sequence that now looks prescient, beginning with something like a mainframe, moving on to microcomputers, and finishing up with a vast, fully interconnected network that can only be held in hyperspace. (OK, it took trillions of years, starting in 2061, but still…) Those writings undoubtedly inspired the technologists of the last 50 years when they decided what to invent.

This all led us to fakes: as the technology to create fake videos, images, and texts continues to improve, she wondered if we will ever be able to keep up. Just about every journalism site is asking some version of that question; they’re all awash in stories about new levels of fakery. My 25-year-old discussant believes the fakes will always be improving faster than our methods of detection – an arms race like computer security, to which I’ve compared problems of misinformation / disinformation before.

I’m more optimistic. I bet that even a few years from now, today’s versions of generative “AI” will look as primitive to us as the special effects in a 1963 episode of Doctor Who or the magic lantern used to create the Knock apparitions do to generations raised on movies, TV, and computer-generated imagery. Humans are adaptable; we will find ways to identify what is authentic that aren’t obvious in the shock of the new. We might even go back to arguing in pubs.

Illustrations: Secret agent Maxwell Smart (Don Adams) talking on his shoe phone (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

The bridge

Seven months ago, Mastodon was fretting about Meta’s newly-launched Threads. The issue: Threads, which was built on top of Instagram’s user database, had said it would comply with the ActivityPub protocol, which allows Mastodon servers (“instances”) to federate with any other service that also uses that protocol. The potential that Threads would become interoperable, and that millions of Threads users would swamp Mastodon, ignoring its existing social norms and culture, created an existential dilemma: to federate or not to federate?

Today, Threads’ integration is still just a plan.

Instead, it seems the first disruptive arrival looks set to be Bluesky, created by a team backed by Twitter co-founder Jack Dorsey, with the connection facilitated by a third party. Bluesky wrote a new open source protocol, AT, so the proposal isn’t federation with Mastodon but a bridge, as Amanda Silberling reports at TechCrunch. According to Silberling’s numbers, year-old Bluesky stands at 4.8 million users to Mastodon’s 8.7 million. Anyone familiar with the history of AOL’s gateway to Usenet will tell you that’s big enough to disrupt existing social norms. The AOL exercise was known as Eternal September (because every September Usenet had to ingest a new generation of incoming university freshmen).

There are two key differences, however. First, a third of those Bluesky users are new to the system, having joined only last week, when the service opened fully to the public. They will bring challenges to the culture Bluesky has so far developed. Second, AOL’s gateway was unidirectional: AOLers could read and post to Usenet newsgroups, but Usenet posters could not read anything on AOL without paying for access. The Bluesky-Mastodon bridge is planned to be bidirectional, so anything posted publicly on one service would be accessible to both – or to outsiders using BridgyFed to connect via website feeds.

I haven’t spent a lot of time on Bluesky, but it’s clear it and Mastodon have different cultures. Friends who spend more time there say Bluesky has a “weirdness” they like and is less “scoldy” than Mastodon, where long-time users tended to school incoming ex-Twitter users on their mistakes in 2022. That makes sense when you consider that Mastodon has had time since its 2016 founding to develop an existing culture that newcomers are joining, whereas Bluesky was a closed beta until last week, and its users to date have been the ones defining its culture for the future. The newcomers of the past week may have a very different experience.

Even if they don’t, there’s a fundamental economic difference that no technology can bridge: Mastodon is a non-profit cooperative endeavor, while Bluesky has venture capital funding, although the list of investors is not the usual suspects. Social media users have often been burned by corporate business decisions. It’s therefore easy to believe that the $8 million in seed funding will lead inevitably to user data exploitation, no matter what they say now about being determined to find a different and more sustainable business model based on selling ancillary services. Even if that strategy works, later owners or the dictates of shareholders may demand higher profits via a pivot to advertising, just as the Netflix and Amazon Prime streaming services are doing now.

Designing any software involves making rules for how it will operate and setting defaults. Here’s where the project hit trouble: should the bridge be opt-out, so that users who don’t want their posts to be visible outside their home system have to specifically turn it off, or opt-in, so that users who want their posts published far and wide have to turn it on? BridgyFed’s creator, Ryan Barrett, chose opt-out. It was immediately divisive: privacy versus openness.

Silberling reports that Barrett has fashioned a solution, giving users warning pop-ups and a chance to decline if someone from another service tries to follow them, and is thinking more carefully about the risks to safety his bridge might bring.

That’s great, but the next guy may not be so willing to reconsider. As we’ve observed before, there is no way to restrict the use of open protocols without closing them and putting them under centralized control – which is the opposite of the federated, decentralized systems Mastodon and Bluesky were created to build.

In a federated system, anything one person can open another can close. Individual admins will decide for their users how their instances will operate. Those who don’t like their admin’s choice will be told they can port their accounts to an instance whose policies they prefer. That’s true, but unsatisfying as an answer. As the “Fediverse” grows, it must accommodate millions of mainstream users for whom moving servers is too complicated.

The key point, however, is that the illusion of control Mastodon seemed to offer is being punctured. Usenet users could have warned them: from its creation in 1979, users believed their postings were readable for a few weeks before expiring and being expunged. Then, in 1995, Steve Madere created the Deja News archive from scattered collections. Overnight, those “ephemeral” postings became permanent and searchable – and even more so, after 2001, when Google bought the archive (see groups.google.com).

The upshot: privacy in public networks is only ever illusory. Assume you have no control over anything you post, no matter how cozy and personal the network seems. As we’ve said before, the privacy-in-public afforded by the physical world has no online counterpart.

Illustrations: A mastodon by Heinrich Harder (public domain, via Wikimedia).

To tell the truth

It was toward the end of Craig Wright’s cross-examination on Wednesday when, for the first time in many days, he was lost for words. Wright is in court because the non-profit Crypto Open Patent Alliance seeks a ruling that he is not, as he claims, bitcoin inventor Satoshi Nakamoto, who was last unambiguously heard from in 2011.

Over the preceding days, Wright had repeatedly insisted “I am the real Satoshi” and disputed the forensic analysis – anachronistic fonts, metadata, time stamps – that pronounced his proffered proofs forgeries. He was consistently truculent, verbose, and dismissive of everyone’s expertise but his own and of everyone’s degrees except the ones he holds. For example: “Meiklejohn has not studied cryptography in any depth,” he said of Sarah Meiklejohn, the now-professor who as a student in 2013 showed that bitcoin transactions are traceable. In a favorite moment, Jonathan Hough, KC, who did most of the cross-examination, interrupted a diatribe about the failings of the press with, “Moving on from your expertise on journalism, Dr Wright…”

Participants in a drinking game based on his saying “That is not correct” would be dead of alcohol poisoning. In between, he insisted several times that he never wanted to be outed as Satoshi, and wishes that everyone would “leave me alone and let me invent”. Any money he is awarded in court he will give to charities; he wants nothing for himself.

But at the moment we began with, he was visibly stumped. The question, regarding a variable on a Github page: “Do you know what unsigned means?”

Wright: “Basically, an unsigned variable…it’s not an integer with…it’s larger. I’m not sure how to say it.”

Lawyer: “Try.”

Wright: “How I’d describe it, I’m not quite sure. I’m not good with trying to do things like this.” He could explain it easily in writing… (Transcription by Norbert on exTwitter.)

The lawyer explained it thusly: an unsigned variable cannot be a negative number.

“I understand that, but would I have thought of saying it in such a simple way? No.”

Experience as a journalist teaches you that the better you understand something, the more simply and easily you can explain it. Wright’s inability to answer blew the inadequately bolted door plug out of his world-expert persona. Everything until then could be contested: the stomped hard drive, the emails he wrote, or didn’t write, or wrote only one sentence of, the allegations that he had doctored old documents to make it look like he had been thinking about bitcoin before the publication of Satoshi’s foundational 2008 paper. But there’s no disguising lack of basic knowledge. “Should have been easy,” says a security professor (tenured, chaired) friend.

Normally, cryptography removes ambiguity. This is especially true of public key cryptography and its complementary pair of public and private keys. Being able to decrypt something with a well-attested public key is clear proof that it was encrypted with the complementary private key. Contrariwise, if a specific private key decrypts it, you know that key’s owner is the intended recipient. In both cases, as a bonus, you get proof that the text has not been altered since its encryption. It *ought* to be simple for Wright to support his claim by using Satoshi’s private keys. If he can’t do that, he must present a reason and rely on weaker alternatives.

Courts of law, on the other hand, operate on the balance of probabilities. They don’t remove ambiguity; they study it. Wright’s case is therefore a cultural clash, with far-reaching consequences. COPA is complaining that Wright’s repeated intellectual property lawsuits against developers working on bitcoin projects are expensive in both money and time. Soon after the unsigned variable exchange, the lawyer asked Wright what he will do if the court rules against him. “Move on to patents,” Wright said. He claims thousands of patents relating to bitcoin and the blockchain, and a brief glance at Google Patents shows many filings, some granted.

However this case comes out, therefore, it seems likely Wright will continue to try to control bitcoin. Wright insists that bitcoin isn’t meant to be “digital gold”, but that its true purpose is to facilitate micropayments. I haven’t “studied bitcoin in any depth” (as he might say), but as far as I can tell it’s far too slow, too resource-intensive, and too volatile to be used that way. COPA argues, I think correctly, that it’s the opposite of the world enshrined in Satoshi’s original paper; its whole point was to use cryptography to create the blockchain as a publicly attested, open, shared database that could eliminate central authorities such as banks.

In the Agatha Christie version of this tale, most likely Wright would be an imposter, an early hanger-on who took advantage of the gap formed by Satoshi’s disappearance and the deaths of other significant candidates. Dorothy Sayers would have Lord Peter Wimsey display unexpected mathematical brilliance to improve on Satoshi’s work, find him, and persuade him to turn over his keys and documents to king and country. Sir Arthur Conan Doyle would have both Moriarty and Sherlock Holmes on the trail. Holmes would get there first and send him into protection to ensure Moriarty couldn’t take criminal advantage. And then the whole thing would be hushed up in the public interest.

The case continues.

Illustrations: The cryptographic code from “The Dancing Men”, by Sir Arthur Conan Doyle (via Wikimedia).

Review: Virtual You

Virtual You: How Building Your Digital Twin Will Revolutionize Medicine and Change Your Life
By Peter Coveney and Roger Highfield
Princeton University Press
ISBN: 978-0-691-22327-8

Probably the quickest way to appreciate how much medicine has changed in a lifetime is to pull out a few episodes of TV medical series over the years: the bloodless 1960s Dr Kildare; the 1980s St Elsewhere, which featured a high-risk early experiment in now-routine cardiac surgery; the growing panoply of machines and equipment of E.R. (1994-2009). But there are always more improvements to be made, and around 2000, when the human genome was being sequenced, we heard a lot about the promise of personalized medicine it was supposed to bring. Then we learned over time that, as so often with scientific advances, knowing more merely served to show us how much more we *didn’t* know – in the genome’s case, about epigenetics, proteomics, and the microbiome. With some exceptions such as cancers that can be tested for vulnerability to particular drugs, the dream of personalized medicine so far mostly remains just that.

Growing alongside all that have been computer models, most famously used for meteorology and climate change predictions. As Peter Coveney and Roger Highfield explain in Virtual You, models are expected to play a huge role in medicine, too. The best-known use is in drug development, where modeling can help suggest new candidates. But the use that interests Coveney and Highfield is on the personal level: a digital twin for each of us that can be used to determine the right course of treatment by spotting failures in advance, or help us make better lifestyle choices tailored to our particular genetic makeup.

This is not your typical book of technology hype. Instead, it’s a careful, methodical explanation of the mathematical and scientific basis for how this technology will work and its state of development from math and physics to biology. As they make clear, developing the technology to create these digital twins is a huge undertaking. Each of us is a massively complex ecosystem generating masses of data and governed by masses of variables. Modeling our analog selves requires greater complexity than may even be possible with classical digital computers. Coveney and Highfield explain all this meticulously.

It’s not as clear to me as it is to them that virtual twins are the future of mainstream “retail” medicine, especially if, as they suggest, they will be continually updated as our bodies produce new data. Some aspects will be too cost-effective to ignore; ensuring that the most expensive treatments are directed only to those who can benefit will be a money saver for any health service. But the vast amount of computational power and resources likely required to build and maintain a virtual twin for each individual seems prohibitive for all but billionaires. As in engineering, where virtual twins are used for prototyping, or meteorology, where simulations have led to better and more detailed forecasts, the primary uses seem likely to be at the “wholesale” level. That still leaves room for plenty of revolution.

Nefarious

Torrentfreak is reporting that OCLC, owner of the WorldCat database of bibliographic records, is suing the “shadow library” search engine Anna’s Archive. The claim: that Anna’s Archive hacked into WorldCat, copied 2.2TB of records, and posted them publicly.

Shadow libraries are the text version of “pirate” sites. The best-known is probably Sci-Hub, which provides free access to hundreds of thousands of articles from (typically expensive) scientific journals. Others such as Library Genesis and sites on the dark web offer ebooks. Anna’s Archive indexes as many of these collections as it can find; it was set up in November 2022, shortly after the web domains belonging to the then-largest of these book libraries, Z-Library, were seized by the US Department of Justice. Z-Library has since been rebuilt on the dark web, though it remains under attack by publishers and law enforcement.

Anna’s Archive also includes some links to the unquestionably legal and long-running Gutenberg Project, which publishes titles in the public domain in a wide variety of formats.

The OCLC-Anna’s Archive case has a number of familiar elements that are variants of long-running themes, open versus gatekept being the most prominent. Like many such sites (post-Napster), Anna’s Archive does not host files itself. That’s no protection from the law; authorities in various countries have nonetheless blocked or seized the domains belonging to such sites. But OCLC is not a publisher or rights holder, although it takes large swipes at Anna’s Archive for lawlessness and copyright infringement. Instead, it says, first, that Anna’s Archive hacked WorldCat, violating its terms and conditions, disrupting its business arrangements, and costing it $1.4 million and 10,000 employee hours in system remediation. Second, it complains that Anna’s Archive has posted the data in the aggregate for public download, and is “actively encouraging nefarious use of the data”. Other than the use of “nefarious”, there seems little to dispute about either claim; Anna’s Archive published the details in an October 2023 blog posting.

Anna’s Archive describes this process as “preserving” the world’s books for public access. OCLC describes it as “tortious interference” with its business. It wants the court to issue injunctive relief to make the scraping and use of the data stop, compensatory damages in excess of $75,000, punitive damages, costs, and whatever else the court sees fit. The sole named defendant is a US citizen, María A. Matienzo, thought to be resident near Seattle. If the identification and location are correct, that’s a high-risk situation to be in.

In the blog posting, Anna’s Archive writes that its initial goal was to answer the question of what percentage of the world’s published books are held in shadow libraries, and to create a to-do list of gaps to fill. To answer these questions, they began by scraping ISBNdb, the database of publications with ISBNs, which only came into use in 1970. When the overlap with the Internet Archive’s Open Library and the seized Z-Library was less than they hoped, they turned to WorldCat. At that point, they openly say, security flaws in the fortuitously redesigned WorldCat website allowed them to grab more or less the comprehensive set of records. While scraping can be legal, exploiting security flaws to gain unauthorized access to a computer system likely violates the widely criticized Computer Fraud and Abuse Act (1986), which could be a felony. OCLC has, however, brought a civil case.

Anna’s Archive also searches the Internet Archive’s Open Library, founded in 2006. In 2009, co-creator Aaron Swartz told me that he believed the creation of Open Library pushed OCLC into opening up greater public access to the basic tier of its bibliographic data. The Open Library currently has its own legal troubles; it lost in court in August 2023 after Hachette sued it for copyright infringement. The Internet Archive is appealing; in the meantime it is required to remove, on request of any member of the Association of American Publishers, any book commercially available in electronic format.

OCLC began life as the Ohio College Library Center; its WorldCat database is a collaboration between it and its member libraries to create a shared database of bibliographic records and enable online cataloguing. The last time I wrote about it, in 2009, critics were complaining that libraries in general were failing to bring book data onto the open web. It has gotten a lot better in the years since, and many local libraries are now searchable online and enable their card holders to borrow from their holdings of ebooks over the web.

The fact that it’s now often possible to borrow ebooks from libraries should mean there’s less reason to use unauthorized sites. Nonetheless, these still appeal: they have the largest catalogues, the most convenient access, DRM-free files, and no time limits, so you can read them at your leisure using the full-featured reader you prefer.

In my 2009 piece, an OCLC spokesperson fretted about “over-exploitation” and about there being no good way to maintain or update countless unknown scattered pockets of data – a seemingly solvable problem.

OCLC and its member libraries are all non-profit organizations ultimately funded by taxpayers. The data they collect has one overriding purpose: to facilitate public access to libraries’ holdings by showing who holds what books in which editions. What are “nefarious” uses? Arguably, the data they collect should be public by right. But that’s not the question the courts will decide.

Illustrations: The New York Public Library, built 1911 (via Wikimedia).

Irreparable harm, part II

Time to revisit the doping case of Kamila Valieva, the 15-year-old Russian figure skater who was allowed to compete in the February 2022 Winter Olympics despite testing positive for the banned substance trimetazidine, on the grounds that barring her from competition would cause her “irreparable harm”. The harm was never defined, but presumably the reasoning went something like this: careers are short, Valieva was a multiple champion and the top prospect for gold, and she could be disqualified later but couldn’t retroactively compete. An adult would have been disqualified there and then, but 15-year-olds became “protected persons” under the World Anti-Doping Agency’s 2021 code.

Two years on, the Court of Arbitration for Sport has confirmed she is banned for four years, backdated to December 2021, and will be stripped of the results, prizes, medals, and awards she has won in the interim. CAS, the ultimate authority in such cases, did not buy her defense that her grandfather, who is prescribed trimetazidine, inadvertently contaminated her food. She will be eligible to compete again in December 2026.

In a paper, Marcus Campos, Jim Parry, and Irena Martínková conclude that WADA’s concept of “protected person” “transforms…potential victims into suspects”. As they write, the “protection” is more imagined than real, since minors are subjected to the same tests and the same sanctions as adults. While the code talks of punishing those involved in doping minors, to date the only person suffering consequences for Valieva’s positive test is Valieva herself, still a minor but no longer a “protected person”.

A reminder: besides her positive test for trimetazidine, widely used in Russia to prevent angina attacks, Valieva had therapeutic use exemptions for two other heart medications. How “protected” is that? Shouldn’t the people authorizing TUEs raise the alarm when a minor is being prescribed multiple drugs for a condition vastly more typical of middle-aged men?

According to the Anti-doping Database, doping is not particularly common in figure skating – but Russia has the most cases. WADA added trimetazidine to the banned list in 2014 as a metabolic modulator; if it helps athletes it’s by improving cardiovascular efficiency and therefore endurance. CNN compares it to meldonium, the drug that got tennis player Maria Sharapova banned in 2016.

In a statement, the World Anti-Doping Agency said it welcomed the ruling but that “The doping of children is unforgivable. Doctors, coaches or other support personnel who are found to have provided performance-enhancing substances to minors should face the full force of the World Anti-Doping Code. Indeed, WADA encourages governments to consider passing legislation – as some have done already – making the doping of minors a criminal offence.” That seems essential for real protection; otherwise the lowered sanctions imposed upon minors could be an incentive to take more risks doping them.

The difficulty is that underage athletes are simultaneously children and professional athletes competing as peers with adults. For the rules of the sport itself, of course the rules must be the same; 16-year-old Mirra Andreeva doesn’t get an extra serve or a larger tennis court to hit into. Hence 2014 bronze medalist Ashley Wagner’s response to an exTwitter poster calling the ruling irrational and cruel: “every athlete plays by the same rules”. But anti-doping protocols are different, involving issues of consent, medical privacy, and public shaming. For the rest of the field, it’s not fair to exempt minors from the doping rules that apply to everyone else; for the minor, who lacks agency and autonomy, it’s not fair if you don’t. This is only part of the complexity of designing an anti-doping system and applying it equally to minors, 40-something hundred-millionaire tennis players, and minimally funded athletes in minority sports who go back to their day jobs when the competition ends.

Along with its statement, WADA launched the Operation Refuge report (PDF) on doping and minors. The most commonly identified doping substance for both girls and boys is the diuretic furosemide, followed by methylphenidate (better known as the ADHD medication Ritalin). The largest numbers of positive tests come from Russia, India, and China. The youngest child sanctioned for a doping violation was 12. The report goes on to highlight the trauma and isolation experienced by child athletes who test positive – one day a sporting hero, the next a criminal.

The alphabet soup of organizations in charge of Valieva’s case – the Russian Anti-Doping Agency, the International Skating Union, WADA, CAS – could hardly have made a bigger mess. The delays alone: it took six weeks to notify Valieva of her positive test, and two years to decide her case. Then, despite the expectation that disqualifying Valieva would disqualify her entire team, the ISU recalculated the standings, giving Russia the bronze medal, the US the gold, and Japan the silver. The Canadian team, which placed fourth, is considering an appeal; Russia is preparing one. Ironically, according to this analysis by Martina Frammartino, the Russian bench is so strong that it could easily have won gold if Valieva’s positive test had come through in time to replace her.

I’ve never believed that the anti-doping system was fit for purpose; considered as a security system, too many incentives are misaligned, as became clear in 2016, when the extent of Russian state-sponsored doping became public. This case shows the system at its worst.

Illustrations: Kamila Valieva in 2018 (via Luu at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Trust busted

It’s hard not to be agog at the ongoing troubles of Boeing. The covid-19 pandemic was the first break in our faith in the ready availability of travel; this is the second. As one of only two global manufacturers of commercial airplanes, Boeing’s problems are the industry’s problems.

I’ve often heard cybersecurity researchers talk with envy of the aviation industry. Usually, it’s because access to data is a perennial problem; many companies don’t want to admit they’ve been hacked or talk about it when they do. By contrast, they’ve said, the aviation industry recognized early that convincing people flying was safe was crucial to everyone’s success and every crash hurt everyone’s prospects, not just those of the competitor whose plane went down. The result was industry-wide adoption of strategies designed to maximize collaboration across the industry to improve safety: data sharing, no-fault reporting, and so on. That hard-won public trust has, we now see, allowed modern industry players to coast on their past reputations. With this added irony: while airlines and governments everywhere have focused on deterring terrorists, the risks are coming from within the industry.

I’m the right age to have rarely worried about aviation safety – young enough to have missed the crashes of the early years, old enough that my first flights were taken in childhood. Isaac Asimov, born in 1920, who said he refused to fly because passengers didn’t have a “sporting” chance of survival in a crash, was actually wrong: the survival rate for airplane crashes is over 90%. Many people feel safer when they feel in control. Yet, as Bruce Schneier has frequently said, you’re at greater risk on the drive to the airport than you are on the plane.

In fact, it’s an extraordinary privilege that most of us worry more about delays, lost luggage, bad food, and cramped seating than whether our flight will land safely. The 2018 crash of a Boeing 737 MAX 8 did little to dislodge this general sense of safety, even though 189 people died, and the same was true following the 2019 crash of the same plane, which killed another 156 people. Boeing tried to sell the idea that it was inadequately trained pilots working for substandard (read: not American or European) airlines, but the reality quickly became plain: the company had skimped on testing and training and its famed safety-first engineering-led culture had disintegrated under pressure to reward shareholders and executives.

We were able to tell ourselves that it was one model plane, and that changes followed, as Bloomberg investigative reporter Peter Robison documents in Flying Blind: The 737 MAX Tragedy and the Fall of Boeing. In particular, in 2020 the US Congress undid the legal change that had let Boeing self-certify and restored the Federal Aviation Administration’s obligation of direct oversight, some executives were replaced, and a test pilot was criminally charged. However, Robison wrote for publication in 2021, many inside the industry, not just at Boeing, thought the FAA’s 20-month grounding of the MAX was “an overreaction”. You might think – as I did – that the airlines themselves would be strongly motivated not to fly planes that could put their safety record at risk, but Robison’s reporting is not comforting on that point: the MAX, he writes, is “a moneymaker” for the airlines in that it saves 15% on fuel costs per flight.

Still, the problem seemed to be confined to one model of plane. Until, on January 5, the door plug blew out of a 737 MAX 9. A day later, the FAA grounded all planes of that model for safety inspections.

On January 13, a crack was found in a cockpit window of a 737-800 in Japan. On January 19, a cargo 747-8 caught fire leaving Miami. On January 24, Alaska Airlines reported finding many loose bolts during its fleetwide inspection of 737 MAX 9s. The same day, the nose wheel fell off a 757 departing Atlanta. Near-simultaneously, the Seattle Times reported that Boeing itself installed the door plug that blew out, not its supplier, Spirit AeroSystems. The online booking agent and price comparison site Kayak announced that increasing use of its aircraft-specific filter had led it to add separate options to avoid 737 MAX 8s and 9s.

The consensus that formed about the source of the troubles that led to the 2018-2019 crashes is holding: blame focuses on the change in company culture brought by the 1997 merger with McDonnell Douglas, valuing profits and shareholder payouts over engineering. Boeing is in for a period of self-reinvention in which its output will be greatly slowed. As airlines’ current fleets age, this will have to mean reduced capacity; there are only two major aircraft manufacturers in the world, and the other one – Airbus – is fully booked.

As Cory Doctorow writes, that’s only one constraint going forward, at least in the US: there aren’t enough pilots, air traffic controllers, or engine manufacturers. Anti-monopolist Matt Stoller proposes to nationalize and then break up Boeing, arguing that its size and importance mean only the state can backstop its failures. Ten years ago, when the US’s four big legacy airlines consolidated to three, it was easy to think passengers would pay in fees and lost comfort; now we know safety was on the line, too.

Illustrations: The Wright Brothers’ first heavier-than-air flight, in 1903 (via Wikimedia).

Objects of copyright

Back at the beginning, the Internet was going to open up museums to those who can’t travel to them. Today…

At the Art Newspaper, Bendor Grosvenor reports that a November judgment from the UK Court of Appeal means museums can’t go on claiming copyright in photographs of public domain art works. Museums have used this claim to create costly licensing schemes. For art history books and dissertations that need the images for discussion, the costs are often prohibitive. And, it turns out, the “GLAM” (galleries, libraries, archives, and museums) sector isn’t even profiting from it.

Grosvenor cites figures: the National Gallery alone lost £31,000 on its licensing scheme in 2021-2022 (how? is left as an exercise for the reader). This figure was familiar: Douglas McCarthy, whom the article quotes, cited it at gikii 2023. As an ongoing project with Andrea Wallace, McCarthy co-runs the Open GLAM survey, which collects data to show the state of open access in this sector.

In his talk, McCarthy, an art historian by training and the head of the Library Learning Center at Delft University of Technology, showed that *most* such schemes are losing money. The National Gallery of Liverpool, for example, lost £71,873 on licensing activities between 2018 and 2023.

Like Grosvenor, McCarthy noted that the scholars whose work underpins that of museums and libraries are finding it increasingly difficult to afford that work. One of McCarthy’s examples was St Andrews art history professor Kathryn M. Rudy, who summed up her struggles in a 2019 piece for Times Higher Education: “The more I publish, the poorer I get.”

Rudy’s problem is that publishing in art history, as necessary for university hiring and promotions, requires the use of images of the works under discussion. In her own case, the 1,419 images she needed to use to publish six monographs and 15 articles have consumed most of her disposable income. To be fair, licensing fees are only part of this. She also lists travel to view offline collections, the costs of attending conferences, data storage, academic publishers’ production fees, and paying for the copies of books contracts require her to send the libraries supplying the images; some of this is covered by her university. But much of those extra costs come from licensing fees that add up to thousands of pounds for the material necessary for a single book: reproduction fees, charges for buying high-resolution copies for publication, and even, when institutions allow it at all, fees for photographing images in situ using her phone. Yet these institutions are publicly funded, and the works she is photographing have been bought with money provided by taxpayers.

On the face of it, THJ v. Sheridan, as explained in a legal summary by the law firm Penningtons Manches Cooper, doesn’t seem to have much to do with the GLAM sector. Instead, the central copyright claim concerned the defendant’s use of the software in webinars and presentations. However, the point, as the Kluwer Copyright blog explains, was deciding which test to apply to determine whether a copyrighted work is original.

In court, THJ, a UK-based software development firm, claimed that Daniel Sheridan, a US options trading mentor and former licensee, had misrepresented its software as his own, and that he had violated THJ’s copyright by continuing to use the software after his license agreement expired, including images of it in his presentations. The misrepresentation claim failed on the basis that the THJ logo and copyright notices were displayed throughout the presentation.

The second is the one that interests us here: THJ claimed copyright in the images of its software based on the 1988 Copyright, Designs, and Patents Act. The judge, however, ruled that while the CDPA applies to the software, images of the software’s graphical interface are not covered; to constitute infringement Sheridan would have had to communicate the images to the UK public. In analyzing the judgment, Grosvenor pulled out the requirements for copyright to apply: that making the images required some skill and labor on the part of the person or organization making the claim. By definition, this can’t be true of a photograph of a painting, which needs to be as accurate a representation as possible.

Grosvenor has been on this topic for a while. In 2017, he issued a call to arms in Art History News, arguing that image reproduction fees are “killing art history”.

In 2017, Grosvenor was hopeful, because US museums and a few European ones were beginning to do away with copyright claims and licensing fees and finding that releasing the images to the public to be used for free in any context created value in the form of increased discussion, broadcast, and visibility. Progress continues, as McCarthy’s data shows, but inconsistently: last year the incoming Italian government reversed its predecessor’s stance by bringing back reproduction fees even for scientific journals.

Granted, all of the GLAM sector is cash-strapped and is desperately seeking new sources of income. But these copyright claims seem particularly backwards. It ought to be obvious that the more widely images of an institution’s holdings are published the more people will want to see the original; greater discussion of these art works would seem to fulfill their mission of education. Opening all this up would seem to be a no-brainer. Whether the GLAM folks like it or not, the judge did them a favor.

Illustrations: “Harpist”, from antiphonal, Cambrai or Tournai c. 1260-1270, LA, Getty Museum, Ms. 44/Ludwig VI 5, p. 115 (via Discarding Images).

Infallible

It’s a peculiarity of the software industry that no one accepts product liability. If your word processor gibbers your manuscript, if your calculator can’t subtract, if your phone’s security hole results in your bank account’s being drained, if a chatbot produces entirely false results… it’s your problem, not the software company’s. As software starts driving cars, running electrical grids, and deciding who gets state benefits, the lack of liability will matter in new and dangerous ways. In his 2006 paper The Economics of Information Security, Ross Anderson writes about the “moral-hazard effect” connecting liability and fraud: if you are not liable, you become lazy and careless. Hold that thought.

Add to that a second factor: in the British courts, there is a legal presumption that computers are reliable. The presumption sounds absurd applied to today’s complex computer systems, but the law was framed with smaller mechanical devices such as watches and Breathalyzers in mind. Suggestions that it should be changed go back at least 15 years, but this week they gained new force. It means that someone – say a subpostmaster – accused of theft has to find a way to show the accounting system computer was not operating correctly.

Put those two factors together and you get the beginnings of the Post Office Horizon scandal, which currently occupies just about all of Britain following ITV’s New Year’s airing of the four-part drama Mr Bates vs the Post Office.

For those elsewhere: the scandal is thought to be one of the worst miscarriages of justice in British history. The vast majority of the country’s post offices are run by subpostmasters, each of whom runs their own business under a lengthy and detailed contract. Many, as I learned in 2004, operate their post office counters inside other businesses; most are newsagents, but some share premises with hairdressers or occupy old police stations.

In 1999, the Post Office began rolling out the “Horizon” computer accounting system, which was developed by ICL, formerly a British company but by then owned by Fujitsu. Subpostmasters soon began complaining that the new system reported shortfalls where none existed. Under their contract, subpostmasters bore all liability for discrepancies. The Post Office accordingly demanded payment and prosecuted those from whom it was not forthcoming. Many lost their businesses, their reputations, their homes, and much of their lives, and some were criminally convicted.

In May 2009, Computer Weekly’s Karl Flinders published the first of dozens of articles on the growing scandal. Perhaps most important: Flinders located seven subpostmasters who were willing to be identified. Soon afterwards, Welsh former subpostmaster Alan Bates convened the Justice for Subpostmasters Alliance, which continues to press for exoneration and compensation for the many hundreds of victims.

Pieces of this saga were known, particularly after a 2015 BBC Panorama documentary. Following the drama’s airing, the UK government is planning legislation to exonerate all the Horizon victims and fast-track compensation. The program has also drawn new attention to the ongoing public inquiry, which makes the Post Office look even worse, as do the Panorama team’s revelations of its attempts to suppress the evidence they uncovered. The Metropolitan Police is investigating the Post Office for fraud.

Two elements stand out in this horrifying saga. First: each subpostmaster calling the help line for assistance was told they were the only one having trouble with the system. They were further isolated by being required to sign NDAs. Second: the Post Office insisted that the system was “robust” – that is, “doesn’t make mistakes”. The defendants were doubly screwed; only their accuser had access to the data that could prove their claim that the computer was flawed, and they had no view of the systemic pattern.

It’s extraordinary that the presumption of reliability has persisted this long, since “infallibility” is the claim the banks made when customers began reporting phantom withdrawals years ago, as Ross Anderson discussed in his 1993 paper Why Cryptosystems Fail (PDF). Thirty years later, no one should be trusting any computer system so blindly. Granted, in many cases, doing what the computer says is how you keep your job, but that shouldn’t apply to judges. Or CEOs.

At the Guardian, Alex Hern reports that legal and computer experts have been urging the government to update the law to remove the legal presumption of reliability, especially given the rise of machine learning systems whose probabilistic nature means they don’t behave predictably. We are not yet seeing calls for the imposition of software liability, though the Guardian reports there are suggestions that if the ongoing public inquiry finds Fujitsu culpable for producing a faulty system the company should be required to repay the money it was paid for it. The point, experts tell me, is not that product liability would make these companies more willing to admit their mistakes, but that liability would make them and their suppliers more careful to ensure up front the quality of the systems they build and deploy.

The Post Office saga is a perfect example of Anderson’s moral hazard. The Post Office laid off its liability onto the subpostmasters but retained the right to conduct investigations and prosecutions. When the deck is so stacked, you have to expect a collapsed house of cards. And, as Chris Grey writes, the government’s refusal to give UK-resident EU citizens physical proof of status means it’s happening again.

Illustrations: Local post office.

Relativity

“Status: closed,” the website read. It gave the time as 10:30 p.m.

Except it wasn’t. It was 5:30 p.m., and the store was very much open. Instead of consulting the time zone of the store – I mean, of the store’s particular branch whose hours and address I had looked up – the website was taking the time from my laptop. Which I hadn’t bothered to switch from British time to the US east coast because I can subtract five hours in my head and why bother?

Years ago, I remember writing a rant (which I now cannot find) about the “myness” of modern computers: My Computer, My Documents. My account. And so on, like a demented two-year-old who needed to learn to share. The notion that the time on my laptop determined whether or not the store was open had something of the same feel: the computational universe I inhabit is designed to revolve around me, and any dispute with reality is someone else’s problem.

Modern social media have hardened this approach. I say “modern” because back in the days of bulletin board systems, information services, and Usenet, postings were stamped with the date and time they were sent, time zone specified. Now, every post is labelled “2m” or “30s” or “1d”, so the actual date and time are hidden behind their relationship to “now”. It’s like those maps that rotate along with you so that wherever you’re pointed physically is at the top. I guess it works for some people, but I find it disorienting; instead of the map orienting itself to me, I want to orient myself to the map. This seems to me my proper (infinitesimal) place in the universe.

All of this leads up to the revival of software agents. This was a Big Idea in the late 1990s/early 2000s, when it was commonplace to think that the era of having to make appointments and book train tickets was almost over. Instead, software agents configured with your preferences would do the negotiating for you. Discussions of this sort of thing died away as the technology never arrived. Generative AI has brought this idea back, at least to some extent, particularly in the financial area, where smart contracts can be used to set rules and then run automatically. I think only people who never have to worry about being able to afford anything will like this. But they may be the only ones the “market” cares about.

Somewhere during the time when software agents were originally mooted, I happened to sit at a conference dinner with the University of Maryland human-computer interaction expert Ben Shneiderman. There are, he said, two distinct schools of thought in software. In one, software is meant to adapt to the human using it – think of predictive text and smartphones as an example. In the other, software is consistent, and while using it may be repetitive, you always know that x command or action will produce y result. If I remember correctly, both Shneiderman and I were of the “want consistency” school.

Philosophically, though, these twin approaches have something in common with seeing the universe as if the sun went around the earth as against the earth going around the sun. The first of those makes our planet and, by extension, us far more important in the universe than we really are. The second cuts us down to size. No surprise, then, if the techbros who build these things, like the Catholic church in Galileo’s day, prefer the former.

***

Politico has started the year by warning that the UK is seeking to expand its surveillance regime even further by amending the 2016 Investigatory Powers Act. Unnoticed in the run-up to Christmas, the industry body techUK sent a letter to “express our concerns”. The short version: the bill expands the definition of “telecommunications operator” to include non-UK providers when operating outside the UK; allows the Home Office to require companies to seek permission before making changes to a privately and uniquely specified list of services; and the government wants to whip it through Parliament as fast as possible.

No, no, Politico reports the Home Office told the House of Lords, it supports innovation and isn’t threatening encryption. These are minor technical changes. But: “public safety”. With the ink barely dry on the Online Safety Act, here we go again.

***

As data breaches go, the one recently reported by 23andMe is alarming. By using passwords exposed in previous breaches (“credential stuffing”) to break into 14,000 accounts, attackers gained access to 6.9 million account profiles. The reason is reminiscent of the Cambridge Analytica scandal, where access to a few hundred thousand Facebook accounts was leveraged to obtain the data of millions: people had turned on “DNA Relatives” to allow themselves to be found by those searching for genetic relatives. The company, which afterwards made two-factor authentication a requirement, is fending off dozens of lawsuits by blaming the users for reusing passwords. According to Gizmodo, the legal messiness is considerable, as the company recently changed its terms and conditions to make arbitration more difficult and litigation almost impossible.

There’s nothing good to say about a data breach like this or a company that handles such sensitive data with such disdain. But it’s yet one more reason why putting yourself at the center of the universe is bad hoodoo.

Illustrations: DNA strands (via Wikimedia).
