Irreparable harm, part II

Time to revisit the doping case of Kamila Valieva, the 15-year-old Russian figure skater who was allowed to compete in the February 2022 Winter Olympics despite testing positive for the banned substance trimetazidine, on the grounds that barring her from competition would cause her “irreparable harm”. The harm was never defined, but the reasoning presumably went something like this: careers are short, Valieva was a multiple champion and the top prospect for gold, and she could be disqualified later but couldn’t retroactively compete. An adult would have been disqualified there and then, but 15-year-olds were made “protected persons” in the World Anti-Doping Agency’s 2021 code.

Two years on, the Court of Arbitration for Sport has confirmed she is banned for four years, backdated to December 2021, and will be stripped of the results, prizes, medals, and awards she has won in the interim. CAS, the ultimate authority in such cases, did not buy her defense that her grandfather, who is prescribed trimetazidine, inadvertently contaminated her food. She will be eligible to compete again in December 2026.

In a paper, Marcus Campos, Jim Parry, and Irena Martínková conclude that WADA’s concept of the “protected person” “transforms…potential victims into suspects”. As they write, the “protection” is more imagined than real, since minors are subjected to the same tests and the same sanctions as adults. While the code talks of punishing those involved in doping minors, to date the only person suffering consequences for Valieva’s positive test is Valieva, still a minor but no longer a “protected person”.

A reminder: besides her positive test for trimetazidine, widely used in Russia to prevent angina attacks, Valieva had therapeutic use exemptions for two other heart medications. How “protected” is that? Shouldn’t the people authorizing TUEs raise the alarm when a minor is being prescribed multiple drugs for a condition vastly more typical of middle-aged men?

According to the Anti-doping Database, doping is not particularly common in figure skating – but Russia has the most cases. WADA added trimetazidine to the banned list in 2014 as a metabolic modulator; if it helps athletes it’s by improving cardiovascular efficiency and therefore endurance. CNN compares it to meldonium, the drug that got tennis player Maria Sharapova banned in 2016.

In a statement, the World Anti-Doping Agency said it welcomed the ruling, adding: “The doping of children is unforgivable. Doctors, coaches or other support personnel who are found to have provided performance-enhancing substances to minors should face the full force of the World Anti-Doping Code. Indeed, WADA encourages governments to consider passing legislation – as some have done already – making the doping of minors a criminal offence.” That seems essential for real protection; otherwise the lowered sanctions imposed upon minors could be an incentive to take more risks doping them.

The difficulty is that underage athletes are simultaneously children and professional athletes competing as peers with adults. For the rules of the sport itself, of course, the rules must be the same; 16-year-old Mirra Andreeva doesn’t get an extra serve or a larger tennis court to hit into. Hence 2014 bronze medalist Ashley Wagner’s response to an exTwitter poster calling the ruling irrational and cruel: “every athlete plays by the same rules”. But anti-doping protocols are different, involving issues of consent, medical privacy, and public shaming. For the rest of the field, it’s not fair to exempt minors from the doping rules that apply to everyone else; for the minor, who lacks agency and autonomy, it’s not fair if you don’t. This is only part of the complexity of designing an anti-doping system and applying it equally to minors, 40-something hundred-millionaire tennis players, and minimally funded athletes in minority sports who go back to their day jobs when the competition ends.

Along with its statement, WADA launched the Operation Refuge report (PDF) on doping and minors. The most commonly identified doping substance for both girls and boys is the diuretic furosemide followed by methylphenidate (better known as the ADHD medication Ritalin). The most positive tests come from Russia, India, and China. The youngest child sanctioned for a doping violation was 12. The report goes on to highlight the trauma and isolation experienced by child athletes who test positive – one day a sporting hero, the next a criminal.

The alphabet soup of organizations in charge of Valieva’s case – the Russian Anti-Doping Agency, the International Skating Union, WADA, CAS – could hardly have made a bigger mess. The delays: it took six weeks to notify Valieva of her positive test, and two years to decide her case. Then, despite the expectation that disqualifying Valieva disqualifies her entire team, the ISU recalculated the standings, giving the US gold, Japan silver, and Russia bronze. The Canadian team, which placed fourth, is considering an appeal; Russia is preparing one. Ironically, according to this analysis by Martina Frammartino, the Russian bench is so strong that it could easily have won gold if Valieva’s positive test had come through in time to replace her.

I’ve never believed that the anti-doping system was fit for purpose; considered as a security system, too many incentives are misaligned, as became clear in 2016, when the extent of Russian state-sponsored doping became public. This case shows the system at its worst.

Illustrations: Kamila Valieva in 2018 (via Luu at Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon.

Trust busted

It’s hard not to be agog at the ongoing troubles of Boeing. The covid-19 pandemic was the first break in our faith in the ready availability of travel; this is the second. As one of only two global manufacturers of commercial airplanes, Boeing’s problems are the industry’s problems.

I’ve often heard cybersecurity researchers talk with envy of the aviation industry. Usually, it’s because access to data is a perennial problem; many companies don’t want to admit they’ve been hacked or talk about it when they do. By contrast, they’ve said, the aviation industry recognized early that convincing people that flying was safe was crucial to everyone’s success, and that every crash hurt everyone’s prospects, not just those of the competitor whose plane went down. The result was industry-wide adoption of strategies designed to maximize collaboration on safety: data sharing, no-fault reporting, and so on. That hard-won public trust, we now see, has allowed modern industry players to coast on their past reputations. With this added irony: while airlines and governments everywhere have focused on deterring terrorists, the risks are coming from within the industry.

I’m the right age to have rarely worried about aviation safety – young enough to have missed the crashes of the early years, old enough that my first flights were taken in childhood. Isaac Asimov, born in 1920, who said he refused to fly because passengers didn’t have a “sporting” chance of survival in a crash, was actually wrong: the survival rate for airplane crashes is over 90%. Many people feel safer when they feel in control. Yet, as Bruce Schneier has frequently said, you’re at greater risk on the drive to the airport than you are on the plane.

In fact, it’s an extraordinary privilege that most of us worry more about delays, lost luggage, bad food, and cramped seating than whether our flight will land safely. The 2018 crash of a Boeing 737 MAX 8 did little to dislodge this general sense of safety, even though 189 people died, and the same was true following the 2019 crash of the same model, which killed another 157 people. Boeing tried to sell the idea that the fault lay with inadequately trained pilots working for substandard (read: not American or European) airlines, but the reality quickly became plain: the company had skimped on testing and training, and its famed safety-first, engineering-led culture had disintegrated under pressure to reward shareholders and executives.

We were able to tell ourselves that it was one model of plane, and that changes followed, as Bloomberg investigative reporter Peter Robison documents in Flying Blind: The 737 MAX Tragedy and the Fall of Boeing. In particular, in 2020 the US Congress undid the legal change that had let Boeing self-certify and restored the Federal Aviation Administration’s obligation of direct oversight, some executives were replaced, and a test pilot faced criminal charges. However, Robison wrote for publication in 2021, many inside the industry, not just at Boeing, thought the FAA’s 20-month grounding of the MAX was “an overreaction”. You might think – as I did – that the airlines themselves would be strongly motivated not to fly planes that could put their safety record at risk, but Robison’s reporting is not comforting about that: the MAX, he writes, is “a moneymaker” for the airlines in that it saves 15% on fuel costs per flight.

Still, the problem seemed to be confined to one model of plane. Until, on January 5, the door plug blew out of a 737 MAX 9. A day later, the FAA grounded all planes of that model for safety inspections.

On January 13, a crack was found in a cockpit window of a 737-800 in Japan. On January 19, a cargo 747-8 caught fire leaving Miami. On January 24, Alaska Airlines reported finding many loose bolts during its fleetwide inspection of 737 MAX 9s. Around the same time, the nose wheel fell off a 757 departing Atlanta. Near-simultaneously, the Seattle Times reported that Boeing itself had installed the door plug that blew out, not its supplier, Spirit AeroSystems. The online booking agent and price comparison site Kayak announced that increasing use of its aircraft-specific filter had led it to add separate options to avoid 737 MAX 8s and 9s.

The consensus that formed about the source of the troubles behind the 2018-2019 crashes is holding: blame focuses on the change in company culture brought by the 1997 merger with McDonnell Douglas, which valued profits and shareholder payouts over engineering. Boeing is in for a period of self-reinvention in which its output will be greatly slowed. As airlines’ current fleets age, this will have to mean reduced capacity; there are only two major aircraft manufacturers in the world, and the other one – Airbus – is fully booked.

As Cory Doctorow writes, that’s only one constraint going forward, at least in the US: there aren’t enough pilots, air traffic controllers, or engine manufacturers. Anti-monopolist Matt Stoller proposes to nationalize and then break up Boeing, arguing that its size and importance mean only the state can backstop its failures. Ten years ago, when the US’s four big legacy airlines consolidated to three, it was easy to think passengers would pay in fees and lost comfort; now we know safety was on the line, too.

Illustrations: The Wright Brothers’ first heavier-than-air flight, in 1903 (via Wikimedia).


Objects of copyright

Back at the beginning, the Internet was going to open up museums to those who can’t travel to them. Today…

At the Art Newspaper, Bendor Grosvenor reports that a November judgment from the UK Court of Appeal means museums can’t go on claiming copyright in photographs of public domain art works. Museums have used this claim to create costly licensing schemes. For art history books and dissertations that need the images for discussion, the costs are often prohibitive. And, it turns out, the “GLAM” (galleries, libraries, archives, and museums) sector isn’t even profiting from it.

Grosvenor cites figures: the National Gallery alone lost £31,000 on its licensing scheme in 2021-2022 (how? is left as an exercise for the reader). This figure was familiar: Douglas McCarthy, whom the article quotes, cited it at gikii 2023. As an ongoing project with Andrea Wallace, McCarthy co-runs the Open GLAM survey, which collects data to show the state of open access in this sector.

In his talk, McCarthy, an art historian by training and the head of the Library Learning Center at Delft University of Technology, showed that *most* such schemes are losing money. The National Gallery of Liverpool, for example, lost £71,873 on licensing activities between 2018 and 2023.

Like Grosvenor, McCarthy noted that the scholars whose work underpins the work of museums and libraries are finding it increasingly difficult to afford that work. One of McCarthy’s examples was St Andrews art history professor Kathryn M. Rudy, who summed up her struggles in a 2019 piece for Times Higher Education: “The more I publish, the poorer I get.”

Rudy’s problem is that publishing in art history, which is necessary for university hiring and promotion, requires the use of images of the works under discussion. In her own case, the 1,419 images she needed to publish six monographs and 15 articles have consumed most of her disposable income. To be fair, licensing fees are only part of this. She also lists travel to view offline collections, the costs of attending conferences, data storage, academic publishers’ production fees, and paying for the copies of books her contracts require her to send the libraries supplying the images; some of this is covered by her university. But much of the extra cost comes from licensing fees that add up to thousands of pounds for the material necessary for a single book: reproduction fees, charges for buying high-resolution copies for publication, and even, when institutions allow it at all, fees for photographing images in situ using her phone. Yet these institutions are publicly funded, and the works she is photographing were bought with money provided by taxpayers.

On the face of it, THJ v. Sheridan, as explained in a legal summary by the law firm Penningtons Manches Cooper, doesn’t seem to have much to do with the GLAM sector. Instead, the central copyright claim concerned the defendant’s use of the plaintiff’s software in webinars and presentations. However, the point, as the Kluwer Copyright blog explains, was deciding which test to apply in determining whether a copyrighted work is original.

In court, THJ, a UK-based software development firm, claimed that Daniel Sheridan, a US options trading mentor and former licensee, had misrepresented its software as his own, and that he had violated THJ’s copyright after his license agreement expired by including images of the software in his presentations. The first of these claims failed on the basis that the THJ logo and copyright notices were displayed throughout the presentations.

The second is the one that interests us here: THJ claimed copyright in the images of its software based on the 1988 Copyright, Designs, and Patents Act. The judge, however, ruled that while the CDPA applies to the software, images of the software’s graphical interface are not covered; to constitute infringement Sheridan would have had to communicate the images to the UK public. In analyzing the judgment, Grosvenor pulled out the requirements for copyright to apply: that making the images required some skill and labor on the part of the person or organization making the claim. By definition, this can’t be true of a photograph of a painting, which needs to be as accurate a representation as possible.

Grosvenor has been on this topic for a while. In 2017, he issued a call to arms in Art History News, arguing that image reproduction fees are “killing art history”.

In 2017, Grosvenor was hopeful, because US museums and a few European ones were beginning to do away with copyright claims and licensing fees and finding that releasing the images to the public to be used for free in any context created value in the form of increased discussion, broadcast, and visibility. Progress continues, as McCarthy’s data shows, but inconsistently: last year the incoming Italian government reversed its predecessor’s stance by bringing back reproduction fees even for scientific journals.

Granted, all of the GLAM sector is cash-strapped and desperately seeking new sources of income. But these copyright claims seem particularly backwards. It ought to be obvious that the more widely images of an institution’s holdings are published, the more people will want to see the originals; greater discussion of these art works would seem to fulfill institutions’ mission of education. Opening all this up would seem to be a no-brainer. Whether the GLAM folks like it or not, the judge did them a favor.

Illustrations: “Harpist”, from antiphonal, Cambrai or Tournai c. 1260-1270, LA, Getty Museum, Ms. 44/Ludwig VI 5, p. 115 (via Discarding Images).


Infallible

It’s a peculiarity of the software industry that no one accepts product liability. If your word processor gibbers your manuscript, if your calculator can’t subtract, if your phone’s security hole results in your bank account’s being drained, if a chatbot produces entirely false results…it’s your problem, not the software company’s. As software starts driving cars, running electrical grids, and deciding who gets state benefits, the lack of liability will matter in new and dangerous ways. In his 2006 paper The Economics of Information Security, Ross Anderson writes about the “moral-hazard effect”, the connection between liability and fraud: if you are not liable, you become lazy and careless. Hold that thought.

Add to that: in the British courts, there is a legal presumption that computers are reliable. Suggestions that this law should be changed go back at least 15 years, but this week they gained new force. The presumption sounds absurd if applied to today’s complex computer systems, but the law was framed with smaller mechanical devices such as watches and Breathalyzers in mind. It means, however, that someone – say, a subpostmaster – accused of theft has to find a way to show that the accounting system’s computer was not operating correctly.

Put those two factors together and you get the beginnings of the Post Office Horizon scandal, which currently occupies just about all of Britain following ITV’s New Year’s airing of the four-part drama Mr Bates vs the Post Office.

For those elsewhere: the Horizon case is thought to be one of the worst miscarriages of justice in British history. The vast majority of the country’s post offices are run by subpostmasters, each of whom runs their own business under a lengthy and detailed contract. Many, as I learned in 2004, operate their post office counters inside other businesses; most are newsagents, but some share space with old police stations and hairdressers.

In 1999, the Post Office began rolling out the “Horizon” computer accounting system, which was developed by ICL, formerly a British company but by then owned by Fujitsu. Subpostmasters soon began complaining that the new system reported shortfalls where none existed. Under their contract, subpostmasters bore all liability for discrepancies. The Post Office accordingly demanded payment and prosecuted those from whom it was not forthcoming. Many lost their businesses, their reputations, their homes, and much of their lives, and some were criminally convicted.

In May 2009, Computer Weekly’s Karl Flinders published the first of dozens of articles on the growing scandal. Perhaps most important: the reporting located seven subpostmasters who were willing to be identified. Soon afterwards, Welsh former subpostmaster Alan Bates convened the Justice for Subpostmasters Alliance, which continues to press for exoneration and compensation for the many hundreds of victims.

Pieces of this saga were known, particularly after a 2015 BBC Panorama documentary. Following the drama’s airing, the UK government is planning legislation to exonerate all the Horizon victims and fast-track compensation. The program has also drawn new attention to the ongoing public inquiry, which…makes the Post Office look so much worse, as do the Panorama team’s revelations of its attempts to suppress the evidence they uncovered. The Metropolitan Police is investigating the Post Office for fraud.

Two elements stand out in this horrifying saga. First: each subpostmaster calling the help line for assistance was told they were the only one having trouble with the system. They were further isolated by being required to sign NDAs. Second: the Post Office insisted that the system was “robust” – that is, “doesn’t make mistakes”. The defendants were doubly screwed; only their accuser had access to the data that could prove their claim that the computer was flawed, and they had no view of the systemic pattern.

It’s extraordinary that the presumption of reliability has persisted this long, since “infallibility” is the claim the banks made when customers began reporting phantom withdrawals years ago, as Ross Anderson discussed in his 1993 paper Why Cryptosystems Fail (PDF). Thirty years later, no one should be trusting any computer system so blindly. Granted, in many cases, doing what the computer says is how you keep your job, but that shouldn’t apply to judges. Or CEOs.

At the Guardian, Alex Hern reports that legal and computer experts have been urging the government to update the law to remove the legal presumption of reliability, especially given the rise of machine learning systems whose probabilistic nature means they don’t behave predictably. We are not yet seeing calls for the imposition of software liability, though the Guardian reports there are suggestions that if the ongoing public inquiry finds Fujitsu culpable for producing a faulty system, the company should be required to repay the money it was paid for it. The point, experts tell me, is not that product liability would make these companies more willing to admit their mistakes, but that liability would make them and their suppliers more careful to ensure up front the quality of the systems they build and deploy.

The Post Office saga is a perfect example of Anderson’s moral hazard. The Post Office laid off its liability onto the subpostmasters but retained the right to conduct investigations and prosecutions. When the deck is so stacked, you have to expect a collapsed house of cards. And, as Chris Grey writes, the government’s refusal to give UK-resident EU citizens physical proof of status means it’s happening again.

Illustrations: Local post office.


Relativity

“Status: closed,” the website read. It gave the time as 10:30 p.m.

Except it wasn’t. It was 5:30 p.m., and the store was very much open. The website, instead of consulting the time zone of the store – I mean, of the store’s particular branch whose hours and address I had looked up – was taking the time from my laptop. Which I hadn’t bothered to switch from Britain to the US east coast, because I can subtract five hours in my head and why bother?
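The fix for this particular bug is old and well understood: open-or-closed should be computed from the branch’s own IANA time zone, not from whatever clock the visitor happens to carry. A minimal sketch in Python (the function and the example hours are mine, for illustration):

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

def store_is_open(store_tz, opens, closes, now_utc=None):
    """Decide open/closed in the *store's* time zone, never the visitor's clock."""
    if now_utc is None:
        now_utc = datetime.now(timezone.utc)
    local = now_utc.astimezone(ZoneInfo(store_tz))
    return opens <= local.time() < closes

# 10:30 p.m. in wintertime Britain is 22:30 UTC...
when = datetime(2024, 1, 15, 22, 30, tzinfo=timezone.utc)
# ...but only 5:30 p.m. for a branch on the US east coast, which is still open:
print(store_is_open("America/New_York", time(9, 0), time(21, 0), now_utc=when))  # True
```

A site that answers the question this way reports the same status no matter whose laptop is asking.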

Years ago, I remember writing a rant (which I now cannot find) about the “myness” of modern computers: My Computer, My Documents. My account. And so on, like a demented two-year-old who needed to learn to share. The notion that the time on my laptop determined whether or not the store was open had something of the same feel: the computational universe I inhabit is designed to revolve around me, and any dispute with reality is someone else’s problem.

Modern social media have hardened this approach. I say “modern” because back in the days of bulletin board systems, information services, and Usenet, postings were time- and date-stamped with when they were sent, time zone specified. Now, every post is labelled “2m” or “30s” or “1d”, so the actual date and time are hidden behind their relationship to “now”. It’s like those maps that rotate along with you so that wherever you’re pointed physically is at the top. I guess it works for some people, but I find it disorienting; instead of the map orienting itself to me, I want to orient myself to the map. This seems to me my proper (infinitesimal) place in the universe.
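Those “2m”/“1d” labels are trivially cheap to compute, which is part of why they are everywhere; what they cost is the absolute instant. A sketch of the arithmetic (the function name is mine, and real sites vary the thresholds):

```python
from datetime import datetime, timezone

def relative_label(posted, now):
    """Collapse an absolute, time-zoned timestamp into a 'now'-relative label."""
    s = int((now - posted).total_seconds())
    if s < 60:
        return f"{s}s"
    if s < 3600:
        return f"{s // 60}m"
    if s < 86400:
        return f"{s // 3600}h"
    return f"{s // 86400}d"

posted = datetime(2024, 1, 5, 14, 7, tzinfo=timezone.utc)
now = datetime(2024, 1, 6, 16, 0, tzinfo=timezone.utc)
print(relative_label(posted, now))  # "1d" -- but when, exactly? The display no longer says
print(posted.isoformat())           # the map-oriented alternative, a fixed point in time
```

The second line is the Usenet-style answer: orient the reader to the timestamp, not the timestamp to the reader.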

All of this leads up to the revival of software agents. This was a Big Idea in the late 1990s/early 2000s, when it was commonplace to think that the era of having to make appointments and book train tickets was almost over. Instead, software agents configured with your preferences would do the negotiating for you. Discussions of this sort of thing died away as the technology never arrived. Generative AI has brought this idea back, at least to some extent, particularly in the financial area, where smart contracts can be used to set rules and then run automatically. I think only people who never have to worry about being able to afford anything will like this. But they may be the only ones the “market” cares about.

Somewhere during the time when software agents were originally mooted, I happened to sit at a conference dinner with the University of Maryland human-computer interaction expert Ben Shneiderman. There are, he said, two distinct schools of thought in software. In one, software is meant to adapt to the human using it – think of predictive text and smartphones as an example. In the other, software is consistent, and while using it may be repetitive, you always know that x command or action will produce y result. If I remember correctly, both Shneiderman and I were of the “want consistency” school.

Philosophically, though, these twin approaches have something in common with seeing the universe as if the sun went around the earth as against the earth going around the sun. The first of those makes our planet and, by extension, us far more important in the universe than we really are. The second cuts us down to size. No surprise, then, if the techbros who build these things, like the Catholic church in Galileo’s day, prefer the former.

***

Politico has started the year by warning that the UK is seeking to expand its surveillance regime even further by amending the 2016 Investigatory Powers Act. Unnoticed in the run-up to Christmas, the industry body techUK sent a letter to “express our concerns”. The short version: the bill expands the definition of “telecommunications operator” to include non-UK providers when operating outside the UK; allows the Home Office to require companies to seek permission before making changes to a privately and uniquely specified list of services; and the government wants to whip it through Parliament as fast as possible.

No, no, Politico reports the Home Office told the House of Lords, it supports innovation and isn’t threatening encryption. These are minor technical changes. But: “public safety”. With the ink barely dry on the Online Safety Act, here we go again.

***

As data breaches go, the one recently reported by 23andMe is alarming. By using passwords exposed in previous breaches (“credential stuffing”) to break into 14,000 accounts, attackers gained access to 6.9 million account profiles. The reason is reminiscent of the Cambridge Analytica scandal, where access to a few hundred thousand Facebook accounts was leveraged to obtain the data of millions: people had turned on “DNA Relatives” to allow themselves to be found by those searching for genetic relatives. The company, which afterwards turned on a requirement for two-factor authentication, is fending off dozens of lawsuits by blaming the users for reusing passwords. According to Gizmodo, the legal messiness is considerable, as the company recently changed its terms and conditions to make arbitration more difficult and litigation almost impossible.

There’s nothing good to say about a data breach like this or a company that handles such sensitive data with such disdain. But it’s yet one more reason why putting yourself at the center of the universe is bad hoodoo.

Illustrations: DNA strands (via Wikimedia).


Last year’s news

It was tempting to skip wrapping up 2023, because at first glance large language models seemed so thoroughly dominant (and boring to revisit), but bringing the net.wars archive list up to date showed a different story. To be fair, this is partly personal bias: from the beginning LLMs seemed fated to collapse under the weight of their own poisoning; AI Magazine predicted such an outcome as early as June.

LLMs did, however, seem to accelerate public consciousness of three long-running causes of concern: privacy and big data; corporate cooption of public and private resources; and antitrust enforcement. That acceleration may be LLMs’ more important long-term effect. In the short term, the justifiably bigger concern is their propensity to spread disinformation and misinformation in the coming year’s many significant elections.

Enforcement of data protection laws has been slowly ramping up in any case, and the fines just keep getting bigger, culminating in May’s fine against Meta for €1.2 billion. Given that fines, no matter how large, seem insignificant compared to the big technology companies’ revenues, the more important trend is issuing constraints on how they do business. That May fine came with an order to stop sending EU citizens’ data to the US. Meta responded in October by announcing a subscription tier for European Facebook users: €160 a year will buy freedom from ads. Freedom from Facebook remains free.

But Facebook is almost 20 years old; it had years in which to grow without facing serious regulation. By contrast, ChatGPT, which OpenAI launched just over a year ago, has already faced investigation by the US Federal Trade Commission and been banned temporarily by the Italian data protection authority (it was reinstated a month later with conditions). It’s also facing more than a dozen lawsuits claiming copyright infringement; the most recent of these was filed just this week by the New York Times. It has settled one of these suits by forming a partnership with Axel Springer.

It all suggests a lessening tolerance for “ask forgiveness, not permission”. As another example, Clearview AI has spent most of the four years since Kashmir Hill alerted the world to its existence facing regulatory bans and fines, and public disquiet over the rampant spread of live facial recognition continues to grow. Add in the continuing degradation of exTwitter, the increasing number of friends who say they’re dropping out of social media generally, and the revival of US antitrust actions with the FTC’s suit against Amazon, and it feels like change is gathering.

It would be a logical time, for an odd reason: each of the last few decades as seen through published books has had a distinctive focus with respect to information technology. I discovered this recently when, for various reasons, I reorganized my hundreds of books on net.wars-type subjects dating back to the 1980s. How they’re ordered matters: I need to be able to find things quickly when I want them. In 1990, a friend’s suggestion of categorizing by topic seemed logical: copyright, privacy, security, online community, robots, digital rights, policy… The categories quickly broke down and cross-pollinated. In rebuilding the library, what to replace it with?

The exercise, which led to alphabetizing by author’s name within decade of publication, revealed that each of the last few decades has been distinctive enough that it’s remarkably easy to correctly identify a book’s decade without turning to the copyright page to check. The 1980s and 1990s were about exploration and explanation. Hype led us into the 2000s, which were quieter in publishing terms, though marked by bursts of business books that spanned the dot-com boom, bust, and renewal. The 2010s brought social media, content moderation, and big data, and a new set of technologies to hype, such as 3D printing and nanotechnology (about which we hear nothing now). The 2020s, it’s too soon to tell…but safe to say disinformation, AI, and robots are dominating these early years.

The 2020s books to date are trying to understand how to rein in the worst effects of Big Tech: online abuse, cryptocurrency fraud, disinformation, the loss of control as even physical devices turn into manufacturer-controlled subscription services, and, as predicted in 2018 by Christian Wolmar, the ongoing failure of autonomous vehicles to take over the world as projected just ten years ago.

While Teslas are not autonomous, the company’s Silicon Valley ethos has always made them seem more like information technology than cars. Bad idea, as Reuters reports; its investigation found a persistent pattern of mishaps such as part failures and wheels falling off – and an equally persistent pattern of the company blaming the customer, even when the car was brand new. If we don’t want shoddy goods and data invasion with everything to be our future, fighting back is essential. In 2032, I hope looking back will show that story.

The good news going into 2024 is, as the Center for the Public Domain at Duke University, Public Domain Review, and Cory Doctorow write, the bumper crop of works entering the public domain: sound recordings (for the first time in 40 years), DH Lawrence’s Lady Chatterley’s Lover, Agatha Christie’s The Mystery of the Blue Train, Ben Hecht and Charles MacArthur’s play The Front Page, and the first appearances of Mickey Mouse. Happy new year.

Illustrations: Promotional still from the 1928 production of The Front Page, which enters the public domain on January 1, 2024 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

A surveillance state of mind

”Do computers automatically favor authoritarianism?” a friend asked recently. Or, are they fundamentally anti-democratic?

Certainly, at the beginning, many thought that both the Internet and personal computers (think, for example, of Apple’s famed Super Bowl ad, “1984”) would favor democratic ideals by embedding values such as openness, transparency, and collaborative policy-making in their design. Universal access to information and to networks of distribution was always going to have downsides, but on balance was going to be a Good Thing (actually, I still believe this). So, my friend was asking, were those hopes always fundamentally absurd, or were the problems of disinformation and widespread installation of surveillance technology always inevitable for reasons inherent in the technology itself?

Computers, like all technology, are what we make them. But one fundamental characteristic does seem to me unavoidable: they upend the distribution of data-related costs. In the physical world, more data always involved more expense: storing it required space, and copying or transmitting it took time, ink, paper, and personnel. In the computer world, more data is only marginally more expensive, and what costs remain have kept falling for 70 years. For most purposes, more digital data incurs minimal costs. The expenses of digital data only kick in when you curate it: selection and curation take time and personnel. So the easiest path with computer data is always to keep it. In that sense, computers inevitably favor surveillance.

The marketers at companies that collect our data try to argue that doing so is a public *good* because it enables them to offer personalized services that benefit us. Underneath, of course, there are too many economic incentives for them not to “share” – that is, sell – it onward, creating an ecosystem that sends our data careening all over the place, and where “personalization” becomes “surveillance” and then, potentially, “malveillance”, which is definitely not in our interests.

At a 2011 workshop on data abuse, participants noted that the mantra of the day was “the data is there, we might as well use it”. At the time, there was a definite push from the industry to move from curbing data collection to regulating its use instead. But this is the problem: data is tempting. This week has provided a good example of just how tempting in the form of a provision in the UK’s criminal justice bill that will allow police to use the database of driver’s license photos for facial recognition searches. “A permanent police lineup,” privacy campaigners are calling it.

As long ago as 1996, the essayist and former software engineer Ellen Ullman called out this sort of temptation, describing it as a system “infecting” its owner. Data tempts those with access to it to ask questions they couldn’t ask before. In many cases that’s good. Data enables Patrick Ball’s Human Rights Data Analysis Group to establish “who did what to whom” in cases of human rights abuse. But, in the downside in Ullman’s example, it undermines the trust between a secretary and her boss, who realizes he can use the system to monitor her work, despite prior decades of trust. In the UK police example, the downside is tempting the authorities to combine the country’s extensive network of CCTV images and the largest database of photographs of UK residents. “Crime scene investigations,” say police and ministers. “Chill protests,” the rest of us predict. In a story I’m writing for the successor to the Cybersalon anthology Twenty-Two Ideas About the Future, I imagined a future in which police have the power and technology to compel every camera in the country to join a national network they control. When it fails to solve an important crime of the day, they successfully argue it’s because the network’s availability was too limited.

The emphasis on personalization as a selling point for surveillance – if you turn it off you’ll get irrelevant ads! – reminds us that studies of astrology starting in 1949 have found that people’s rating of their horoscopes varies directly with how personalized they perceive them to be. The horoscope they are told has been drawn up just for them by an astrologer gets much higher ratings than the horoscope they are told is generally true of people with their sun sign – even when it’s the *same* horoscope.

Personalization is the carrot businesses use to get us to feed our data into their business models; their privacy policies dictate the terms. Governments can simply compel disclosure as a requirement for a benefit we’re seeking – like the photo required to get a driver’s license, passport, or travel pass. Or, under greater duress, to apply for or await a decision about asylum, or try to cross a border.

“There is no surveillance state,” then-Home Secretary Theresa May said in 2014. No, but if you put all the pieces in place, a future government of a malveillance state of mind can turn it on at will.

So, going back to my friend’s question. Yes, of course we can build the technology so that it favors democratic values instead of surveillance. But because of that fundamental characteristic that makes creating and retaining data the default and the business incentives currently exploiting the results, it requires effort and thought. It is easier to surveil. Malveillance, however, requires power and a trust-no-one state of mind. That’s hard to design out.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell lived circa 1927.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Game of carrots

The big news of the week has been the result of the Epic Games v. Google antitrust trial. A California jury took four hours to agree with Epic that Google had illegally tied together its Play Store and billing service, so that app makers could only use the Play Store to distribute their apps if they also used Google’s service for billing, giving Google a 30% commission. Sort of like, I own half the roads in this town, and if you want to sell anything to my road users you have to have a store in my mall and pay me a third of your sales revenue, and if you don’t like it, tough, because you can’t reach my road users any other way. Meanwhile, the owner of the other half of the town’s roads is doing exactly the same thing, so you can’t win.

At his BIG Substack, antitrust specialist Matt Stoller, who has been following the trial closely, gloats, “the breakup of Big Tech begins”. Maybe not so fast: Epic lost its similar case against Apple. Both of these cases are subject to appeal. Stoller suggests, however, that the latest judgment will carry more weight because it came from a jury of ordinary citizens rather than, as in the Apple case, a single judge. Stoller believes the precedent set by a jury trial is harder to ignore in future cases.

At The Verge, Sean Hollister, who has been covering the trial in detail, offers a summary of 20 key points he felt the trial established. Written before the verdict, Hollister’s assessment of Epic’s chances proved correct.

Even if the judgment is upheld in the higher courts, it will be a while before users see any effects. But: even if the judgment is overturned in the higher courts, my guess is that the technology companies will begin to change their behavior at least a bit, in self-defense. The real question is, what changes will benefit us, the people whose lives are increasingly dominated by these phones?

I personally would like it to be much easier to use an Android phone without ever creating a Google account, and to be confident that the phone isn’t sending masses of tracking data to either Google or the phone’s manufacturer.

But…I would still like to be able to download the apps I want from a source I can trust. I care less about who provides the source than I do about what data they collect about me and the cost.

I want that source to be easy to access, easy to use, and well-stocked, defining “well-stocked” as “has the apps I want” (which, granted, is a short list). The nearest analogy that springs to mind is TV channels. You don’t really care what channel the show you want to watch is on; you just want to be able to watch the show without too much hassle. If there weren’t so many rights holders running their own streaming services, the most sensible business logic would be for every show to be on every service. Then instead of competing on their catalogues, the services would be competing on privacy, or interface design, or price. Why shouldn’t we have independent app stores like that?

Mobile phones have always been more tightly controlled than the world of desktop computing, largely because they grew out of the tightly controlled telecommunications world. Desktop computing, like the Internet, served first the needs of the military and academic research, and they remain largely open even when they’re made by the same companies who make mobile phone operating systems. Desktop systems also developed at a time when American antitrust law still sought to increase competition.

It did not stay that way. As current FTC chair Lina Khan made her name pointing out in 2017, antitrust thinking for the last several decades has been limited to measuring consumer prices. The last big US antitrust case to focus on market effects was Microsoft, back in 1995. In the years since, it’s been left to the EU to act as the world’s antitrust enforcer. Against Google, the EU has filed three cases since 2010: over Shopping (Google was found guilty in 2017 and fined €2.4 billion, upheld on appeal in 2021); Android, over Google apps and the Play Store (Google was found guilty in 2018 and fined €4.3 billion and required to change some of its practices); and AdSense (fined €1.49 billion in 2019). But fines – even if the billions eventually add up to real money – don’t matter enough to companies with revenues the size of Google’s. Being ordered to restructure its app store might.

At the New York Times, Steve Lohr compares the Microsoft and Epic v Google cases. Microsoft used its contracts with PC makers to prevent them from preinstalling its main web browser rival, Netscape, in order to own users’ path into the accelerating digital economy. Google’s contracts instead paid Apple, Samsung, Mozilla, and others to favor it on their systems – “carrots instead of sticks,” NYU law professor Harry First told Lohr.

The best thing about all this is that the Epic jury was not dazzled by the incomprehensibility effect of new technology. Principles are coming back into focus. Tying – leveraging your control over one market in order to dominate another – is no different when it happens in app stores than when it happens in gas stations or movie theaters.

Illustrations: “The kind of anti-trust legislation that is needed”, by J.S. Pughe (via Library of Congress).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Property is theft

If you were to judge just by behavior, you would have to conclude that the entertainment industry’s rights holders are desperate to promote piracy.

The latest instance is that Sony has warned American Playstation owners that shows they purchased – *bought* – from Discovery will, now that Discovery has merged with Warner Brothers, be removed from their video libraries. This isn’t like Netflix losing the license to stream your current favorite show halfway through season 2, which you can maybe fix by joining whichever streaming service the show is now on (assuming there is one). No, this is: you (thought you) bought it, and they took it away.

In other words, the entertainment industry has taken the old anarchist slogan property is theft and turned it into a business model.

This isn’t a one-time occurrence. As Timothy Geigner writes at TechDirt, in 2022 customers in Germany and Austria lost access to hundreds of movies when a deal between Sony and film distributor Studio Canal expired. As in the Warner Brothers/Discovery case, it’s not just that the movies were removed from the list available for purchase; the long, remote arm of Sony reached into individual Playstations and removed them from there, too.

If Warner Brothers sent a minion to come into my house, take a DVD from a shelf, and take it away, that would clearly be theft, even if I had given the company a key so it could come in and update my Blu-Ray player. Why is it different if it’s a digital file held on an electronic device?

This is the kind of question I used to get asked back when these copyright battles were new. “You’re a freelance writer,” said the first person I interviewed on this sort of subject, back in 1991; he was the new head of the Federation Against Software Theft. “You make your living from copyright. Why aren’t you against piracy?” (Or something close to that.)

At the time the big battle in freelance journalism was that publishers were pushing toward all-rights contracts that would let them use whatever we wrote forever without further payment. Freelances were trying to hang onto the old arrangement, under which the publisher just got the right to run the piece once (and *first*), and then the freelance could go on and resell the piece in whole or in part to others and in other markets. Columnists made money by compiling their pieces into books. Magazine writers made money by reselling to other countries or selling reworked versions to specialist publications.

By 1995 you couldn’t really make money that way any more. Today, younger freelances have little idea it was ever possible. This, again, is the future the recent SAG-AFTRA strikes were trying to avoid. The shift is more simply described like this: the old way was pay per use; the new way the studios want is pay once, use forever. This struggle is endemic to every industry, as SAG head Fran Drescher pointed out.

The exact opposite is what’s happening to consumer access. In the old way, because buying physical media conferred ownership of the media (and the fact that the content was only ever licensed was largely moot), consumers bought once and used as much as they wanted until the disc or tape wore out. Even if streaming doesn’t quite open the way for paying for every use (though I bet that’s the hope), it does grant remote control to anyone who has access to the device – even if you thought you only granted permission to put stuff there, not remove it.

If I remember correctly, the first time people realized this kind of power existed was in 2009, when Amazon deleted (irony of ironies) copies of George Orwell’s novel 1984 from thousands of Kindles because the third-party company selling the ebook did not in fact have the rights to it. In this particular case, Amazon did refund the money people had paid. Since then, there’s been a steady trickle of cases where ultimate control of the device stays with its maker and doesn’t transfer to the person who paid to buy it.

You might think that the solution is to go on (or back to) buying the entertainment you love on physical media…but that option is also under threat. Disney announced in July that it would stop selling DVDs and Blu-Ray discs in Australia. In the US, Best Buy is about to stop carrying them. Add in the recent trend for deleting even successful shows for tax reasons and the unpredictability of which streaming service might have the thing you’re looking for, and you have an extremely consumer-hostile industry.

For consumers, the perfect service looks something like this: the library is, if not complete, *very* extensive, all indexed in one place, and easily searchable using a simple but effective interface. Downloads are quick and give you a file you can move around, replay, or copy to friends at will. There are no ads. It will play on any device that can play video. Repeated viewings don’t require an Internet connection. *That* is what piracy offers. It’s not that it’s free. It’s that it gives people what they want. And the worse commercial services become, the better piracy looks. If only it paid the artists…

Illustrations: Opera Australia performing The Pirates of Penzance in 2007 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

The good fight

This week saw a small gathering to celebrate the 25th anniversary (more or less) of the Foundation for Information Policy Research, a think tank led by Cambridge and Edinburgh University professor Ross Anderson. FIPR’s main purpose is to produce tools and information that campaigners for digital rights can use. Obdisclosure: I am a member of its advisory council.

What, Anderson asked those assembled, should FIPR be thinking about for the next five years?

When my turn came, I said something about the burnout that comes to many campaigners after years of fighting the same fights. Digital rights organizations – Open Rights Group, EFF, Privacy International, to name three – find themselves trying to explain the same realities of math and technology decade after decade. Small wonder so many burn out eventually. The technology around the debates about copyright, encryption, and data protection has changed over the years, but in general the fundamental issues have not.

In part, this is because what people want from technology doesn’t change much. A tangential example of this presented itself this week, when I read the following in the New York Times, written by Peter C Baker about the “Beatles'” new mash-up recording:

“So while the current legacy-I.P. production boom is focused on fictional characters, there’s no reason to think it won’t, in the future, take the form of beloved real-life entertainers being endlessly re-presented to us with help from new tools. There has always been money in taking known cash cows — the Beatles prominent among them — and sprucing them up for new media or new sensibilities: new mixes, remasters, deluxe editions. But the story embedded in “Now and Then” isn’t “here’s a new way of hearing an existing Beatles recording” or “here’s something the Beatles made together that we’ve never heard before.” It is Lennon’s ideas from 45 years ago and Harrison’s from 30 and McCartney and Starr’s from the present, all welded together into an officially certified New Track from the Fab Four.”

I vividly remembered this particular vision of the future because just a few days earlier I’d had occasion to look it up – a March 1992 interview for Personal Computer World with the ILM animator Steve Williams, who the year before had led the team that produced the liquid metal man for the movie Terminator 2. Williams imagined CGI would become pervasive (as it has):

“…computer animation blends invisibly with live action to create an effect that has no counterpart in the real world. Williams sees a future in which directors can mix and match actors’ body parts at will. We could, he predicts, see footage of dead presidents giving speeches, films starring dead or retired actors, even wholly digital actors. The arguments recently seen over musicians who lip-synch to recordings during supposedly ‘live’ concerts are likely to be repeated over such movie effects.”

Williams’ latest work at the time was on Death Becomes Her. Among his calmer predictions was that as CGI became increasingly sophisticated the boundary between computer-generated characters and enhancements would become invisible. Thirty years on, the big excitement recently has been Harrison Ford’s deaging for Indiana Jones and the Dial of Destiny. That used CGI, AI, and other tools to digitally swap in his face from 1980s footage.

Side note: in talking about the Ford work to Wired, ILM supervisor Andrew Whitehurst, exactly like Williams in 1992, called the new technology “another pencil”.

Williams also predicted endless legal fights over copyright and other rights. That at least was spot-on; AI and the perpetual reuse of retained footage without further payment is part of what the recent SAG-AFTRA strikes were about.

Yet, the problem here isn’t really technology; it’s the incentives. The businessfolk of Hollywood’s eternal desire is to guarantee their return on investment, and they think recycling old successes is the safest way to do that. Closer to digital rights, law enforcement always wants greater access to private communications; the frustration is that incoming generations of politicians don’t understand the laws of mathematics any better than their predecessors in the 1990s.

Many of the speakers focused on the issue of getting government to listen to and understand the limits of technology. Increasingly, though, a new problem is that, as Bruce Schneier writes in his latest book, A Hacker’s Mind, everyone has learned to think like hackers and subvert the systems they’re supposed to protect. The Silicon Valley mantra of “ask forgiveness, not permission” has become pervasive, whether it’s a technology platform deciding to collect masses of data about us or a police force deciding to stick a live facial recognition pilot next to Oxford Circus tube station. Except no one asks for forgiveness either.

Five years ago, at FIPR’s 20th anniversary, when GDPR was new, Anderson predicted (correctly) that the battles over encryption would move to device access. Today, it’s less clear what’s next. Facial recognition represents a step change; it overrides consent and embeds distrust in our public infrastructure.

If I were to predict the battles of the next five years, I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion,” Euronews.next calls it. That habit of mind is dangerous.

Illustrations: The liquid metal man in Terminator 2 reconstituting itself.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon