Last year’s news

It was tempting to skip wrapping up 2023, because at first glance large language models seemed so thoroughly dominant (and boring to revisit), but bringing the net.wars archive list up to date showed a different story. To be fair, this is partly personal bias: from the beginning LLMs seemed fated to collapse under the weight of their own poisoning; AI Magazine predicted such an outcome as early as June.

LLMs did, however, seem to accelerate public consciousness of three long-running causes of concern: privacy and big data; corporate cooption of public and private resources; and antitrust enforcement. That acceleration may be LLMs’ more important long-term effect. In the short term, the justifiably bigger concern is their propensity to spread disinformation and misinformation in the coming year’s many significant elections.

Enforcement of data protection laws has been slowly ramping up in any case, and the fines just keep getting bigger, culminating in May’s fine against Meta for €1.2 billion. Given that fines, no matter how large, seem insignificant compared to the big technology companies’ revenues, the more important trend is issuing constraints on how they do business. That May fine came with an order to stop sending EU citizens’ data to the US. Meta responded in October by announcing a subscription tier for European Facebook users: €160 a year will buy freedom from ads. Freedom from Facebook remains free.

But Facebook is almost 20 years old; it had years in which to grow without facing serious regulation. By contrast, ChatGPT, which OpenAI launched just over a year ago, has already faced investigation by the US Federal Trade Commission and been banned temporarily by the Italian data protection authority (it was reinstated a month later with conditions). It’s also facing more than a dozen lawsuits claiming copyright infringement; the most recent of these was filed just this week by the New York Times. It has settled one of these suits by forming a partnership with Axel Springer.

It all suggests a lessening tolerance for “ask forgiveness, not permission”. As another example, Clearview AI has spent most of the four years since Kashmir Hill alerted the world to its existence facing regulatory bans and fines, and public disquiet over the rampant spread of live facial recognition continues to grow. Add in the continuing degradation of exTwitter, the increasing number of friends who say they’re dropping out of social media generally, and the revival of US antitrust actions with the FTC’s suit against Amazon, and it feels like change is gathering.

It would be a logical time, for an odd reason: each of the last few decades as seen through published books has had a distinctive focus with respect to information technology. I discovered this recently when, for various reasons, I reorganized my hundreds of books on net.wars-type subjects dating back to the 1980s. How they’re ordered matters: I need to be able to find things quickly when I want them. In 1990, a friend’s suggestion of categorizing by topic seemed logical: copyright, privacy, security, online community, robots, digital rights, policy… The categories quickly broke down and cross-pollinated. In rebuilding the library, what to replace it with?

The exercise, which led to alphabetizing by author’s name within decade of publication, revealed that each of the last few decades has been distinctive enough that it’s remarkably easy to correctly identify a book’s decade without turning to the copyright page to check. The 1980s and 1990s were about exploration and explanation. Hype led us into the 2000s, which were quieter in publishing terms, though marked by bursts of business books that spanned the dot-com boom, bust, and renewal. The 2010s brought social media, content moderation, and big data, and a new set of technologies to hype, such as 3D printing and nanotechnology (about which we hear nothing now). The 2020s, it’s too soon to tell…but safe to say disinformation, AI, and robots are dominating these early years.

The 2020s books to date are trying to understand how to rein in the worst effects of Big Tech: online abuse, cryptocurrency fraud, disinformation, the loss of control as even physical devices turn into manufacturer-controlled subscription services, and, as predicted in 2018 by Christian Wolmar, the ongoing failure of autonomous vehicles to take over the world as projected just ten years ago.

While Teslas are not autonomous, the company’s Silicon Valley ethos has always made them seem more like information technology than cars. Bad idea, as Reuters reports; its investigation found a persistent pattern of mishaps such as part failures and wheels falling off – and an equally persistent pattern of the company blaming the customer, even when the car was brand new. If we don’t want shoddy goods and data invasion with everything to be our future, fighting back is essential. In 2032, I hope looking back shows that story.

The good news going into 2024 is, as the Center for the Study of the Public Domain at Duke University, the Public Domain Review, and Cory Doctorow write, the bumper crop of works entering the public domain: sound recordings (for the first time in 40 years), DH Lawrence’s Lady Chatterley’s Lover, Agatha Christie’s The Mystery of the Blue Train, Ben Hecht and Charles MacArthur’s play The Front Page, and the first Mickey Mouse cartoons. Happy new year.

Illustrations: Promotional still from the 1928 production of The Front Page, which enters the public domain on January 1, 2024 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

A surveillance state of mind

”Do computers automatically favor authoritarianism?” a friend asked recently. Or, are they fundamentally anti-democratic?

Certainly, at the beginning, many thought that both the Internet and personal computers (think, for example, of Apple’s famed Super Bowl ad, “1984”) would favor democratic ideals by embedding values such as openness, transparency, and collaborative policy-making in their design. Universal access to information and to networks of distribution was always going to have downsides, but on balance was going to be a Good Thing (actually, I still believe this). So, my friend was asking, were those hopes always fundamentally absurd, or were the problems of disinformation and widespread installation of surveillance technology always inevitable for reasons inherent in the technology itself?

Computers, like all technology, are what we make them. But one fundamental characteristic does seem to me unavoidable: they upend the distribution of data-related costs. In the physical world, more data always involved more expense: storing it required space, and copying or transmitting it took time, ink, paper, and personnel. In the computer world, more data is only marginally more expensive, and what costs remain have kept falling for 70 years. For most purposes, more digital data incurs minimal costs. The expenses of digital data only kick in when you curate it: selection and curation take time and personnel. So the easiest path with computer data is always to keep it. In that sense, computers inevitably favor surveillance.

The marketers at companies that collect this data try to argue that collection is a public *good* because it enables them to offer personalized services that benefit us. Underneath, of course, there are too many economic incentives for them not to “share” – that is, sell – it onward, creating an ecosystem that sends our data careening all over the place, and where “personalization” becomes “surveillance” and then, potentially, “malveillance”, which is definitely not in our interests.

At a 2011 workshop on data abuse, participants noted that the mantra of the day was “the data is there, we might as well use it”. At the time, there was a definite push from the industry to move from curbing data collection to regulating its use instead. But this is the problem: data is tempting. This week has provided a good example of just how tempting, in the form of a provision in the UK’s criminal justice bill that will allow police to use the database of driver’s license photos for facial recognition searches. “A permanent police lineup,” privacy campaigners are calling it.

As long ago as 1996, the essayist and former software engineer Ellen Ullman called out this sort of temptation, describing it as a system “infecting” its owner. Data tempts those with access to it to ask questions they couldn’t ask before. In many cases that’s good. Data enables Patrick Ball’s Human Rights Data Analysis Group to establish “who did what to whom” in cases of human rights abuse. But, in the downside in Ullman’s example, it undermines the trust between a secretary and her boss, who realizes he can use the system to monitor her work, despite prior decades of trust. In the UK police example, the downside is tempting the authorities to combine the country’s extensive network of CCTV images and the largest database of photographs of UK residents. “Crime scene investigations,” say police and ministers. “Chill protests,” the rest of us predict. In a story I’m writing for the successor to the Cybersalon anthology Twenty-Two Ideas About the Future, I imagined a future in which police have the power and technology to compel every camera in the country to join a national network they control. When it fails to solve an important crime of the day, they successfully argue it’s because the network’s availability was too limited.

The emphasis on personalization as a selling point for surveillance – if you turn it off you’ll get irrelevant ads! – is a reminder that studies of astrology starting in 1949 have found that people’s rating of their horoscopes varies directly with how personalized they perceive them to be. The horoscope they are told has been drawn up just for them by an astrologer gets much higher ratings than the horoscope they are told is generally true of people with their sun sign – even when it’s the *same* horoscope.

Personalization is the carrot businesses use to get us to feed our data into their business models; their privacy policies dictate the terms. Governments can simply compel disclosure as a requirement for a benefit we’re seeking – like the photo required to get a driver’s license, passport, or travel pass. Or, under greater duress, to apply for or await a decision about asylum, or try to cross a border.

“There is no surveillance state,” then-Home Secretary Theresa May said in 2014. No, but if you put all the pieces in place, a future government with a malveillance state of mind can turn it on at will.

So, going back to my friend’s question. Yes, of course we can build the technology so that it favors democratic values instead of surveillance. But because of that fundamental characteristic that makes creating and retaining data the default, and because of the business incentives currently exploiting the results, doing so requires effort and thought. It is easier to surveil. Malveillance, however, requires power and a trust-no-one state of mind. That’s hard to design out.

Illustrations: The CCTV camera at 22 Portobello Road, where George Orwell lived circa 1927.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Game of carrots

The big news of the week has been the result of the Epic Games v. Google antitrust trial. A California jury took four hours to agree with Epic that Google had illegally tied together its Play Store and billing service, so that app makers could only use the Play Store to distribute their apps if they also used Google’s service for billing, giving Google a 30% commission. Sort of like, I own half the roads in this town, and if you want to sell anything to my road users you have to have a store in my mall and pay me a third of your sales revenue, and if you don’t like it, tough, because you can’t reach my road users any other way. Meanwhile, the owner of the other half of the town’s roads is doing exactly the same thing, so you can’t win.

At his BIG Substack, antitrust specialist Matt Stoller, who has been following the trial closely, gloats, “the breakup of Big Tech begins”. Maybe not so fast: Epic lost its similar case against Apple. Both of these cases are subject to appeal. Stoller suggests, however, that the latest judgment will carry more weight because it came from a jury of ordinary citizens rather than, as in the Apple case, a single judge. Stoller believes the precedent set by a jury trial is harder to ignore in future cases.

At The Verge, Sean Hollister, who has been covering the trial in detail, offers a summary of 20 key points he felt the trial established. Written before the verdict, Hollister’s assessment of Epic’s chances proved correct.

Even if the judgment is upheld in the higher courts, it will be a while before users see any effects. But: even if the judgment is overturned in the higher courts, my guess is that the technology companies will begin to change their behavior at least a bit, in self-defense. The real question is, what changes will benefit us, the people whose lives are increasingly dominated by these phones?

I personally would like it to be much easier to use an Android phone without ever creating a Google account, and to be confident that the phone isn’t sending masses of tracking data to either Google or the phone’s manufacturer.

But…I would still like to be able to download the apps I want from a source I can trust. I care less about who provides the source than I do about what data they collect about me and the cost.

I want that source to be easy to access, easy to use, and well-stocked, defining “well-stocked” as “has the apps I want” (which, granted, is a short list). The nearest analogy that springs to mind is TV channels. You don’t really care what channel the show you want to watch is on; you just want to be able to watch the show without too much hassle. If there weren’t so many rights holders running their own streaming services, the most sensible business logic would be for every show to be on every service. Then instead of competing on their catalogues, the services would be competing on privacy, or interface design, or price. Why shouldn’t we have independent app stores like that?

Mobile phones have always been more tightly controlled than the world of desktop computing, largely because they grew out of the tightly controlled telecommunications world. Desktop computing, like the Internet, served first the needs of the military and academic research, and they remain largely open even when they’re made by the same companies who make mobile phone operating systems. Desktop systems also developed at a time when American antitrust law still sought to increase competition.

It did not stay that way. As current FTC chair Lina Khan made her name pointing out in 2017, antitrust thinking for the last several decades has been limited to measuring consumer prices. The last big US antitrust case to focus on market effects was Microsoft, back in 1998. In the years since, it’s been left to the EU to act as the world’s antitrust enforcer. Against Google, the EU has filed three cases since 2010: over Shopping (Google was found guilty in 2017 and fined €2.4 billion, upheld on appeal in 2021); Android, over Google apps and the Play Store (Google was found guilty in 2018, fined €4.3 billion, and required to change some of its practices); and AdSense (fined €1.49 billion in 2019). But fines – even if the billions eventually add up to real money – don’t matter enough to companies with revenues the size of Google’s. Being ordered to restructure its app store might.

At the New York Times, Steve Lohr compares the Microsoft and Epic v Google cases. Microsoft used its contracts with PC makers to prevent them from preinstalling its main web browser rival, Netscape, in order to own users’ path into the accelerating digital economy. Google’s contracts instead paid Apple, Samsung, Mozilla, and others to favor it on their systems – “carrots instead of sticks,” NYU law professor Harry First told Lohr.

The best thing about all this is that the Epic jury was not dazzled by the incomprehensibility effect of new technology. Principles are coming back into focus. Tying – leveraging your control over one market in order to dominate another – is no different when it’s done with app stores than when it’s done with gas stations or movie theaters.

Illustrations: “The kind of anti-trust legislation that is needed”, by J.S. Pughe (via Library of Congress).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

Property is theft

If you were to judge just by behavior, you would have to conclude that the entertainment industry’s rights holders are desperate to promote piracy.

The latest instance is that Sony has warned American Playstation owners that shows they purchased – *bought* – from Discovery will, now that Discovery has merged with Warner Brothers, be removed from their video libraries. This isn’t like Netflix losing the license to stream your current favorite show halfway through season 2, which you can maybe fix by joining whichever streaming service the show is now on (assuming there is one). No, this is something you (thought you) bought, and they took it away.

In other words, the entertainment industry has taken the old anarchist slogan property is theft and turned it into a business model.

This isn’t a one-time occurrence. As Timothy Geigner writes at TechDirt, in 2022 customers in Germany and Austria lost access to hundreds of movies when a deal between Sony and film distributor Studio Canal expired. As in the Warner Brothers/Discovery case, it’s not just that the movies were removed from the list available for purchase; the long, remote arm of Sony reached into individual Playstations and removed them from there, too.

If Warner Brothers sent a minion to come into my house, take a DVD from a shelf, and take it away, that would clearly be theft, even if I had given the company a key so it could come in and update my Blu-Ray player. Why is it different if it’s a digital file held on an electronic device?

This is the kind of question I used to get asked back when these copyright battles were new. “You’re a freelance writer,” said the first person I interviewed on this sort of subject, back in 1991; he was the new head of the Federation Against Software Theft. “You make your living from copyright. Why aren’t you against piracy?” (Or something close to that.)

At the time the big battle in freelance journalism was that publishers were pushing toward all-rights contracts that would let them use whatever we wrote forever without further payment. Freelances were trying to hang onto the old arrangement, under which the publisher just got the right to run the piece once (and *first*), and then the freelance could go on and resell the piece in whole or in part to others and in other markets. Columnists made money by compiling their pieces into books. Magazine writers made money by reselling to other countries or selling reworked versions to specialist publications.

By 1995 you couldn’t really make money that way any more. Today, younger freelances have little idea it was ever possible. This, again, is the future the recent SAG-AFTRA strikes were trying to avoid. The shift is more simply described like this: the old way was pay per use; the new way the studios want is pay once, use forever. This struggle is endemic to every industry, as SAG head Fran Drescher pointed out.

The exact opposite is what’s happening to consumer access. In the old way, because buying physical media conferred ownership of the media (and the fact that the content was only ever licensed was largely moot), consumers bought once and used as much as they wanted until the disc or tape wore out. Even if streaming doesn’t quite open the way for paying for every use (though I bet that’s the hope), it does grant remote control to anyone who has access to the device – even if you thought you only granted permission to put stuff there, not remove it.

If I remember correctly, the first time people realized this kind of power existed was in 2009, when Amazon deleted (irony of ironies) copies of George Orwell’s novel 1984 from thousands of Kindles because the third-party company selling the ebook did not in fact have the rights to it. In this particular case, Amazon did refund the money people had paid. Since then, there’s been a steady trickle of cases where ultimate control of the device stays with its maker and doesn’t transfer to the person who paid to buy it.

You might think that the solution is to go on (or back to) buying the entertainment you love on physical media…but that option is also under threat. Disney announced in July that it would stop selling DVDs and Blu-Ray discs in Australia. In the US, Best Buy is about to stop carrying them. Add in the recent trend for deleting even successful shows for tax reasons and the unpredictability of which streaming service might have the thing you’re looking for, and you have an extremely consumer-hostile industry.

For consumers, the perfect service looks something like this: the library is, if not complete, *very* extensive, all indexed in one place, and easily searchable using a simple but effective interface. Downloads are quick and give you a file you can move around, replay, or copy to friends at will. There are no ads. It will play on any device that can play video. Repeated viewings don’t require an Internet connection. *That* is what piracy offers. It’s not that it’s free. It’s that it gives people what they want. And the worse commercial services become, the better piracy looks. If only it paid the artists…

Illustrations: Opera Australia performing The Pirates of Penzance in 2007 (via Wikimedia).

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon

The good fight

This week saw a small gathering to celebrate the 25th anniversary (more or less) of the Foundation for Information Policy Research, a think tank led by Cambridge and Edinburgh University professor Ross Anderson. FIPR’s main purpose is to produce tools and information that campaigners for digital rights can use. Obdisclosure: I am a member of its advisory council.

What, Anderson asked those assembled, should FIPR be thinking about for the next five years?

When my turn came, I said something about the burnout that comes to many campaigners after years of fighting the same fights. Digital rights organizations – Open Rights Group, EFF, Privacy International, to name three – find themselves trying to explain the same realities of math and technology decade after decade. Small wonder so many burn out eventually. The technology around the debates about copyright, encryption, and data protection has changed over the years, but in general the fundamental issues have not.

In part, this is because what people want from technology doesn’t change much. A tangential example of this presented itself this week, when I read the following in the New York Times, written by Peter C Baker about the “Beatles'” new mash-up recording:

“So while the current legacy-I.P. production boom is focused on fictional characters, there’s no reason to think it won’t, in the future, take the form of beloved real-life entertainers being endlessly re-presented to us with help from new tools. There has always been money in taking known cash cows — the Beatles prominent among them — and sprucing them up for new media or new sensibilities: new mixes, remasters, deluxe editions. But the story embedded in “Now and Then” isn’t “here’s a new way of hearing an existing Beatles recording” or “here’s something the Beatles made together that we’ve never heard before.” It is Lennon’s ideas from 45 years ago and Harrison’s from 30 and McCartney and Starr’s from the present, all welded together into an officially certified New Track from the Fab Four.”

I vividly remembered this particular vision of the future because just a few days earlier I’d had occasion to look it up – a March 1992 interview for Personal Computer World with the ILM animator Steve Williams, who the year before had led the team that produced the liquid metal man for the movie Terminator 2. Williams imagined CGI would become pervasive (as it has):

“…computer animation blends invisibly with live action to create an effect that has no counterpart in the real world. Williams sees a future in which directors can mix and match actors’ body parts at will. We could, he predicts, see footage of dead presidents giving speeches, films starring dead or retired actors, even wholly digital actors. The arguments recently seen over musicians who lip-synch to recordings during supposedly ‘live’ concerts are likely to be repeated over such movie effects.”

Williams’ latest work at the time was on Death Becomes Her. Among his calmer predictions was that as CGI became increasingly sophisticated the boundary between computer-generated characters and enhancements would become invisible. Thirty years on, the big excitement recently has been Harrison Ford’s deaging for Indiana Jones and the Dial of Destiny. That used CGI, AI, and other tools to digitally swap in his face from 1980s footage.

Side note: in talking about the Ford work to Wired, ILM supervisor Andrew Whitehurst, exactly like Williams in 1992, called the new technology “another pencil”.

Williams also predicted endless legal fights over copyright and other rights. That at least was spot-on; AI and the perpetual reuse of retained footage without further payment is part of what the recent SAG-AFTRA strikes were about.

Yet, the problem here isn’t really technology; it’s the incentives. The businessfolk of Hollywood’s eternal desire is to guarantee their return on investment, and they think recycling old successes is the safest way to do that. Closer to digital rights, law enforcement always wants greater access to private communications; the frustration is that incoming generations of politicians don’t understand the laws of mathematics any better than their predecessors in the 1990s.

Many of the speakers focused on the issue of getting government to listen to and understand the limits of technology. Increasingly, though, a new problem is that, as Bruce Schneier writes in his latest book, The Hacker’s Mind, everyone has learned to think like hackers and subvert the systems they’re supposed to protect. The Silicon Valley mantra of “ask forgiveness, not permission” has become pervasive, whether it’s a technology platform deciding to collect masses of data about us or a police force deciding to stick a live facial recognition pilot next to Oxford Circus tube station. Except no one asks for forgiveness either.

Five years ago, at FIPR’s 20th anniversary, when GDPR was new, Anderson predicted (correctly) that the battles over encryption would move to device access. Today, it’s less clear what’s next. Facial recognition represents a step change; it overrides consent and embeds distrust in our public infrastructure.

If I were to predict the battles of the next five years, I’d look at the technologies being deployed around European and US borders to surveil migrants. Migrants make easy targets for this type of experimentation because they can’t afford to protest and can’t vote. “Automated suspicion” is one name for it. That habit of mind is dangerous.

Illustrations: The liquid metal man in Terminator 2 reconstituting itself.

Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon