The UK’s public consultation on creating a copyright exception for AI model training closed on Tuesday, and it was profoundly unsatisfying.
Many, many creators and rights holders (who are usually on opposing sides when it comes to contract negotiations) have opposed the government’s proposals. Every national newspaper ran the same Make It Fair front page opposing them; musicians released a silent album. In the Guardian, the peer and independent filmmaker Beeban Kidron calls the consultation “fixed” in favor of the AI companies. Kidron’s resume includes directing Bridget Jones: The Edge of Reason (2004) and the meticulously researched 2013 study of teens online, InRealLife, and she goes on to call the government’s preferred option a “wholesale transfer of wealth from a hugely successful sector that invests hundreds of millions in the UK to a tech industry that extracts profit that is not assured and will accrue largely to the US and indeed China.”
The consultation lists four options: leave the situation as it is (option 0); require AI companies to get licenses to use copyrighted work, like everyone else has to (option 1); allow AI companies to use copyrighted works however they want (option 2); and allow them to use copyrighted works but grant rights holders the right to opt out (option 3).
I don’t like any of these options. I do believe that creators will figure out how to use AI tools to produce new and valuable work. I *also* believe that rights holders will go on doing their best to use AI to displace or impoverish creators. That is already happening in journalism and voice acting, and was a factor in the 2023 Hollywood writers’ strike. AI companies have already shown that they won’t necessarily abide by arrangements that lack the force of law. The UK government acknowledged this in its consultation document, saying that “more than 50% of AI companies observe the longstanding Internet convention robots.txt.” So almost half of them *don’t*.
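For readers who haven’t met the convention: robots.txt is a plain-text file served at a site’s root that asks crawlers, identified by user-agent name, to stay out of some or all of the site. Compliance is entirely voluntary, which is exactly the problem that 50% figure exposes. A minimal sketch of what a site might publish (GPTBot is OpenAI’s documented crawler name; example.com is, of course, a placeholder):

```
# robots.txt — served at https://example.com/robots.txt
# Ask OpenAI's crawler to stay away entirely
User-agent: GPTBot
Disallow: /

# All other crawlers may index everything
User-agent: *
Disallow:
```

Nothing enforces any of this; a crawler that ignores the file meets no technical barrier, which is why the convention only works among companies that choose to honor it.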
At Pluralistic, Cory Doctorow argued in February 2023 that copyright won’t solve the problems facing creators. His logic is simple: after 40 years of expanding copyright terms (from a maximum of 56 years in 1975 to “author’s life plus 70” now), creators are being paid *less* than they were then. Yes, I know Taylor Swift has broken records for tour revenues and famously took back control of her own work, but millions of others need, as Doctorow writes, structural market changes. Doctorow highlights what happened with sampling: the copyright maximalists won, and now musicians are required to sign away sampling rights to their labels, who pocket the resulting royalties.
For this sort of reason, the status quo, which the consultation calls “option 0”, seems likely to open the way to lots more court cases and conflicting decisions while providing little benefit to anyone. A licensing regime (“option 1”) will likely go the way of sampling. If you think of AI companies as inevitably giant “pre-monopolized” outfits, as Vladan Joler did at last year’s Computers, Privacy, and Data Protection conference, “option 2” looks like simply making them richer and more powerful at the expense of everyone else in the world. But so does “option 3”, since that *also* gives AI companies the ability to use anything they want. Large rights holders will opt out and demand licensing fees, which they will keep, and small ones will struggle to exercise their rights.
As Kidron said, the government’s willingness to take chances with the country’s creators’ rights is odd, since intellectual property is a sector in which Britain really *is* a world leader. On the other hand, as Walled Culture’s Glyn Moody says, all of it together is an anthill compared to the technology sector.
None of these choices is a win for creators or the public. The government’s preferred option 3 seems unlikely to achieve its twin goals of making Britain a world leader in AI and, as the government put it last month, mainlining AI into the veins of the nation.
China and the US both have complete technology stacks *and* gigantic piles of data. The UK is better placed than many countries to matter in AI development – see for example DeepMind, which was founded here in 2010. On the other hand, also see DeepMind for the probable future: Google bought it in 2014, and now its technology and profits belong to that giant US company.
Writing at Walled Culture last May, Moody argued that requiring the AI companies to pay the copyright industries makes no sense; he regards using creative material for training purposes as “just a matter of analysis” that should not require permission. And, he says correctly, there isn’t enough such material anyway. Instead, he and Mike Masnick at Techdirt propose that the generative AI companies should pay creators of all types – journalists, musicians, artists, filmmakers, book authors – to provide them with material they can use to train their models, and that the material so created be placed in the public domain. In turn, it could become new building blocks the public can use to produce even more new material. As a model for supporting artists, patronage is old.
I like this effort to think differently a lot better than any of the government’s options.
Illustrations: Tuesday’s papers, unprecedentedly united to oppose the government’s copyright plan.
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.