When an individual user does it, it’s piracy. When a major company does it…it may just get away with it.
At TechCrunch, Kyle Wiggers reports that buried in newly unredacted documents in the copyright case Kadrey v. Meta is testimony that Meta trained its Llama language model on a dataset of ebooks it torrented from LibGen. Two issues here. First, LibGen has been sued numerous times, fined, and ordered to shut down. Second, torrenting a file simultaneously uploads it to others. So, allegedly, Meta knowingly pirated copyrighted books to train its language model.
Kadrey v. Meta was brought by novelist Richard Kadrey, writer Christopher Golden, and comedian Sarah Silverman, and is one of a number of cases accusing technology companies of training language models on copyrighted works without permission. Meta claims fair use. Still, not a good look.
***
Coincidentally, this week CEO Mark Zuckerberg announced changes to the company’s content moderation policies in the US (for now), a move widely seen as pandering to the incoming administration. The main changes announced in Zuckerberg’s video clip: Meta will replace fact-checkers (“too politically biased”) with a system of user-provided “community notes” as on exTwitter, remove content restrictions that “shut out people with different ideas”, dial back its automated filters to focus solely on illegal content, rely on user reports to identify material that should be taken down, bring back political content, and move its trust and safety and content moderation teams from California to Texas (“where there is less concern about the bias of our teams”). He also pledges to work with the incoming president to “push back on governments around the world that are going after American companies and pushing to censor more”.
Journalists and fact-checkers are warning that misinformation and disinformation will be rampant, and many are alarmed by the specifics of what people are now allowed to say. Zuckerberg frames all this as a “return” to free expression while acknowledging that “we’re going to catch less bad stuff”.
At Techdirt, Mike Masnick begins as an outlier, arguing that many of these changes are actually sensible, though he calls the reasoning behind the Texas move “stupid” and deplores Zuckerberg’s claim that this is about “free speech” and removing “censorship”. A day later, after seeing the company’s internal guidelines unearthed by Kate Knibbs at Wired, he condemns the new moderation policy: “hateful people are now welcome”.
More interesting for net.wars purposes is the international aspect. As the Guardian says, Zuckerberg can’t bring these changes across to the EU or UK without colliding headlong with the UK’s Online Safety Act and the EU’s Digital Services Act, both of which lay down content moderation requirements for the largest platforms.
And yet, it’s possible that Zuckerberg also thinks these changes help lay the groundwork for meeting the EU and UK requirements. Meta will still remove illegal content, which it’s required to do anyway. But he may see a benefit in dialing back users’ expectations about what else Meta will remove, since platforms must conform to the rules they set in their own terms and conditions. Notice-and-takedown is an easier standard to meet than performance indicators for automated filters, and likely cheaper, too. This approach is, however, the opposite of what critics like Open Rights Group have predicted the law will bring; ORG believes platforms will instead over-moderate in order to stay out of trouble, chilling free speech.
Related is an interesting piece at Henry Farrell’s Programmable Matter newsletter, in which he argues that the more important social media speech issue is that what we read there shapes how we imagine others think, rather than how we ourselves think. In other words, misinformation, disinformation, and hate speech change what we think is normal, expanding the window of what we believe other people find acceptable. That resonates for me: the worst thing about prominent trolls is that they give everyone else permission to behave as badly as they do.
***
It’s now 25 years since I heard a privacy advocate predict that the EU’s then-new data protection rights could become the basis of a trade war with the US. Instead, while the EU and US have kept trying to find a workaround that will withstand legal challenge from Max Schrems, the two approaches have continued to diverge, and in more ways than one.
For example, last week, in the long-running battle over network neutrality, judges on the US Sixth Circuit Court of Appeals ruled that the Federal Communications Commission was out of line when it announced rules in 2023 that classified broadband suppliers as common carriers under Title II of the Communications Act (1934). This judgment follows from the Supreme Court’s 2024 decision to overturn Chevron deference, setting courts free to overrule government agencies’ expertise. And that means the end in the US (until or unless Congress legislates) of network neutrality, the principle that all data flowing across the Internet is created equal and should be transmitted without fear or favor. Network neutrality persists in California, Washington, and Colorado, whose legislatures have passed laws to protect it.
China has taught us that the Internet is more divisible by national law than many thought in the 1990s. Copyright law may be the only thing everyone agrees on.
Illustrations: Drunk parrot in a South London garden (by Simon Bisson; used by permission).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.