In the US this week the Supreme Court heard arguments in two cases centered on Section 230, the law that shields online platforms from liability for third-party content. In Paris, UNESCO convened Internet for Trust to bring together governments and civil society to contemplate global solutions to the persistent problems of Internet regulation. And in the business of cyberspace, Twitter, in what looks like desperation to stay afloat, began barring non-paying users (that is, the 99.8% of its user base that *doesn't* subscribe to Twitter Blue) from using two-factor authentication via SMS, and Meta announced plans for a Twitter Blue-like subscription service for its Facebook, Instagram, and WhatsApp platforms.
In other words, the above policy discussions are happening exactly at the moment when, for the first time in nearly two decades, two of the platforms whose influence everyone is most worried about may be beginning to implode. Twitter’s issues are well-known. Meta’s revenues are big enough that there’s a long way for them to fall…but the company is spending large fortunes on developing the Metaverse, which no one may want, and watching its ad sales shrink and data protection fines rise.
The SCOTUS hearings – Gonzalez v. Google (see the experts' live blog) and Twitter v. Taamneh – have been widely covered in detail. Most writers note that trying to discern the court's eventual ruling from the justices' questions is about as accurate as reading tea leaves. Nonetheless, Columbia professor Tim Wu predicts that Gonzalez will lose but that Taamneh could be very close.
In Gonzalez, the parents of a 23-year-old student killed in a 2015 ISIS attack in Paris argue that YouTube should be liable for radicalizing individuals via videos found and recommended on its platform. In Taamneh, the family of a Jordanian citizen who died in a 2017 ISIS attack in Istanbul sued Twitter, Google, and Facebook under anti-terrorism laws for failing to control terrorist content on their sites. A ruling assigning liability in either case could be consequential for S230. At TechDirt, Mike Masnick has an excellent summary of the Gonzalez hearing, as well as a preview of both cases.
Taamneh, on the other hand, asks whether social media sites are "aiding and abetting" terrorism via their recommendation engines under Section 2333 of the Anti-Terrorism Act. Under the Justice Against Sponsors of Terrorism Act (2016), which amended it, any US national who is injured by an act of international terrorism can sue anyone who "aids and abets by knowingly providing substantial assistance" to anyone committing such an act. The case turns on how much Twitter knows about its individual users and what constitutes substantial assistance. There has been some concern, expressed in amicus briefs, that making online intermediaries liable for terrorist content will result in overzealous content moderation. Lawfare has a good summary of the cases and the amicus briefs they've attracted.
Contrary to what many people seem to think, while S230 allows content moderation, it is not a law that disproportionately protects large platforms, which didn't exist when it was enacted. As cybersecurity law professor Jeff Kosseff tells Gizmodo: without liability protection a local newspaper or personal blog could not risk publishing reader comments, and Wikipedia could not function. Justice Elena Kagan has been mocked for saying the justices are "not the nine greatest experts on the Internet", but she grasped perfectly that undermining S230 could create "a world of lawsuits".
For the last few years, both Democrats and Republicans have called for S230 reform, but for different reasons: Democrats fret about the proliferation of misinformation; Republicans complain that they ("conservative voices") are being censored. The UNESCO event took a broader, global view, aiming to draft a framework for platform self-regulation. While such a framework wouldn't be binding, there's some value in having a multi-stakeholder-agreed standard against which individual governments' proposals can be evaluated. One of the big gaps in the UK's Online Safety bill, for example, is its failure to tackle misinformation or disinformation campaigns. Neither reforming S230 nor a self-regulation framework will solve that problem: over the last few years too much of the most widely-disseminated disinformation has been posted from official accounts belonging to world leaders.
One interesting aspect is how many new types of "content" have been created since S230's passage in 1996, when the dominant web analogy was print publishing. It's not just recommendation algorithms; are "likes" third-party content? Are the thumbnails YouTube's algorithm selects to entice each visitor on its front page a matter of presentation or of publishing?
In his biography of S230, The Twenty-Six Words That Created the Internet, Kosseff notes that although similar provisions exist in other countries' legislation, S230 is uniquely American in the extreme extent to which it privileges freedom of speech; most other countries aim for more of a balance between freedom of expression and privacy. In 1997, it was easy to believe that S230 enabled the Internet to export the US's First Amendment around the world like a stowaway. Today, it seems more like the first answer to an eternally-recurring debate. Despite its problems, like democracy itself, it may continue to be the least-worst option.
Illustrations: US senator and S230 co-author Ron Wyden (D-OR) in 2011 (by JS Lasica via Wikimedia).
Wendy M. Grossman is the 2013 winner of the Enigma Award. Her Web site has an archive of earlier columns back to 2001. Follow on Mastodon or Twitter.