A quarter-century after Congress decided to hold publishers harmless for third-party content posted on their websites, we are headed for a legal and constitutional showdown over Section 230, part of the Communications Decency Act of 1996.
Before the law was passed, publishers worried that if they removed some harmful content they might be held liable for failing to take down other content, which gave them a legal incentive to leave libel, obscenity, hate speech and misinformation in place. Section 230 solved that with a so-called Good Samaritan provision that allowed publishers to pick and choose what to take down without incurring liability.
Back in those early days, of course, we weren’t dealing with behemoths like Facebook, YouTube and Twitter, which use algorithms to boost whatever content keeps their users engaged, which usually means speech that makes them angry or upset. In the mid-1990s, the publishers seeking protection were generally newspapers that had opened up online comments and nascent online services like Prodigy and AOL. Publishers have always been fully liable for content over which they have direct control, including news stories, advertisements and letters to the editor, but Congress understood that the flood of third-party content being posted online raised different issues.
But after Twitter booted Donald Trump off its service and Facebook suspended him for inciting violence during and after the attempted insurrection of Jan. 6, 2021, Trump-aligned Republicans began agitating against what they called censorship by the tech giants. Whether private companies are even legally capable of censorship is itself disputed, but the idea has gained some traction in legal circles, as we shall see.
Meanwhile, Democrats and liberals argued that the platforms weren’t acting aggressively enough to remove dangerous and harmful posts, especially those promoting disinformation around COVID-19 such as anti-masking and anti-vaccine propaganda.
A lot of this comes down to whether the platforms are common carriers or true publishers. Common carriers are legally forbidden from discriminating against any type of user or traffic; providers of telephone service are one example. Another is the broader internet of which the platforms are a part. Alex Jones was thoroughly deplatformed in recent years: you can’t find him on Facebook, Twitter or anywhere else. But you can find his infamous InfoWars site on the web, and, according to SimilarWeb, it received some 9.4 million visits in July of this year. You can’t kick Jones off the internet; at most, you can pressure his hosting service to drop him. And even if it did, he’d just move on to the next service, which needn’t be based in the U.S.
True publishers, for their part, enjoy near-absolute leeway over what they choose to publish or not publish. The landmark case is Miami Herald v. Tornillo (1974), in which the Supreme Court ruled that a Florida law requiring newspapers to publish replies from political candidates they had criticized was unconstitutional. Should platforms be treated as publishers? Certainly it seems ludicrous to hold them fully responsible for the millions of pieces of content that their users post. Yet using algorithms to promote some content in order to sell more advertising and earn more profits involves editorial discretion, even if those editors are robots. In that regard, the platforms start to look more like publishers.
Maybe it’s time to move past the old categories altogether. In a recent appearance on WBUR Radio’s “On Point,” University of Minnesota law professor Alan Rozenshtein said that platforms have some qualities of common carriers and some qualities of publishers. What we really need, he said, is a new paradigm that recognizes we’re dealing with something unlike anything we’ve seen before.
Which brings me to two legal cases, both of which are hurtling toward a collision.
Recently the U.S. Court of Appeals for the 5th Circuit upheld a Texas law that, among other things, forbids platforms from removing third-party speech on the basis of viewpoint. Many legal observers had expected the law to be struck down decisively, since it interferes with the ability of private companies to conduct their business as they see fit and to exercise their own First Amendment right to delete content they regard as harmful. But the court didn’t see it that way, with Judge Andrew Oldham writing: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” This is a view of the platforms as common carriers.
As Rozenshtein said, the case is almost certainly headed for the Supreme Court because it clashes with an opinion by the 11th Circuit, which overturned a similar law in Florida, and because it’s unimaginable that any part of the internet can be regulated on a state-by-state basis. Such regulations need to be hashed out by Congress and apply to all 50 states, Rozenshtein said.
Meanwhile, the Supreme Court has agreed to hear a case coming from the opposite direction. The case, brought by the family of a 23-year-old student who was killed in an ISIS attack in Paris in 2015, argues that YouTube, owned by Google, should be held liable for using algorithms to boost terrorist videos, thus helping to incite the attack. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” according to the lawsuit.
Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.
The ISIS case is especially interesting because it’s the use of algorithms to boost speech that is at issue, something that was, at most, in its embryonic stages when Section 230 was enacted. Eric Goldman, a law professor at Santa Clara University, put it this way in an interview with The Washington Post: “The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions. The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”
I’ve suggested that one way to reform Section 230 would be to remove protections for any algorithmically boosted speech, which may actually be where we’re heading.
All of this comes at a time when the Supreme Court’s turn to the right has called its legitimacy into question. Two of the justices, Clarence Thomas and Neil Gorsuch, have even suggested that the libel protections afforded the press under the landmark Times v. Sullivan decision be overturned or scaled back. After 26 years, it may well be time for some changes to Section 230. But can we trust the Supremes to get it right? I guess we’ll just have to wait and see.