By Dan Kennedy

A quarter-century after its passage, Section 230 is up for grabs

A quarter-century after Congress decided to hold publishers harmless for third-party content posted on their websites, we are headed for a legal and constitutional showdown over Section 230, part of the Communications Decency Act of 1996.

Before the law was passed, publishers worried that if they removed some harmful content they might be held liable for failing to take down other content, which gave them a legal incentive to leave libel, obscenity, hate speech and misinformation in place. Section 230 solved that with a so-called Good Samaritan provision that allowed publishers to remove some content, and leave other content up, without incurring liability either way.

Back in those early days, of course, we weren’t dealing with behemoths like Facebook, YouTube and Twitter, which use algorithms to boost content that keeps their users engaged — which, in turn, usually means speech that makes them angry or upset. In the mid-1990s, the publishers seeking protection were generally newspapers that had opened up online comments, along with nascent online services like Prodigy and AOL. Publishers have always been fully liable for any content over which they have direct control, including news stories, advertisements and letters to the editor; Congress understood that the flood of third-party content being posted online raised different issues.
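
To make concrete what that kind of engagement-driven boosting looks like, here is a minimal sketch in Python. It is a toy illustration only, not any platform’s actual system; the Post fields, the weights and the scoring function are all invented for the example.

```python
# Toy illustration of engagement-ranked feed ordering. Not any platform's
# actual algorithm: the fields, weights and scoring here are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int

def engagement_score(post: Post) -> float:
    # Weighting strong-emotion signals most heavily is one hypothesis for
    # why outrage-inducing speech tends to rise to the top of a feed.
    return post.likes + 2 * post.shares + 3 * post.angry_reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # A chronological feed would return posts in the order submitted;
    # an engagement-ranked feed actively promotes some speech over other
    # speech, which is the "boosting" the legal debate turns on.
    return sorted(posts, key=engagement_score, reverse=True)
```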

But after Twitter booted Donald Trump off its service and Facebook suspended him for inciting violence during and after the attempted insurrection of Jan. 6, 2021, Trump-aligned Republicans began agitating against what they called censorship by the tech giants. Whether private companies are even legally capable of engaging in censorship is disputable, but the idea has gained some traction in legal circles, as we shall see.

Meanwhile, Democrats and liberals argued that the platforms weren’t acting aggressively enough to remove dangerous and harmful posts, especially those promoting disinformation around COVID-19 such as anti-masking and anti-vaccine propaganda.

A lot of this comes down to whether the platforms are common carriers or true publishers. Common carriers are legally forbidden from discriminating against any type of user or traffic; providers of telephone service are one example. Another is the broader internet of which the platforms are a part. Alex Jones was thoroughly deplatformed in recent years — you can’t find him on Facebook, Twitter or anywhere else. But you can find his infamous InfoWars site on the web, and, according to SimilarWeb, it received some 9.4 million visits in July of this year. You can’t kick Jones off the internet; at most, you can pressure his hosting service to drop him. But even if it did, he’d just move on to the next service, which, by the way, needn’t be based in the U.S.

True publishers, by contrast, enjoy near-absolute leeway over what they choose to publish or not publish. A landmark case in this regard is Miami Herald v. Tornillo (1974), in which the Supreme Court ruled that a Florida law requiring newspapers to publish responses from political figures they had criticized was unconstitutional. Should platforms be treated as publishers? Certainly it seems ludicrous to hold them fully responsible for the millions of pieces of content that their users post. Yet using algorithms to promote some content in order to sell more advertising and earn more profits involves editorial discretion, even if those editors are robots. In that regard, the platforms start to look more like publishers.

Maybe it’s time to move past the old categories altogether. In a recent appearance on WBUR Radio’s “On Point,” University of Minnesota law professor Alan Rozenshtein said that platforms have some qualities of common carriers and some qualities of publishers. What we really need, he said, is a new paradigm that recognizes we’re dealing with something unlike anything we’ve seen before.

Which brings me to two legal cases, both of which are hurtling toward a collision.

Recently the U.S. Court of Appeals for the 5th Circuit upheld a Texas law that, among other things, forbids platforms from removing any third-party speech that’s based on viewpoint. Many legal observers had believed the law would be decisively overturned since it interferes with the ability of private companies to conduct their business as they see fit, and to exercise their own First Amendment right to delete content they regard as harmful. But the court didn’t see it that way, with Judge Andrew Oldham writing: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” This is a view of the platforms as common carriers.

As Rozenshtein said, the case is almost certainly headed for the Supreme Court because it clashes with an opinion by the 11th Circuit, which overturned a similar law in Florida, and because it’s unimaginable that any part of the internet can be regulated on a state-by-state basis. Such regulations need to be hashed out by Congress and apply to all 50 states, Rozenshtein said.

Meanwhile, the Supreme Court has agreed to hear a case coming from the opposite direction. The case, brought by the family of a 23-year-old student who was killed in an ISIS attack in Paris in 2015, argues that YouTube, owned by Google, should be held liable for using algorithms to boost terrorist videos, thus helping to incite the attack. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” according to the lawsuit.

Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

The ISIS case is especially interesting because it’s the use of algorithms to boost speech that’s at issue — again, something that was, at most, in its embryonic stages when Section 230 was enacted. Eric Goldman, a law professor at Santa Clara University, put it this way in an interview with The Washington Post: “The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions. The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”

I’ve suggested that one way to reform Section 230 might be to remove protections for any algorithmically boosted speech, which might actually be where we’re heading.
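
What would count as “algorithmically boosted” would itself have to be defined. Purely as a hypothetical sketch (this definition appears nowhere in the law or in any pending case), one could call a post boosted when ranking moves it ahead of where it would sit in a plain chronological feed:

```python
# Hypothetical sketch of one way to draw the "boosted speech" line.
# Assumes both lists contain the same post IDs; nothing here reflects
# actual statutory language or any platform's implementation.
def boosted_posts(chronological: list[str], ranked: list[str]) -> set[str]:
    # A post is "boosted" if the ranking algorithm placed it higher
    # (earlier) than its position in the plain chronological feed.
    return {
        post
        for position, post in enumerate(ranked)
        if chronological.index(post) > position
    }

# Example: post "c" was third chronologically but ranked first.
print(boosted_posts(["a", "b", "c"], ["c", "a", "b"]))  # {'c'}
```

Under the reform idea floated above, only the posts such a test flags would fall outside Section 230’s protection; everything merely hosted and displayed as submitted would keep it.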

All of this comes at a time when the Supreme Court’s turn to the right has called its legitimacy into question. Two of the justices, Clarence Thomas and Neil Gorsuch, have even suggested that the libel protections afforded the press under the landmark Times v. Sullivan decision be overturned or scaled back. After 26 years, it may well be time for some changes to Section 230. But can we trust the Supremes to get it right? I guess we’ll just have to wait and see.



4 Comments

  1. What keeps striking me is that Facebook and Twitter et al are the biggest publishers in the nation — making money by publishing content that affects public debate — and yet have LESS legal responsibility for what they cover than does a 500-circulation county weekly paper or a local news website. Aside from the complex issue of whether they’re also common carriers, as publishers they should be held legally responsible for what they publish (and profit from). Just imagine if Elon Musk could be sued for publishing incitements to commit violence or libelous personal attacks published by a certain former president, Alex Jones, or their armies of trolls. We small publishers, too, should have to review every comment we post (just as with letters to the editor in the old days) and face legal punishment if we knowingly publish libel.

  2. Of course, self-censorship is a separate but related issue. You have a graphic of AOL at the top. In 1996, the Boston Phoenix wouldn’t let me print something negative about AOL in an article I wrote for them about the internet; turns out AOL was giving them free web space, and, well… That’s what set me out on a life of crime (pirate radio).

  3. Lex

    A couple of points, Dan:

    Alan Rozenshtein is wrong, in the sense that we’re not dealing with something unlike anything we’ve seen before. In fact, we’re dealing with something like *multiple* things we’ve seen before, which currently are handled in different ways from one another. Those separate ways of handling them arose for good, practical reasons. That said, I tend to agree with your position (if I understand it correctly) that platforms should be held liable only for tortious material whose circulation their algorithms boost.

    As for hoping the Supreme Court gets it right, forget it. In several decisions during the term just past, the high court, and particularly its Republican majority, showed itself either unwilling or unable to write opinions based on science, accurate and contextual history, or even the factual record of the cases before it. Right now, I think, there’s no situation so bad that the conservative majority can’t make it worse.

  4. Robert Consalvo

    Not in the business but enjoy reading your take on goings-on. Have an idea for your class: I’ve been an observer of media coverage of Boston and its schools since ’84, when I went to work for Ray Flynn. At that time there were two offices for the press on the 9th floor of City Hall, one each for the Globe and Herald, each with 3-4 reporters assigned to them. (I think even the Patriot Ledger had a reporter assigned.) Flynn was kind of like Trump in that he generated news, real stuff, not like the crap Trump pulls. When Tommy Menino came in, it quieted down. I remember Tommy saying he even had trouble getting coverage for a press conference. Your students could measure the coverage from White through Flynn and Menino and following. Same with BPS. Once the appointed board took office, coverage of the schools declined, especially after I left my position as secretary to the School Committee, except for sensational things like shootings and stabbings. I attribute the continued failure of the BPS to this lack of serious coverage. Anyway, that’s my idea for a class assignment. Bob Consalvo (Not Rob)

