The Elon Musk-ization of Twitter and the rise of a Republican House controlled by its most extreme right-wing elements probably doom any chance for intelligent reform of Section 230. That’s the 1996 law that holds online publishers harmless for third-party content posted on their sites, whether it be a libelous comment on a newspaper’s website (one of the original concerns) or dangerous disinformation about vaccines on Facebook.
It is worth repeating for those who don’t understand the issues: a publisher is legally responsible for every piece of content — articles, advertisements, photos, cartoons, letters to the editor and the like — with the sole exception of third-party material posted online. The idea behind 230 was that it would be impossible to vet everything and that the growth of online media depended on an updated legal structure.
Over the years, as various bad actors have come along and abused Section 230, a number of ideas have emerged for curtailing it without doing away with it entirely. Some time back, I proposed that social media platforms that use algorithms to boost certain types of content should not enjoy any 230 protections — an admittedly blunt instrument that would pretty much destroy the platforms’ business model. My logic was that increased engagement is associated with content that makes you angry and upset, and that the platforms profit mightily by keeping your eyes glued to their site.
Now a couple of academics, Robert Kozinets and Jon Pfeiffer, have come along with a more subtle approach to Section 230 reform. Their proposal was first published in The Conversation, though I saw it at Nieman Lab. They offer what I think is a pretty brilliant analogy for why certain types of third-party content don’t deserve protection:
One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.
But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls — which contains not just porn but also misinformation and hate speech — the absolutist stance that they have total protection and total legal “immunity” is untenable.
Kozinets and Pfeiffer offer three ideas that are worth reading in full. In summary, though, here is what they are proposing.
- A “verification trigger,” which takes effect when a platform profits from bad speech — the idea I tried to get at with my proposal for removing protections for algorithmic boosting. Returning to the restaurant analogy, Kozinets and Pfeiffer write, “When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.” They cite an extreme example: Elon Musk’s decision to sell blue-check verification, thus directly monetizing whatever falsehoods those with blue checks may choose to perpetrate.
- “Transparent liability caps” that would “specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it.” Platforms that violate those standards would lose 230 protections. We can only imagine what this would look like once Marjorie Taylor Greene and Matt Gaetz get hold of it, but, well, it’s a thought.
- A system of “neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform.” Kozinets and Pfeiffer call this “Twitter court,” and platforms that don’t play along could be sued for libel or invasion of privacy by aggrieved parties.
I wouldn’t expect any of these ideas to become law anytime soon. At the moment, the law appears to be entirely up for grabs. Last year, for instance, a federal appeals court upheld a Texas law that forbids platforms from removing third-party speech on the basis of viewpoint. At the same time, the U.S. Supreme Court is hearing a case that could result in Section 230 being overturned in its entirety. Thus we may be heading toward a constitutionally untenable situation in which tech companies could be held liable for content that the Texas law forbids them to remove.
Still, Kozinets and Pfeiffer have given us some useful ideas for how we might reform Section 230 so that it protects online publishers without giving them carte blanche to profit from their own bad behavior.