Why it matters that The New York Times got it wrong on Section 230

The U.S. Supreme Court will rule on two cases involving Section 230. Photo (cc) 2006 by OZinOH.

Way back in 1996, when Section 230 was enacted into law, it was designed to protect all web publishers, most definitely including newspapers, from being sued over third-party content posted in their comment sections. It would be another eight years before Facebook was launched, and longer than that before algorithms would be used to boost certain types of content.

But that didn’t stop David McCabe of The New York Times — who, we are told, “has reported for five years on the policy debate over online speech” — from including this howler in a story about two cases regarding Section 230 that are being heard by the U.S. Supreme Court:

While newspapers and magazines can be sued over what they publish, Section 230 shields online platforms from lawsuits over most content posted by their users.

No. I have to assume that McCabe and maybe even his editors know better, and that this was their inept way of summarizing the issue for a general readership. But it perpetuates the harmful and wrong notion that this is only about Facebook, Twitter and other social media platforms. It’s not. Newspapers and magazines are liable for everything they publish except third-party online comments, which means that they are treated exactly the same as the giant platforms.

Though it is true that an early case testing Section 230 involved comments posted at AOL rather than on a news website, the principle that online publishers can’t be held liable for what third parties post on their platforms is as valuable to, oh, let’s say The New York Times as it is to Facebook.

That’s not to say 230 can’t be reformed and restricted; and, as I wrote recently, it probably should be. But it’s important that the public understand exactly what’s at stake.

Some common-sense ideas for reforming Section 230

Photo (cc) 2005 by mac jordan

The Elon Musk-ization of Twitter and the rise of a Republican House controlled by its most extreme right-wing elements probably doom any chance for intelligent reform of Section 230. That’s the 1996 law that holds harmless any online publisher for third-party content posted on its site, whether it be a libelous comment on a newspaper’s website (one of the original concerns) or dangerous disinformation about vaccines on Facebook.

It is worth repeating for those who don’t understand the issues: a publisher is legally responsible for every piece of content — articles, advertisements, photos, cartoons, letters to the editor and the like — with the sole exception of third-party material posted online. The idea behind 230 was that it would be impossible to vet everything and that the growth of online media depended on an updated legal structure.

Over the years, as various bad actors have come along and abused Section 230, a number of ideas have emerged for curtailing it without doing away with it entirely. Some time back, I proposed that social media platforms that use algorithms to boost certain types of content should not enjoy any 230 protections — an admittedly blunt instrument that would pretty much destroy the platforms’ business model. My logic was that increased engagement is associated with content that makes you angry and upset, and that the platforms profit mightily by keeping your eyes glued to their site.

Now a couple of academics, Robert Kozinets and Jon Pfeiffer, have come along with a more subtle approach to Section 230 reform. Their proposal was first published in The Conversation, though I saw it at Nieman Lab. They offer what I think is a pretty brilliant analogy as to why certain types of third-party content don’t deserve protection:

One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls — which contains not just porn but also misinformation and hate speech — the absolutist stance that they have total protection and total legal “immunity” is untenable.

Kozinets and Pfeiffer offer three ideas that are worth reading in full. In summary, though, here is what they are proposing.

  • A “verification trigger,” which takes effect when a platform profits from bad speech — the idea I tried to get at with my proposal for removing protections for algorithmic boosting. Returning to the restaurant analogy, Kozinets and Pfeiffer write, “When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.” They cite an extreme example: Elon Musk’s decision to sell blue-check verification, thus directly monetizing whatever falsehoods those with blue checks may choose to perpetrate.
  • “Transparent liability caps” that would “specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it.” Platforms that violate those standards would lose 230 protections. We can only imagine what this would look like once Marjorie Taylor Greene and Matt Gaetz get hold of it, but, well, it’s a thought.
  • A system of “neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform.” Kozinets and Pfeiffer call this “Twitter court,” and platforms that don’t play along could be sued for libel or invasion of privacy by aggrieved parties.

I wouldn’t expect any of these ideas to become law in the near or intermediate future. Currently, the law appears to be entirely up for grabs. For instance, last year a federal appeals court upheld a Texas law that forbids platforms from removing any third-party speech that’s based on viewpoint. At the same time, the U.S. Supreme Court is hearing a case that could result in 230 being overturned in its entirety. Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

Still, Kozinets and Pfeiffer have provided us with some useful ideas for how we might reform Section 230 in order to protect online publishers without giving them carte blanche to profit from their own bad behavior.

The numbers show why it’s difficult for many to walk away from Twitter

Thanks for the traffic, Bob! Photo (cc) 2011 by Francisco Antunes.

I was looking at my WordPress statistics for 2022, and one number really leaped out at me. Twitter was the third-largest source of traffic to Media Nation in 2022. Search engines were responsible for 70,626 views, Facebook was second at 27,126, and Twitter was right behind at 25,371.

As you probably know, I’ve stopped using Twitter. But it shows you why walking away is pretty close to impossible for self-employed journalists and marginal operators who can’t afford to spurn any service that drives traffic to their site. Although I have a voluntary membership program for $5 a month (please consider!), my livelihood is not dependent on Media Nation.

Search, Facebook and Twitter were the big three, followed by LinkedIn at 4,047 and, in fifth place, an unexpected source: Editor & Publisher, the news industry trade publication, at 3,827. E&P has been kind enough to feature my posts in its daily newsletter on a fairly regular basis, so I guess that’s the explanation. Other notable entries in the top 10 were Universal Hub and Expecting Rain, a site for fans of Bob Dylan, who I’ve been known to write about from time to time. From there it quickly dribbles down to double and single digits.

I’ve taken most of my Twitter-like posts to Mastodon, so I was surprised to find that it showed up nowhere in my referral statistics. The explanation, I found out, is that Mastodon’s code makes referrals invisible, supposedly as a privacy protection. I don’t quite get it, though I’ve since learned about a workaround that will supposedly make Mastodon referrals show up. I am getting some referrals from Post News, which, like Mastodon, is emerging as a leading Twitter replacement.
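For readers curious about the mechanics: stats packages like WordPress’s classify a visit by the Referer header the visitor’s browser sends, and links tagged to suppress that header (as Mastodon’s are) arrive looking like direct traffic. Here’s a minimal sketch of that classification logic; the function name and the example URLs are hypothetical, not anything from WordPress’s actual code.

```python
def classify_referral(headers: dict) -> str:
    """Return the traffic source a WordPress-style counter would record
    for one page view, based on the request headers it received."""
    referer = headers.get("Referer")
    if referer is None:
        # No Referer header at all -- this is what a click from
        # Mastodon looks like, so it gets lumped in with direct traffic.
        return "direct"
    if "twitter.com" in referer:
        return "twitter"
    if "facebook.com" in referer:
        return "facebook"
    return referer  # anything else is recorded under the linking site

# A click from Twitter carries a Referer header; a Mastodon click does not.
print(classify_referral({"Referer": "https://twitter.com/somepost"}))  # twitter
print(classify_referral({}))  # direct
```

In other words, Mastodon referrals aren’t being dropped by WordPress; the information simply never reaches the site, which is why they can’t show up in the report.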

Amazon is moving away from Kindle newspapers and magazines

Photo (cc) 2009 by Brian Dewey

I was sorry to hear that Amazon plans to cut back on selling newspapers and magazines for the Kindle sometime next year, according to Jim Milliot of Publishers Weekly. The reason, I think, was the combination of a really bad deal for readers along with a recognition that the Kindle can’t compete with the whiz-bang color photos and multimedia that newspapers and magazines offer in their regular digital products.

Why are Kindle newspapers and magazines a loser for readers? Because you have to pay for the Kindle version over and above what you’re already paying for your digital subscription. A subscription to The New York Times on Kindle, for instance, costs $20 a month, and it makes no difference whether you’re already a Times subscriber.

On the rare occasions when I fly or take the Amtrak, I’ll buy that day’s Times for Kindle for $1. It downloads fully, so you don’t need wifi once it’s on your device. And I find it to be a pleasurable reading experience. Now, I like photography, and the small black-and-white photos you get on Kindle are no match for reading the Times on my iPad, or in print. But the Kindle provides a focused reading experience more akin to print than to digital, without the constant temptation to check your email or share an article on social media. Yet it is certainly not worth a separate subscription over and above what I’m already paying.

The Publishers Weekly article says that Kindle newspapers and magazines aren’t going away entirely. Reportedly “hundreds” of titles will be available for members of Kindle Unlimited, who pay $10 a month for access to a wide range of books and periodicals. But I think it’s still to be determined if you’ll be able to download a quality newspaper every day as part of that fee, especially since that’s only half what you’d pay for the Times alone right now.

Back in 2009, I suggested that The Boston Globe give away Kindles to subscribers. Instead, two years later the Globe started making its move toward paid digital subscriptions, which has been the paper’s salvation. I still like using my Kindle to read books, but most of us are far more likely to consume news on our phones.

I won’t call the semi-demise of Kindle newspapers a lost opportunity; it’s more a matter of changes in what we expect from our devices. The next time I take the Amtrak, though, I guess I’m going to have to find a Hudson News so that I can buy a print paper.

Sticking Twitter in the freezer

Photo (cc) 2014 by Monteregina

When making ethical decisions, we all have to decide where we’re going to draw the line. I’ve been watching Elon Musk’s behavior closely since he purchased Twitter in late October and thinking about where I ought to draw my own line.

It’s different for everyone, and I’m not going to criticize anyone else’s judgment. For Jelani Cobb, it came when Musk restored Donald Trump’s Twitter account, which had been locked because he incited violence during the Jan. 6 insurrection. I semi-shrugged my shoulders. No, I wasn’t thrilled that Musk had brought back Trump and his merry band of Q-adjacent loons, including the loathsome Marjorie Taylor Greene. But my goodness, have you seen the internet? Twitter’s a big place, and I didn’t see any particular reason why we couldn’t all co-exist in our own spaces.

Then there are the deeply stupid “Twitter Files,” promoted by house journalists Matt Taibbi and Bari Weiss, internal documents given to them by Musk that show evidence of some mistakes in moderation but that mainly demonstrate Twitter was attempting to enforce its publicly stated policies about hate speech, incitement and misinformation. There’s some big-time hyperventilating going on about one of those mistakes — the decision to suppress the New York Post’s story about Hunter Biden’s laptop. But that decision was reversed within 24 hours, and it’s worth noting that it was based on an actual policy not to share hacked information. This is a scandal? (Brian Fung of CNN has more.)

What has brought me to this moment, though, is Musk’s own behavior. In late November, Twitter announced that it would no longer take action against misinformation about COVID-19, in accordance with the Chief Twit’s wishes. And then, within the past few days, came the end of the line, at least for me. First Musk attacked Yoel Roth, his former head of trust and safety. Musk tweeted out a short section of Roth’s Ph.D. dissertation to make it appear, falsely, that Roth supports the sexualization of children. “Looks like Yoel is arguing in favor of children being able to access adult Internet services in his PhD thesis,” Musk tweeted. (If you’re interested in the particulars, see this piece at Business Insider by Sawdah Bhaimiya.)

Then, on Sunday, Musk tweeted, “My pronouns are Prosecute/Fauci,” and followed that up with a meme from some fantasy movie (“Lord of the Rings”?) of Fauci whispering in President Biden’s ear, “Just one more lockdown my king.” (Details from Jesse O’Neill in the New York Post.)

At what point does indifference morph into complicity? What we have now is the head of Twitter, with 121 million followers, tweeting out messages that are putting actual people and their families at risk. In what should have been a surprise to no one, Roth has had to flee his home and go into hiding, according to Donie O’Sullivan of CNN. Fauci, as you no doubt know, has been facing death threats throughout the pandemic, and Musk’s amplifying a bogus call to arrest and prosecute him could make matters worse. I realized that was my line, and Musk had crossed it.

I’ve downloaded my Twitter archive and will no longer be posting there except to help those who contact me and are looking for an alternative. I’ll set my account to private as soon as I’ve tweeted this out. I considered deleting my account altogether, but who knows what’s going to happen? Maybe next week Musk will enter a monastery and donate Twitter to the Wikimedia Foundation. Yes, that’s pretty unlikely — as unlikely as one of Musk’s SpaceX rocket ships safely taking you to Mars and back. For the moment, though, I don’t want to do anything that I can’t reverse if conditions change.

This was not an easy decision. I’ve been a heavy Twitter user since I joined in 2008. I’ve got more than 19,000 followers, and I know that not all of them are going to move to other platforms. But here are some alternatives below. You might also want to check out this roundup from Laurel Wamsley at NPR.

  • If you’re not doing so already, you can sign up to receive new posts to Media Nation by email. It’s free. Just scroll down the right-hand rail on the homepage, enter your email address and click on “Follow.”
  • The most promising Twitter alternative is Mastodon, which is a decentralized network of networks that — once you get past the clumsiness of figuring out how to sign up — works very much like Twitter. I joined in early November, and more than 1,300 people are following me there already. I’m at @dankennedy_nu@journa.host. There are various guides on how to get started. Here’s one from CUNY journalism professor Jeff Jarvis.
  • If Mastodon is the earthy-crunchy alternative to Twitter, then Post News is the corporate version. Like Mastodon, Post News is promoting itself as a civil environment free of abuse and trolling. I know that some Mastodon folks are criticizing Post News for being just another venture-capital play that may eventually come to as bad an end as Twitter. They’re not wrong. For now, though, I’m looking at Mastodon as a place where I can connect mainly with journalists, academics and the extremely online, and then mosey over to Post News to engage with normal people. The interface is simple and attractive; the site is still in beta and will continue to improve. You can follow me at dankennedy_nu.
  • Let’s not forget that Facebook isn’t going anywhere. If we don’t know each other, please don’t send me a friend request; follow my public feed instead. Here’s where you can find me.
  • I’m also on LinkedIn and Instagram, but I prefer not to use those to engage the way I do on the other platforms.

There are a million takes on what has happened to Twitter that I could point you to, and believe me, there are very few that are worth reading. But this one is worthwhile. It’s by Ezra Klein, and he questions whether any of these platforms, even the nice new ones, are doing us any good.

Finally, what we need more than anything on Mastodon and Post News is some diversity, which, at its pre-Musk best, is what was great about Twitter. Black Twitter needs a home, and I really miss my non-Trumpy conservative followers and the less politically engaged. I invite you all to take the plunge. Join one of the alternatives. Cut down or eliminate your Twitter activity. And discover the joys of de-Muskifying your life.

Three insights into Elon Musk’s brief, narcissism-fueled reign over Twitter

Photo (cc) 2013 by Scott Beale / Laughing Squid

When we learned last spring that Elon Musk might buy Twitter and transform it into some sort of troll- and bot-infested right-wing hellhole, my first thought was: Bring it on. Although I’m a heavy user, I had no great affection for the service, which was already something of a mess. If Musk ran it into a ditch, well, what of it?

On second thought, I realized I would miss it — and so would a lot of other people. In particular, Twitter has become an important service in calling out injustice around the world as well as a forum that gives Black users a voice they might not have anywhere else. My friend Callie Crossley was talking about Black Twitter on the late, lamented “Beat the Press with Emily Rooney” ages ago. Black Twitter could go elsewhere, of course, but it would be hard to recreate on the same scale that it exists now.

Please support this free source of news and commentary for just $5 a month by becoming a member of Media Nation. Just click here.

For now, I’m staying, but I’m also playing around. Mastodon meets a lot of my needs (I’m @dankennedy_nu@journa.host), mainly because a lot of media and political people I want to follow immediately made the move. But, so far, I see none of the non-Trump conservatives whose presence I value and very few Black users. That may be my fault, and it may change. I’m also skeptical of Mastodon’s extreme decentralization, with each server (called an instance) having its own rules of engagement. I’m also on Post News at @dankennedy_nu, but I really don’t like the micropayment scheme on which it’s staked its future, explained at Nieman Lab by Laura Hazard Owen.

Twitter really does matter. It may be the smallest of the social platforms, but it’s a place where people in media and politics have to be. I’m not sure it can be replicated. So much has been written and said about Twitter over the past few weeks, and no one could possibly keep up with it all. Here, though, are three pieces that I think cut through the murk as well as any.

The first is from Dr. Meredith Clark, my colleague at Northeastern’s School of Journalism. Professor Clark is a leading authority on Black Twitter and the author of the forthcoming book “We Tried to Tell Y’all: Black Twitter and the Rise of Digital Counternarratives.” Meredith says she’s staying. In a recent interview with Michel Martin of NPR, she explained why:

We’re digging in our heels. We’ve been on this platform. We’ve contributed so much to it that we’ve made it valuable in the way that it is today. We’ve made it an asset, and so no, we’re not going anywhere. And then I see other people, honestly, who have more privilege, a number of academics who are saying, nope, we’re going somewhere else. We’re leaving for other platforms.

But I do really think that there are limits to those relationships because there aren’t many platforms that allow many speakers to talk to one another all at the same time in the same place. My use hasn’t changed all that much. I don’t plan to be one of those people who migrate. I just tweeted the other day that I’ll be the last one to turn the lights off if that’s what I need to be, because I’m certainly not going either.

By the way, Meredith was a guest earlier this year on “What Works: The Future of Local News,” a podcast hosted by Ellen Clegg and me. You can listen to our conversation here.

Taking the opposite approach is Jelani Cobb, dean of the Columbia Journalism School, who has suspended his Twitter account in favor of Mastodon — a step that he admits has cut him out of numerous conversations, but that he believed was necessary in order not to be a part of Musk’s transformation of Twitter into a reflection of his own obsessions and ego. Like Clark, Dr. Cobb is Black; unlike Clark, his reasoning makes no mention of Black Twitter per se, although he does note its value in bringing to light racial injustices. “Were it not for social media,” Cobb writes in The New Yorker, “George Floyd — along with Ahmaud Arbery and Breonna Taylor — would likely have joined the long gallery of invisible dead Black people, citizens whose bureaucratized deaths were hidden and ignored.” But that, he emphasizes, was then:

Participating in Twitter — with its world-spanning reach, its potential to radically democratize our discourse along with its virtue mobs and trolls — always required a cost-benefit analysis. That analysis began to change, at least for me, immediately after Musk took over. His reinstatement of Donald Trump’s account made remaining completely untenable. Following an absurd Twitter poll about whether Trump should be allowed to return, Musk reinstated the former President. The implication was clear: if promoting the January 6, 2021, insurrection — which left at least seven people dead and more than a hundred police officers injured — doesn’t warrant suspension to Musk, then nothing else on the platform likely could.

My own view of Trump’s reinstatement is rather complicated. On the one hand, I don’t think it’s easy to justify banning a major presidential candidate, which Trump now surely is. On the other hand, he was banned for fomenting violence — and now that he’s been given another chance, he’s likely to do it again, which means he’ll have to be banned all over again. Except that he won’t be with Musk in charge. (So far, at least, Trump hasn’t tweeted since his reinstatement.) In any case, I respect Cobb’s decision, even if I’m still not quite there.

I’ll close with Josh Marshall, editor of the liberal website Talking Points Memo. Like me, Marshall is dipping his toe into Mastodon’s waters while maintaining his presence on Twitter. And, like me, he’s trying to figure out exactly what Musk is up to. The other day he offered a theory that doesn’t explain all of it, but may explain some of it — especially the part that plays into Musk’s emotions and sense of grievance, which may prove to be the most important in understanding what’s going on.

Marshall sees Musk as traveling a path previously taken by Donald Trump. Like Trump, Musk is a narcissist who can’t imagine a world that doesn’t revolve around his every need and want. Also like Trump in, say, 2015, Musk was until recently someone with vague right-wing proclivities who has hardened his views and openly embraced white supremacy and antisemitism because we liberals hurt his feelings. Trump and Musk have both taken up with horrible people because they were offering support and friendship when no one else would. With Trump, it’s Nick Fuentes and Kanye West. With Musk, it’s, well, Trump and his sycophants. Marshall writes:

I doubt very much that in mid-2015 Trump had any real familiarity with the arcana of racist and radical right groups, their keywords or ideological touch-points. But they knew he was one of them, perhaps even more than he did. They pledged their undying devotion and his narcissism did the rest.

Elon Musk is on the same path. There are various theories purporting to explain Musk’s hard right turn: a childhood in apartheid South Africa, his connection with Peter Thiel, disappointments in his personal life. Whatever the truth of the matter, whatever right-leaning tendencies he may have had before a couple years ago appear to have been latent or unformed. Now the transformation is almost complete. He’s done with general “free speech” grievance and springing for alternative viewpoints. He’s routinely pushing all the far right storylines from woke groomers to Great Replacement.

If anything good can come of this it may be that we hit peak social media a few years ago. Facebook is shrinking, especially among anyone younger than 60. TikTok is huge, but as a number of observers have pointed out, it isn’t really a social platform — it’s a broadcaster with little in the way of user interaction. Now Twitter is splitting apart.

This may be temporary. Maybe Mark Zuckerberg or (most likely) someone else will be able to reassemble social media around the metaverse. For now, though, social media may be broken in a way we couldn’t have imagined in, say, 2020. Perhaps that’s not such a bad thing — although I wouldn’t mind if someone put Twitter back together again, only this time minus the trolls, the bots and the personal abuse that defined the site long before Musk came along.

Kara Swisher can’t make sense out of what Elon Musk is doing, either

Elon Musk. Photo (cc) 2019 by Daniel Oberhaus.

If you are trying to make sense out of what Elon Musk is doing with (or, rather, to) Twitter, I recommend this podcast in which the tech journalist Kara Swisher talks about her interactions with the billionaire over the years.

Swisher is as appalled as any of us, but she’s more sad than angry — she says she genuinely believed Musk might be the right person to fix the money-losing platform. She doesn’t attribute any nefarious motives to his brief reign, which has been marked by chaos and performative cruelty toward Twitter’s employees. But she can’t make sense of it, either.

Toward the end, her producer, Nayeema Raza, asks Swisher what she’d like to ask Musk if they were back on speaking terms — which they’re currently not. Swisher’s four-word answer: “What are you doing?”

A bit about Mastodon

Photo (cc) 2007 by Benjamin Golub

I’ve opened an account on Mastodon in the hopes that it will prove to be a good alternative to Twitter, now in the midst of an astonishing implosion.

What I’m hoping for is something like Twitter pre-Elon Musk, only without the trolls and bots, the personal abuse and the piling-on. I don’t think any of us believed Twitter was a wonderful place before Musk lit it on fire. So far, Mastodon sort of fits the bill, but it’s also something different. The culture is more polite — maybe excessively so, though that might just be a first impression.

In any case, there doesn’t seem to be any going back. I wouldn’t be surprised if Twitter is essentially gone in a few weeks. You can follow me on Mastodon at @dankennedy_nu@journa.host. And for a really good explanation of Mastodon and how its decentralized governance works, I highly recommend this Lawfare podcast.

The shame of Musk’s takeover is that Twitter was starting to get (a little) better

Elon Musk. Photo (cc) 2019 by Daniel Oberhaus.

The shame of it is that Twitter was starting to get a little better. Some months back I decided to spend $3 a month for Twitter Blue. You had up to a minute to pull back a tweet if you saw a typo or if a picture didn’t display properly. More recently, they added an actual edit button, good for 30 minutes. Best of all is something called “Top Articles,” which shows stories that are most widely shared by your network and their networks. I almost always find a couple of stories worth reading — including the one from The Verge that I’ve shared below.

Anyway, here we are. Billionaire Elon Musk is now the sole owner of a social media platform that I check in with multiple times during the day and post to way too much. Twitter is much smaller than Facebook and YouTube, and smaller than TikTok and Instagram, too. In fact, it’s smaller than just about everything else. But it punches above its weight because it’s the preferred outlet for media and political people. It’s also a cesspool of sociopathy. We’re all worried that Musk will make it worse, but let’s be honest — it’s already pretty bad.

The smartest take I’ve seen so far is by Nilay Patel in The Verge. Headlined “Welcome to hell, Elon,” the piece argues that Musk isn’t going to be able to change Twitter as much as he might like to because to do so will drive advertisers away — something that’s already playing out in General Motors’ decision to suspend its ads until its executives can get a better handle on what the Chief Twit has in mind. Patel also points out that Musk is going to receive a lot of, er, advice about whom to ban on Twitter from countries where his electric car company, Tesla, does business, including Germany, China and India. Those are three very different cultures, but all of them have more restrictive laws regarding free speech than the United States. Patel writes:

The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works. Content moderation is what Twitter makes — it is the thing that defines the user experience. It’s what YouTube makes, it’s what Instagram makes, it’s what TikTok makes. They all try to incentivize good stuff, disincentivize bad stuff, and delete the really bad stuff…. The longer you fight it or pretend that you can sell something else, the more Twitter will drag you into the deepest possible muck of defending indefensible speech.

Indeed, Twitter has already reinstated the noted antisemite formerly known as Kanye West, although Musk, weirdly enough, says he had nothing to do with it.

My approach to tweeting in Elon Musk’s private garden will be to do what I’ve always done and see what happens. I use it too much to walk away, but I don’t like it enough to wring my hands.

A quarter-century after its passage, Section 230 is up for grabs

A quarter-century after Congress decided to hold publishers harmless for third-party content posted on their websites, we are headed for a legal and constitutional showdown over Section 230, part of the Communications Decency Act of 1996.

Before the law was passed, publishers worried that if they removed some harmful content they might be held liable for failing to take down other content, which gave them a legal incentive to leave libel, obscenity, hate speech and misinformation in place. Section 230 solved that by including a so-called Good Samaritan provision that allowed publishers to pick and choose without incurring liability.

Back in those early days, of course, we weren’t dealing with behemoths like Facebook, YouTube and Twitter, which use algorithms to boost content that keeps their users engaged — which, in turn, usually means speech that makes them angry or upset. In the mid-1990s, the publishers that were seeking protection were generally newspapers that had opened up online comments and nascent online services like Prodigy and AOL. Publishers are fully liable for any content over which they have direct control, including news stories, advertisements and letters to the editor. Congress understood that the flood of content being posted online raised different issues.

But after Twitter booted Donald Trump off its service and Facebook suspended him for inciting violence during and after the attempted insurrection of Jan. 6, 2021, Trump-aligned Republicans began agitating against what they called censorship by the tech giants. The idea that private companies are even legally capable of engaging in censorship is debatable, but it's gained some traction in legal circles, as we shall see.

Meanwhile, Democrats and liberals argued that the platforms weren’t acting aggressively enough to remove dangerous and harmful posts, especially those promoting disinformation around COVID-19 such as anti-masking and anti-vaccine propaganda.

A lot of this comes down to whether the platforms are common carriers or true publishers. Common carriers are legally forbidden from discriminating against any type of user or traffic. Providers of telephone service would be one example. Another example would be the broader internet of which the platforms are a part. Alex Jones was thoroughly deplatformed in recent years — you can’t find him on Facebook, Twitter or anywhere else. But you can find his infamous InfoWars site on the web, and, according to SimilarWeb, it received some 9.4 million visits in July of this year. You can’t kick Jones off the internet; at most, you can pressure his hosting service to drop him. But even if they did, he’d just move on to the next service, which, by the way, needn’t be based in the U.S.

True publishers, by contrast, enjoy near-absolute leeway over what they choose to publish or not publish. A landmark case in this regard is Miami Herald v. Tornillo (1974), in which the Supreme Court ruled that a Florida law requiring newspapers to publish responses from political figures who’d been criticized was unconstitutional. Should platforms be treated as publishers? Certainly it seems ludicrous to hold them fully responsible for the millions of pieces of content that their users post on their sites. Yet the use of algorithms to promote some content in order to sell more advertising and earn more profits involves editorial discretion, even if those editors are robots. In that regard, they start to look more like publishers.

Maybe it’s time to move past the old categories altogether. In a recent appearance on WBUR Radio’s “On Point,” University of Minnesota law professor Alan Rozenshtein said that platforms have some qualities of common carriers and some qualities of publishers. What we really need, he said, is a new paradigm that recognizes we’re dealing with something unlike anything we’ve seen before.

Which brings me to two legal cases, both of which are hurtling toward a collision with each other.

Recently the U.S. Court of Appeals for the 5th Circuit upheld a Texas law that, among other things, forbids platforms from removing any third-party speech that’s based on viewpoint. Many legal observers had believed the law would be decisively overturned since it interferes with the ability of private companies to conduct their business as they see fit, and to exercise their own First Amendment right to delete content they regard as harmful. But the court didn’t see it that way, with Judge Andrew Oldham writing: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” This is a view of the platforms as common carriers.

As Rozenshtein said, the case is almost certainly headed for the Supreme Court because it clashes with an opinion by the 11th Circuit, which overturned a similar law in Florida, and because it’s unimaginable that any part of the internet can be regulated on a state-by-state basis. Such regulations need to be hashed out by Congress and apply to all 50 states, Rozenshtein said.

Meanwhile, the Supreme Court has agreed to hear a case coming from the opposite direction. The case, brought by the family of a 23-year-old student who was killed in an ISIS attack in Paris in 2015, argues that YouTube, owned by Google, should be held liable for using algorithms to boost terrorist videos, thus helping to incite the attack. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” according to the lawsuit.

Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

The ISIS case is especially interesting because it’s the use of algorithms to boost speech that is at issue — again, something that was, at most, in its embryonic stages at the time that Section 230 was enacted. Eric Goldman, a law professor at Santa Clara University, put it this way in an interview with The Washington Post: “The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions. The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”

I’ve suggested that one way to reform Section 230 might be to remove protections for any algorithmically boosted speech, which might actually be where we’re heading.

All of this comes at a time when the Supreme Court’s turn to the right has called its legitimacy into question. Two of the justices, Clarence Thomas and Neil Gorsuch, have even suggested that the libel protections afforded the press under the landmark Times v. Sullivan decision be overturned or scaled back. After 26 years, it may well be time for some changes to Section 230. But can we trust the Supremes to get it right? I guess we’ll just have to wait and see.