What does it mean to ‘publish’ in the age of Section 230? Plus, Olivia Nuzzi update, and media notes

Royalty-free photo via PickPik

What does it mean to “publish” something? In the pre-social media era, that question was easy enough to answer. It became a little more complicated in 1996, when Congress passed a law called Section 230, which protects internet providers from liability for any third-party content that might be posted on their sites.

But those early online publishers were newspapers and other news organizations as well as early online services such as CompuServe, AOL and Prodigy. None of them was trying to promote certain types of third-party content in order to drive up engagement and, thus, ad revenues.

Today, of course, that’s the whole point. Algorithms employed by social media companies such as Meta (Facebook, Instagram and Threads), Twitter and TikTok use sophisticated software that figures out what kind of content you are more likely to engage with so they can show you more of it. Such practices have been linked to, among other things, genocide in Myanmar as well as depression and other mental health issues.


So again, what does it mean to “publish”? I’ve argued since as far back as 2017 that elevating some third-party content over others could be considered publication rather than simply acting as a passive receptacle of whatever stuff comes in over the digital transom.

A print publication, after all, is legally responsible for everything it encompasses, including ads (the landmark Times v. Sullivan libel decision involved an advertisement) and letters to the editor. It would be neither practical nor desirable to hold social media companies responsible for all third-party content. But again, if they are boosting some content to make it more visible because they (or, rather, their unblinking algorithms) think it will get them more engagement and make them more money, how is that not an act of publishing? Why should it be protected by federal law?

Earlier this week, investigative journalist Julia Angwin wrote an op-ed piece for The New York Times (gift link) arguing that the tide may be turning against the social media giants, in part because of TikTok’s aggressive use of its algorithmic “For You” feed, which has been emulated by the other platforms. A showdown over Section 230 may be headed for the Supreme Court. She writes:

If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy….

My hope is that the erection of new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.

I don’t think there’s a good-faith argument to be made that reforming Section 230 would harm the First Amendment. We would still have the right to publish freely, subject to long-existing prohibitions against libel, incitement, serious breaches of national security and obscenity. And internet providers would still be held harmless for any content posted by their users. But it would end the legal absurdity that a tech platform can boost harmful content and then claim immunity because that content originated with someone else. (Ironically, those third-party posters are fully liable for their content if they can be identified and tracked down.)

As Angwin notes, Ethan Zuckerman of UMass Amherst, a respected thinker about all things digital, is suing Meta for the right to develop software that would allow users to control their own experience on Facebook. Angwin also touts Bluesky, a Twitter alternative that allows its users to design their own feeds (you can find me at @dankennedy-nu.bsky.social).

We should all have the right to freedom of speech and freedom of the press. But the platforms that control so much of our lives should have the same freedoms that the rest of us have — and that should not include the freedom to boost harmful content without any legal consequences because of the fiction that they are not engaged in an act of publishing. It’s long past time to make some changes to Section 230.

Olivia Nuzzi departs

Olivia Nuzzi’s separation agreement with New York magazine was heavily lawyered, according to reports, and that shouldn’t come as a surprise to anyone. But the magazine’s statement that its law firm found “no inaccuracies nor evidence of bias” in her work needs to be placed in context. Liam Reilly and Hadas Gold of CNN report on Nuzzi’s departure.

Nuzzi, you may recall, was involved in some sort of sexual (but not physical) relationship with Robert F. Kennedy Jr. that may have encompassed sexting and nude selfies — we still don’t know.

But as I wrote last month, after Nuzzi’s relationship with Kennedy became public, she wrote a very tough piece about President Biden’s alleged age-related infirmities while Kennedy was still a presidential candidate and an oddly sympathetic profile of Donald Trump after Kennedy had left the race, endorsed Trump and made it clear that he was hoping for a high-level job in a Trump White House.

Maybe Nuzzi would have written those two stories exactly the same way even if she had never met Kennedy. But we’ll never know.

Media notes

• Billionaire ambitions. Benjamin Mullin of The New York Times reports (gift link) that a Florida billionaire named David Hoffmann has bought 5% of the cost-cutting Lee Enterprises newspaper chain, and that he hopes to help revive the local news business. “These local newspapers are really important to these communities,” Hoffmann told Mullin. “With the digital age and technology, it’s changing rapidly. But I think there’s room for both, and we’d like to be a part of that.” Lee owns media properties in 73 U.S. markets, including well-known titles such as the St. Louis Post-Dispatch and The Buffalo News.

• Silent treatment. Patrick Soon-Shiong, whose ownership of the Los Angeles Times has been defined by vaulting ambitions and devastating cuts, has stumbled once again. Max Tani of Semafor reports that the Times will not endorse in this year’s presidential contest, even though it published endorsements in state and local races just last week. The decision to abstain from choosing between Kamala Harris and Donald Trump, Tani writes, came straight from Soon-Shiong, who made his wealth in the health-care sector. Closer to home, The Boston Globe endorsed Harris earlier this week.

• Reaching young voters. Santa Cruz Local, a digital nonprofit, has announced an ambitious idea to engage with young people: news delivered by text messages and Instagram. “We want to reach thousands of students with civic news and help first time voters get to the ballot box,” writes Kara Meyberg Guzman, the Local’s co-founder and CEO. The Local’s Instagram-first election guide will be aimed at 18- to 29-year-olds in Santa Cruz County, with an emphasis on reaching local college students; Guzman is attempting to raise $10,000 in order to fund it. Santa Cruz Local was one of 205 local news organizations to receive a $100,000 grant from Press Forward last week. Guzman was also interviewed in the book that Ellen Clegg and I wrote, “What Works in Community News,” and on our podcast.

A lawsuit aims to let Facebook users turn off the News Feed

Mark Zuckerberg, defender of the algorithm. Photo (cc) 2016 by Alessio Jacona.

Imagine that you could log onto Facebook and not be exposed to that infernal, endlessly scrolling News Feed. Imagine, instead, that you could visit your friends and groups as you wished, without any algorithms to determine what you get exposed to. That’s what Facebook was like in the early days — and it’s what it could be like again if a lawsuit filed by longtime internet activist and researcher Ethan Zuckerman succeeds.

Zuckerman has developed a tool called Unfollow Everything 2.0, which would allow users to unfollow their friends, groups and pages. This wouldn’t change who you’re friends with, which means that you’d have no problem checking in with them manually; you can, of course, do that now as well. No longer, though, would everything be served up to you automatically, non-chronologically and bogged down with a ton of crap you didn’t ask for.

So why is Zuckerman suing? Because, several years ago, a Brit named Louis Barclay developed the original Unfollow Everything. Mark Zuckerberg and company threatened to sue him if he didn’t take it down and permanently threw him off Facebook and Instagram. Barclay wrote about his experience on Slate:

I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.

Zuckerman is claiming that Section 230, a federal law that’s normally used to protect internet publishers like Meta from legal liability with regard to the content their users post, also protects developers of third-party tools such as Unfollow Everything.

“I’m suing Facebook to make it better,” Zuckerman, an associate professor at UMass Amherst, said in a press release. “The major social media companies have too much control over what content their users see and don’t see. We’re bringing this lawsuit to give people more control over their social media experience and data and to expand knowledge about how platforms shape public discourse.”

Zuckerman is being represented by the Knight First Amendment Institute at Columbia University.


Why it matters that The New York Times got it wrong on Section 230

The U.S. Supreme Court will rule on two cases involving Section 230. Photo (cc) 2006 by OZinOH.

Way back in 1996, when Section 230 was enacted into law, it was designed to protect all web publishers, most definitely including newspapers, from being sued over third-party content posted in their comment sections. It would be another eight years before Facebook was launched, and longer than that before algorithms would be used to boost certain types of content.

But that didn’t stop David McCabe of The New York Times — who, we are told, “has reported for five years on the policy debate over online speech” — from including this howler in a story about two cases regarding Section 230 that are being heard by the U.S. Supreme Court:

While newspapers and magazines can be sued over what they publish, Section 230 shields online platforms from lawsuits over most content posted by their users.

No. I have to assume that McCabe and maybe even his editors know better, and that this was their inept way of summarizing the issue for a general readership. But it perpetuates the harmful and wrong notion that this is only about Facebook, Twitter and other social media platforms. It’s not. Newspapers and magazines are liable for everything they publish except third-party online comments, which means that they are treated exactly the same as the giant platforms.

Though it is true that an early case testing Section 230 involved comments posted at AOL rather than on a news website, the principle that online publishers can’t be held liable for what third parties post on their platforms is as valuable to, oh, let’s say The New York Times as it is to Facebook.

That’s not to say 230 can’t be reformed and restricted; and, as I wrote recently, it probably should be. But it’s important that the public understand exactly what’s at stake.

Some common-sense ideas for reforming Section 230

Photo (cc) 2005 by mac jordan

The Elon Musk-ization of Twitter and the rise of a Republican House controlled by its most extreme right-wing elements probably doom any chance for intelligent reform of Section 230. That’s the 1996 law that holds harmless any online publisher for third-party content posted on its site, whether it be a libelous comment on a newspaper’s website (one of the original concerns) or dangerous disinformation about vaccines on Facebook.

It is worth repeating for those who don’t understand the issues: a publisher is legally responsible for every piece of content — articles, advertisements, photos, cartoons, letters to the editor and the like — with the sole exception of third-party material posted online. The idea behind 230 was that it would be impossible to vet everything and that the growth of online media depended on an updated legal structure.

Over the years, as various bad actors have come along and abused Section 230, a number of ideas have emerged for curtailing it without doing away with it entirely. Some time back, I proposed that social media platforms that use algorithms to boost certain types of content should not enjoy any 230 protections — an admittedly blunt instrument that would pretty much destroy the platforms’ business model. My logic was that increased engagement is associated with content that makes you angry and upset, and that the platforms profit mightily by keeping your eyes glued to their site.

Now a couple of academics, Robert Kozinets and Jon Pfeiffer, have come along with a more subtle approach to Section 230 reform. Their proposal was first published in The Conversation, though I saw it at Nieman Lab. They offer what I think is a pretty brilliant analogy as to why certain types of third-party content don’t deserve protection:

One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls — which contains not just porn but also misinformation and hate speech — the absolutist stance that they have total protection and total legal “immunity” is untenable.

Kozinets and Pfeiffer offer three ideas that are worth reading in full. In summary, though, here is what they are proposing.

  • A “verification trigger,” which takes effect when a platform profits from bad speech — the idea I tried to get at with my proposal for removing protections for algorithmic boosting. Returning to the restaurant analogy, Kozinets and Pfeiffer write, “When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.” They cite an extreme example: Elon Musk’s decision to sell blue-check verification, thus directly monetizing whatever falsehoods those with blue checks may choose to perpetrate.
  • “Transparent liability caps” that would “specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it.” Platforms that violate those standards would lose 230 protections. We can only imagine what this would look like once Marjorie Taylor Greene and Matt Gaetz get hold of it, but, well, it’s a thought.
  • A system of “neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform.” Kozinets and Pfeiffer call this “Twitter court,” and platforms that don’t play along could be sued for libel or invasion of privacy by aggrieved parties.

I wouldn’t expect any of these ideas to become law in the near or intermediate future. Currently, the law appears to be entirely up for grabs. For instance, last year a federal appeals court upheld a Texas law that forbids platforms from removing any third-party speech that’s based on viewpoint. At the same time, the U.S. Supreme Court is hearing a case that could result in 230 being overturned in its entirety. Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

Still, Kozinets and Pfeiffer have provided us with some useful ideas for how we might reform Section 230 in order to protect online publishers without giving them carte blanche to profit from their own bad behavior.

A quarter-century after its passage, Section 230 is up for grabs

A quarter-century after Congress decided to hold publishers harmless for third-party content posted on their websites, we are headed for a legal and constitutional showdown over Section 230, part of the Communications Decency Act of 1996.

Before the law was passed, publishers worried that if they removed some harmful content they might be held liable for failing to take down other content, which gave them a legal incentive to leave libel, obscenity, hate speech and misinformation in place. Section 230 solved that by including a so-called Good Samaritan provision that allowed publishers to pick and choose without incurring liability.

Back in those early days, of course, we weren’t dealing with behemoths like Facebook, YouTube and Twitter, which use algorithms to boost content that keeps their users engaged — which, in turn, usually means speech that makes them angry or upset. In the mid-1990s, the publishers that were seeking protection were generally newspapers that had opened up online comments and nascent online services like Prodigy and AOL. Publishers are fully liable for any content over which they have direct control, including news stories, advertisements and letters to the editor. Congress understood that the flood of content being posted online raised different issues.

But after Twitter booted Donald Trump off its service and Facebook suspended him for inciting violence during and after the attempted insurrection of Jan. 6, 2021, Trump-aligned Republicans began agitating against what they called censorship by the tech giants. The idea that private companies are even legally capable of engaging in censorship is something that can be disputed, but it’s gained some traction in legal circles, as we shall see.

Meanwhile, Democrats and liberals argued that the platforms weren’t acting aggressively enough to remove dangerous and harmful posts, especially those promoting disinformation around COVID-19 such as anti-masking and anti-vaccine propaganda.

A lot of this comes down to whether the platforms are common carriers or true publishers. Common carriers are legally forbidden from discriminating against any type of user or traffic. Providers of telephone service would be one example. Another example would be the broader internet of which the platforms are a part. Alex Jones was thoroughly deplatformed in recent years — you can’t find him on Facebook, Twitter or anywhere else. But you can find his infamous InfoWars site on the web, and, according to SimilarWeb, it received some 9.4 million visits in July of this year. You can’t kick Jones off the internet; at most, you can pressure his hosting service to drop him. But even if they did, he’d just move on to the next service, which, by the way, needn’t be based in the U.S.

True publishers, by contrast, enjoy near-absolute leeway over what they choose to publish or not publish. A landmark case in this regard is Miami Herald v. Tornillo (1974), in which the Supreme Court ruled that a Florida law requiring newspapers to publish responses from political figures who’d been criticized was unconstitutional. Should platforms be treated as publishers? Certainly it seems ludicrous to hold them fully responsible for the millions of pieces of content that their users post on their sites. Yet the use of algorithms to promote some content in order to sell more advertising and earn more profits involves editorial discretion, even if those editors are robots. In that regard, they start to look more like publishers.

Maybe it’s time to move past the old categories altogether. In a recent appearance on WBUR Radio’s “On Point,” University of Minnesota law professor Alan Rozenshtein said that platforms have some qualities of common carriers and some qualities of publishers. What we really need, he said, is a new paradigm that recognizes we’re dealing with something unlike anything we’ve seen before.

Which brings me to two legal cases, both of which are hurtling toward a collision.

Recently the U.S. Court of Appeals for the 5th Circuit upheld a Texas law that, among other things, forbids platforms from removing any third-party speech that’s based on viewpoint. Many legal observers had believed the law would be decisively overturned since it interferes with the ability of private companies to conduct their business as they see fit, and to exercise their own First Amendment right to delete content they regard as harmful. But the court didn’t see it that way, with Judge Andrew Oldham writing: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” This is a view of the platforms as common carriers.

As Rozenshtein said, the case is almost certainly headed for the Supreme Court because it clashes with an opinion by the 11th Circuit, which overturned a similar law in Florida, and because it’s unimaginable that any part of the internet can be regulated on a state-by-state basis. Such regulations need to be hashed out by Congress and apply to all 50 states, Rozenshtein said.

Meanwhile, the Supreme Court has agreed to hear a case coming from the opposite direction. The case, brought by the family of a 23-year-old student who was killed in an ISIS attack in Paris in 2014, argues that YouTube, owned by Google, should be held liable for using algorithms to boost terrorist videos, thus helping to incite the attack. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” according to the lawsuit.

Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

The ISIS case is especially interesting because it’s the use of algorithms to boost speech that are at issue — again, something that was, at most, in its embryonic stages at the time that Section 230 was enacted. Eric Goldman, a law professor at Santa Clara University, put it this way in an interview with The Washington Post: “The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions. The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”

I’ve suggested that one way to reform Section 230 might be to remove protections for any algorithmically boosted speech, which might actually be where we’re heading.

All of this comes at a time when the Supreme Court’s turn to the right has called its legitimacy into question. Two of the justices, Clarence Thomas and Neil Gorsuch, have even suggested that the libel protections afforded the press under the landmark Times v. Sullivan decision be overturned or scaled back. After 26 years, it may well be time for some changes to Section 230. But can we trust the Supremes to get it right? I guess we’ll just have to wait and see.

A bogus libel suit raises some interesting questions about the limits of Section 230

Local internet good guy Ron Newman has prevailed in a libel and copyright-infringement suit brought by a plaintiff who claimed Newman had effectively published libelous claims about him by moving the Davis Square Community forum from one hosting service to another.

Adam Gaffin of Universal Hub has all the details, which I’m not going to repeat here. The copyright claim is so ridiculous that I’m going to pass over it entirely. What I do find interesting in the suit, filed by Jonathan Monsarrat, is his allegation that Newman was not protected by Section 230 of the Communications Decency Act because, in switching platforms from LiveJournal to Dreamwidth, he had to copy all the content into the new forum.

Section 230 holds online publishers harmless for any content posted online by third parties, which protects everyone from a small community newspaper whose website has a comments section to tech giants like Facebook and Twitter. The question is whether Newman, by copying content from one platform to another, thereby became the publisher of that content, which could open him to a libel claim. The U.S. Court of Appeals for the First Circuit said no, and put it this way:

Newman copied the allegedly defamatory posts from LiveJournal to Dreamwidth verbatim. He did not encourage or compel the original authors to produce the libelous information. And, in the manner and form of republishing the posts, he neither offered nor implied any view of his own about the posts. In short, Newman did nothing to contribute to the posts’ unlawfulness beyond displaying them on the new Dreamwidth website.

There’s no question that the court ruled correctly, and I hope that Monsarrat, who has been using the legal system to harass Newman for years, brings his ill-considered crusade to an end.

Nevertheless, the idea that a publisher could lose Section 230 protections might be more broadly relevant. Several years ago I wrote for GBH News that Congress ought to consider ending such protections for content that is promoted by algorithms. If Facebook wants to take a hands-off approach to what its users publish and let everything scroll by in reverse chronological order, then 230 would apply. But Facebook’s practice of using algorithms to drive engagement, putting divisive and anger-inducing content in front of its users in order to keep them logged in and looking at advertising, ought not to be rewarded with legal protections.

The futility of Monsarrat’s argument aside, his case raises the question of how much publishers may intervene in third-party content before they lose Section 230 protections. Maybe legislation isn’t necessary. Maybe the courts could decide that Facebook and other platforms that use algorithms become legally responsible publishers of content when they promote it and make it more likely to be seen than it would otherwise.

And congratulations to Ron Newman, a friend to many of us in the local online community. I got to know Ron way back in 1996, when he stepped forward and volunteered to add links to the online version of a story I wrote for The Boston Phoenix on the Church of Scientology and its critics. Ron harks back to the early, idealistic days of the internet. The digital realm would be a better place if there were more people like him.

A tidal wave of documents exposes the depths of Facebook’s depravity

Photo (cc) 2008 by Craig ONeal

Previously published at GBH News.

How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.

Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”

And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.

No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.

If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.

In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:

“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.

“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”

LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.

In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.

But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.

That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more, and more extreme, versions of it.

And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.

Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?

Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.

In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.

Zuckerberg has shown no inclination to change. It’s long past time to force his hand.

Why Section 230 should be curbed for algorithmically driven platforms

Facebook whistleblower Frances Haugen testifies on Capitol Hill Tuesday.

Facebook is in the midst of what we can only hope will prove to be an existential crisis. So I was struck this morning when Boston Globe technology columnist Hiawatha Bray suggested a step that I proposed more than a year ago — eliminating Section 230 protections from social media platforms that use algorithms. Bray writes:

Maybe we should eliminate Section 230 protections for algorithmically powered social networks. For Internet sites that let readers find their own way around, the law would remain the same. But a Facebook or Twitter or YouTube or TikTok could be sued by private citizens — not the government — for postings that defame somebody or which threaten violence.

Here’s what I wrote for GBH News in June 2020:

One possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers.

I hope it’s an idea whose time has come.

Australian libel ruling shows what happens without Section 230 protections

Photo (cc) 2011 by Scott Calleja

I’m not familiar with the fine points of Australian libel law. But a decision this week by the High Court of Australia that publishers are liable for third-party comments posted on their Facebook pages demonstrates the power of Section 230 in the United States.

Section 230, part of the Communications Decency Act of 1996, does two things. First, it carves out an exception to the principle that publishers are legally responsible for all content, including advertisements and letters to the editor. Thanks to that exception, publishers are not liable for online comments in any way.

Second, in what is sometimes called the “Good Samaritan” provision, publishers may remove some third-party content without taking on liability for other content. For example, a lawyer might argue that a news organization that removed a libelous comment has taken on an editing role and could therefore be sued for other libelous comments that weren’t removed. Under Section 230, you can’t do that.

The Australian court’s ruling strikes me as a straightforward application of libel law in the absence of Section 230. Mike Cherney of The Wall Street Journal puts it this way:

The High Court of Australia determined that media companies, by creating a public Facebook page and posting content on that page, facilitated and encouraged comments from other users on those posts. That means the media companies should be considered publishers of the comments and are therefore responsible for any defamatory content that appears in them, according to a summary of the judgment from the court.

Over at the Nieman Journalism Lab, Joshua Benton has a markedly different take, arguing that the court is holding publishers responsible for content they did not publish. Benton writes:

Pandora’s box isn’t big enough to hold all the potential implications of that idea. That a news publisher should be held accountable for the journalism it publishes is obvious. That it should be held accountable for reader comments left on its own website (which it fully controls) is, at a minimum, debatable.

But that it should be held legally liable for the comments of every rando who visits its Facebook page — in other words, the speech of people it doesn’t control, on a platform it doesn’t control — is a big, big step.

I disagree. As I said, publishers are traditionally liable for every piece of content that appears under their name. Section 230 was a deviation from that tradition — a special carve-out providing publishers with immunity they wouldn’t otherwise have. If Benton is right, then we never needed 230. But of course we did. There’s a reason that the Electronic Frontier Foundation calls 230 “the most important law protecting internet speech.”

I also don’t see much difference between comments posted on a publisher’s website and comments posted on its Facebook page. A Facebook page is something you set up, add content to and manage. It’s not yours in the same way as your website, but it is part of your brand and under your control. If you should be liable for third-party content on your website, then it’s hardly a stretch to say that you should also be liable for third-party content on your Facebook page.

As the role of social media in our political discourse has become increasingly fraught, there have been a number of calls to abolish or reform 230. Abolition would mean the end of Facebook — and, for that matter, the comments sections on websites. (There are days when I’m tempted…) Personally, I’d look into abolishing 230 protections for sites that use algorithms to drive engagement and, thus, divisiveness. Such a change would make Facebook less profitable, but I think we could live with that.

Australia, meanwhile, has a dilemma on its hands. Maybe Parliament will pass a law equivalent to Section 230, but (I hope) with less sweeping protections. In any case, Australia should serve as an interesting test case to see what happens when toxic, often libelous third-party comments no longer get a free pass.

Thinking through a social-contract framework for reforming Section 230

Mary Anne Franks. Photo (cc) 2014 by the Internet Education Foundation.

The Lawfare podcasts are doing an excellent job of making sense of complicated issues at the intersection of media and technology. Last week I recommended a discussion of Australia’s new law mandating that Facebook and Google pay for news. Today I want to tell you about an interview with Mary Anne Franks, a law professor at the University of Miami, who is calling for the reform of Section 230 of the Communications Decency Act.

The host, Alan Rozenshtein, guides Franks through a paper she’s written titled “Section 230 and the Anti-Social Contract,” which, as he points out, is short and highly readable. Franks’ overriding argument is that Section 230 — which protects internet services, including platform companies such as Facebook and Twitter, from being sued for what their users post — is a way of entrenching the traditional white male power structure.

That might strike you as a bit much, and, as you’ll hear, Rozenshtein challenges her on it, pointing out that some members of disenfranchised communities have been adamant about retaining Section 230 in order to protect their free-speech rights. Nevertheless, her thesis is elegant, encompassing everyone from Thomas Jefferson to John Perry Barlow, the author of the 1996 document “A Declaration of the Independence of Cyberspace,” of which she takes a dim view. Franks writes:

Section 230 serves as an anti-social contract, replicating and perpetuating long-standing inequalities of gender, race, and class. The power that tech platforms have over individuals can be legitimized only by rejecting the fraudulent contract of Section 230 and instituting principles of consent, reciprocity, and collective responsibility.

So what is to be done? Franks pushes back on Rozenshtein’s suggestion that Section 230 reform has attracted bipartisan support. Republicans such as Donald Trump and Sen. Josh Hawley, she notes, are talking about changes that would force the platforms to publish content whether they want to or not — a nonstarter, since that would be a violation of the First Amendment.

Democrats, on the other hand, are seeking to find ways of limiting the Section 230 protections that the platform companies now enjoy without tearing down the entire law. Again, she writes:

Specifically, a true social contract would require tech platforms to offer transparent and comprehensive information about their products so that individuals can make informed choices about whether to use them. It would also require tech companies to be held accountable for foreseeable harms arising from the use of their platforms and services, instead of being granted preemptive immunity for ignoring or profiting from those harms. Online intermediaries must be held to similar standards as other private businesses, including duty of care and other collective responsibility principles.

Putting a little more meat on the bones, Franks adds that Section 230 should be reformed so as to “deny immunity to any online intermediary that exhibits deliberate indifference to harmful conduct.”

Today’s New York Times offers some details as to what that might look like:

One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.

Since its passage in 1996, Section 230 has been an incredible boon to internet publishers who open their gates to third-party content. They’re under no obligation to take down material that is libelous or threatening. Quite the contrary — they can make money from it.

This is hardly what the First Amendment envisioned, since publishers in other spheres are legally responsible for every bit of content they put before their audiences, up to and including advertisements and letters to the editor. The internet as we know it would be an impossibility if Section 230 didn’t exist in some form. But it may be time to rein it in, and Franks has put forth a valuable framework for how we might think about that.

Become a member of Media Nation today.