By Dan Kennedy • The press, politics, technology, culture and other passions

Tag: Section 230

Why it matters that The New York Times got it wrong on Section 230

The U.S. Supreme Court will rule on two cases involving Section 230. Photo (cc) 2006 by OZinOH.

Way back in 1996, when Section 230 was enacted into law, it was designed to protect all web publishers, most definitely including newspapers, from being sued over third-party content posted in their comment sections. It would be another eight years before Facebook was launched, and longer than that before algorithms would be used to boost certain types of content.

But that didn’t stop David McCabe of The New York Times — who, we are told, “has reported for five years on the policy debate over online speech” — from including this howler in a story about two cases regarding Section 230 that are being heard by the U.S. Supreme Court:

While newspapers and magazines can be sued over what they publish, Section 230 shields online platforms from lawsuits over most content posted by their users.

No. I have to assume that McCabe and maybe even his editors know better, and that this was their inept way of summarizing the issue for a general readership. But it perpetuates the harmful and wrong notion that this is only about Facebook, Twitter and other social media platforms. It’s not. Newspapers and magazines are liable for everything they publish except third-party online comments, which means that they are treated exactly the same as the giant platforms.

Though it is true that an early case testing Section 230 involved comments posted at AOL rather than on a news website, the principle that online publishers can’t be held liable for what third parties post on their platforms is as valuable to, oh, let’s say The New York Times as it is to Facebook.

That’s not to say 230 can’t be reformed and restricted; and, as I wrote recently, it probably should be. But it’s important that the public understand exactly what’s at stake.

Some common-sense ideas for reforming Section 230

Photo (cc) 2005 by mac jordan

The Elon Musk-ization of Twitter and the rise of a Republican House controlled by its most extreme right-wing elements probably doom any chance for intelligent reform of Section 230. That’s the 1996 law that holds harmless any online publisher for third-party content posted on its site, whether it be a libelous comment on a newspaper’s website (one of the original concerns) or dangerous disinformation about vaccines on Facebook.

It is worth repeating for those who don’t understand the issues: a publisher is legally responsible for every piece of content — articles, advertisements, photos, cartoons, letters to the editor and the like — with the sole exception of third-party material posted online. The idea behind 230 was that it would be impossible to vet everything and that the growth of online media depended on an updated legal structure.

Over the years, as various bad actors have come along and abused Section 230, a number of ideas have emerged for curtailing it without doing away with it entirely. Some time back, I proposed that social media platforms that use algorithms to boost certain types of content should not enjoy any 230 protections — an admittedly blunt instrument that would pretty much destroy the platforms’ business model. My logic was that increased engagement is associated with content that makes you angry and upset, and that the platforms profit mightily by keeping your eyes glued to their site.

Now a couple of academics, Robert Kozinets and Jon Pfeiffer, have come along with a more subtle approach to Section 230 reform. Their proposal was first published in The Conversation, though I saw it at Nieman Lab. They offer what I think is a pretty brilliant analogy as to why certain types of third-party content don’t deserve protection:

One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls — which contains not just porn but also misinformation and hate speech — the absolutist stance that they have total protection and total legal “immunity” is untenable.

Kozinets and Pfeiffer offer three ideas that are worth reading in full. In summary, though, here is what they are proposing.

  • A “verification trigger,” which takes effect when a platform profits from bad speech — the idea I tried to get at with my proposal for removing protections for algorithmic boosting. Returning to the restaurant analogy, Kozinets and Pfeiffer write, “When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.” They cite an extreme example: Elon Musk’s decision to sell blue-check verification, thus directly monetizing whatever falsehoods those with blue checks may choose to perpetrate.
  • “Transparent liability caps” that would “specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it.” Platforms that violate those standards would lose 230 protections. We can only imagine what this would look like once Marjorie Taylor Greene and Matt Gaetz get hold of it, but, well, it’s a thought.
  • A system of “neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform.” Kozinets and Pfeiffer call this “Twitter court,” and platforms that don’t play along could be sued for libel or invasion of privacy by aggrieved parties.

I wouldn’t expect any of these ideas to become law in the near or intermediate future. Currently, the law appears to be entirely up for grabs. For instance, last year a federal appeals court upheld a Texas law that forbids platforms from removing any third-party speech that’s based on viewpoint. At the same time, the U.S. Supreme Court is hearing a case that could result in 230 being overturned in its entirety. Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

Still, Kozinets and Pfeiffer have provided us with some useful ideas about how we might reform Section 230 in order to protect online publishers without giving them carte blanche to profit from their own bad behavior.

A quarter-century after its passage, Section 230 is up for grabs

A quarter-century after Congress decided to hold publishers harmless for third-party content posted on their websites, we are headed for a legal and constitutional showdown over Section 230, part of the Communications Decency Act of 1996.

Before the law was passed, publishers worried that if they removed some harmful content they might be held liable for failing to take down other content, which gave them a legal incentive to leave libel, obscenity, hate speech and misinformation in place. Section 230 solved that by including a so-called Good Samaritan provision that allowed publishers to pick and choose without incurring liability.

Back in those early days, of course, we weren’t dealing with behemoths like Facebook, YouTube and Twitter, which use algorithms to boost content that keeps their users engaged — which, in turn, usually means speech that makes them angry or upset. In the mid-1990s, the publishers that were seeking protection were generally newspapers that had opened up online comments and nascent online services like Prodigy and AOL. Publishers are fully liable for any content over which they have direct control, including news stories, advertisements and letters to the editor. Congress understood that the flood of content being posted online raised different issues.

But after Twitter booted Donald Trump off its service and Facebook suspended him for inciting violence during and after the attempted insurrection of Jan. 6, 2021, Trump-aligned Republicans began agitating against what they called censorship by the tech giants. The idea that private companies are even legally capable of engaging in censorship is something that can be disputed, but it’s gained some traction in legal circles, as we shall see.

Meanwhile, Democrats and liberals argued that the platforms weren’t acting aggressively enough to remove dangerous and harmful posts, especially those promoting disinformation around COVID-19 such as anti-masking and anti-vaccine propaganda.

A lot of this comes down to whether the platforms are common carriers or true publishers. Common carriers are legally forbidden from discriminating against any type of user or traffic. Providers of telephone service would be one example. Another example would be the broader internet of which the platforms are a part. Alex Jones was thoroughly deplatformed in recent years — you can’t find him on Facebook, Twitter or anywhere else. But you can find his infamous InfoWars site on the web, and, according to SimilarWeb, it received some 9.4 million visits in July of this year. You can’t kick Jones off the internet; at most, you can pressure his hosting service to drop him. But even if it did, he’d just move on to the next service, which, by the way, needn’t be based in the U.S.

True publishers, for their part, enjoy near-absolute leeway over what they choose to publish or not publish. A landmark case in this regard is Miami Herald v. Tornillo (1974), in which the Supreme Court ruled that a Florida law requiring newspapers to publish responses from political figures who’d been criticized was unconstitutional. Should platforms be treated as publishers? Certainly it seems ludicrous to hold them fully responsible for the millions of pieces of content that their users post on their sites. Yet the use of algorithms to promote some content in order to sell more advertising and earn more profits involves editorial discretion, even if those editors are robots. In that regard, they start to look more like publishers.

Maybe it’s time to move past the old categories altogether. In a recent appearance on WBUR Radio’s “On Point,” University of Minnesota law professor Alan Rozenshtein said that platforms have some qualities of common carriers and some qualities of publishers. What we really need, he said, is a new paradigm that recognizes we’re dealing with something unlike anything we’ve seen before.

Which brings me to two legal cases, both of which are hurtling toward a collision.

Recently the U.S. Court of Appeals for the 5th Circuit upheld a Texas law that, among other things, forbids platforms from removing any third-party speech that’s based on viewpoint. Many legal observers had believed the law would be decisively overturned since it interferes with the ability of private companies to conduct their business as they see fit, and to exercise their own First Amendment right to delete content they regard as harmful. But the court didn’t see it that way, with Judge Andrew Oldham writing: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” This is a view of the platforms as common carriers.

As Rozenshtein said, the case is almost certainly headed for the Supreme Court because it clashes with an opinion by the 11th Circuit, which overturned a similar law in Florida, and because it’s unimaginable that any part of the internet can be regulated on a state-by-state basis. Such regulations need to be hashed out by Congress and apply to all 50 states, Rozenshtein said.

Meanwhile, the Supreme Court has agreed to hear a case coming from the opposite direction. The case, brought by the family of a 23-year-old student who was killed in an ISIS attack in Paris in 2014, argues that YouTube, owned by Google, should be held liable for using algorithms to boost terrorist videos, thus helping to incite the attack. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” according to the lawsuit.

Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

The ISIS case is especially interesting because it’s the use of algorithms to boost speech that is at issue — again, something that was, at most, in its embryonic stages at the time that Section 230 was enacted. Eric Goldman, a law professor at Santa Clara University, put it this way in an interview with The Washington Post: “The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions. The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”

I’ve suggested that one way to reform Section 230 might be to remove protections for any algorithmically boosted speech, which might actually be where we’re heading.

All of this comes at a time when the Supreme Court’s turn to the right has called its legitimacy into question. Two of the justices, Clarence Thomas and Neil Gorsuch, have even suggested that the libel protections afforded the press under the landmark Times v. Sullivan decision be overturned or scaled back. After 26 years, it may well be time for some changes to Section 230. But can we trust the Supremes to get it right? I guess we’ll just have to wait and see.

A bogus libel suit raises some interesting questions about the limits of Section 230

Local internet good guy Ron Newman has prevailed in a libel and copyright-infringement suit brought by a plaintiff who claimed Newman had effectively published libelous claims about him by moving the Davis Square Community forum from one hosting service to another.

Adam Gaffin of Universal Hub has all the details, which I’m not going to repeat here. The copyright claim is so ridiculous that I’m going to pass over it entirely. What I do find interesting in the suit, filed by Jonathan Monsarrat, is his allegation that Newman was not protected by Section 230 of the Communications Decency Act because, in switching platforms from LiveJournal to Dreamwidth, he had to copy all the content into the new forum.

Section 230 holds online publishers harmless for any content posted online by third parties, which protects everyone from a small community newspaper whose website has a comments section to tech giants like Facebook and Twitter. The question is whether Newman, by copying content from one platform to another, thereby became the publisher of that content, which could open him to a libel claim. The U.S. Court of Appeals for the First Circuit said no, and put it this way:

Newman copied the allegedly defamatory posts from LiveJournal to Dreamwidth verbatim. He did not encourage or compel the original authors to produce the libelous information. And, in the manner and form of republishing the posts, he neither offered nor implied any view of his own about the posts. In short, Newman did nothing to contribute to the posts’ unlawfulness beyond displaying them on the new Dreamwidth website.

There’s no question that the court ruled correctly, and I hope that Monsarrat, who has been using the legal system to harass Newman for years, brings his ill-considered crusade to an end.

Nevertheless, the idea that a publisher could lose Section 230 protections might be more broadly relevant. Several years ago I wrote for GBH News that Congress ought to consider ending such protections for content that is promoted by algorithms. If Facebook wants to take a hands-off approach to what its users publish and let everything scroll by in reverse chronological order, then 230 would apply. But Facebook’s practice of using algorithms to drive engagement, putting divisive and anger-inducing content in front of its users in order to keep them logged in and looking at advertising, ought not to be rewarded with legal protections.

The futility of Monsarrat’s argument aside, his case raises the question of how much publishers may intervene in third-party content before they lose Section 230 protections. Maybe legislation isn’t necessary. Maybe the courts could decide that Facebook and other platforms that use algorithms become legally responsible publishers of content when they promote it and make it more likely to be seen than it would otherwise.

And congratulations to Ron Newman, a friend to many of us in the local online community. I got to know Ron way back in 1996, when he stepped forward and volunteered to add links to the online version of a story I wrote for The Boston Phoenix on the Church of Scientology and its critics. Ron harks back to the early, idealistic days of the internet. The digital realm would be a better place if there were more people like him.

A tidal wave of documents exposes the depths of Facebook’s depravity

Photo (cc) 2008 by Craig ONeal

Previously published at GBH News.

How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.

Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”

And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.

No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.

If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.

In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:

“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.

“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”

LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.

In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.

But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.

That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more — and more extreme — versions of it.

And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.

Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?

Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.

In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.

Zuckerberg has shown no inclination to change. It’s long past time to force his hand.

Why Section 230 should be curbed for algorithmically driven platforms

Facebook whistleblower Frances Haugen testifies on Capitol Hill Tuesday.

Facebook is in the midst of what we can only hope will prove to be an existential crisis. So I was struck this morning when Boston Globe technology columnist Hiawatha Bray suggested a step that I proposed more than a year ago — eliminating Section 230 protections from social media platforms that use algorithms. Bray writes:

Maybe we should eliminate Section 230 protections for algorithmically powered social networks. For Internet sites that let readers find their own way around, the law would remain the same. But a Facebook or Twitter or YouTube or TikTok could be sued by private citizens — not the government — for postings that defame somebody or which threaten violence.

Here’s what I wrote for GBH News in June 2020:

One possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers.

I hope it’s an idea whose time has come.

Australian libel ruling shows what happens without Section 230 protections

Photo (cc) 2011 by Scott Calleja

I’m not familiar with the fine points of Australian libel law. But a decision this week by the High Court of Australia that publishers are liable for third-party comments posted on their Facebook pages demonstrates the power of Section 230 in the United States.

Section 230, part of the Communications Decency Act of 1996, does two things. First, it carves out an exception to the principle that publishers are legally responsible for all content, including advertisements and letters to the editor. Under 230, by contrast, publishers are not liable in any way for comments posted online by third parties.

Second, in what is sometimes called the “Good Samaritan” provision, publishers may remove some third-party content without taking on liability for other content. For example, a lawyer might argue that a news organization that removed a libelous comment has taken on an editing role and could therefore be sued for other libelous comments that weren’t removed. Under Section 230, you can’t do that.

The Australian court’s ruling strikes me as a straightforward application of libel law in the absence of Section 230. Mike Cherney of The Wall Street Journal puts it this way:

The High Court of Australia determined that media companies, by creating a public Facebook page and posting content on that page, facilitated and encouraged comments from other users on those posts. That means the media companies should be considered publishers of the comments and are therefore responsible for any defamatory content that appears in them, according to a summary of the judgment from the court.

Over at the Nieman Journalism Lab, Joshua Benton has a markedly different take, arguing that the court is holding publishers responsible for content they did not publish. Benton writes:

Pandora’s box isn’t big enough to hold all the potential implications of that idea. That a news publisher should be held accountable for the journalism it publishes is obvious. That it should be held accountable for reader comments left on its own website (which it fully controls) is, at a minimum, debatable.

But that it should be held legally liable for the comments of every rando who visits its Facebook page — in other words, the speech of people it doesn’t control, on a platform it doesn’t control — is a big, big step.

I disagree. As I said, publishers are traditionally liable for every piece of content that appears under their name. Section 230 was a deviation from that tradition — a special carve-out providing publishers with immunity they wouldn’t otherwise have. If Benton is right, then we never needed 230. But of course we did. There’s a reason that the Electronic Frontier Foundation calls 230 “the most important law protecting internet speech.”

I also don’t see much difference between comments posted on a publisher’s website and comments posted on its Facebook page. A Facebook page is something you set up, add content to and manage. It’s not yours in the same way as your website, but it is part of your brand and under your control. If you should be liable for third-party content on your website, then it’s hardly a stretch to say that you should also be liable for third-party content on your Facebook page.

As the role of social media in our political discourse has become increasingly fraught, there have been a number of calls to abolish or reform 230. Abolition would mean the end of Facebook — and, for that matter, the comments sections on websites. (There are days when I’m tempted…) Personally, I’d look into abolishing 230 protections for sites that use algorithms to drive engagement and, thus, divisiveness. Such a change would make Facebook less profitable, but I think we could live with that.

Australia, meanwhile, has a dilemma on its hands. Maybe Parliament will pass a law equivalent to Section 230, but (I hope) with less sweeping protections. In any case, Australia should serve as an interesting test case to see what happens when toxic, often libelous third-party comments no longer get a free pass.

Thinking through a social-contract framework for reforming Section 230

Mary Anne Franks. Photo (cc) 2014 by the Internet Education Foundation.

The Lawfare podcasts are doing an excellent job of making sense of complicated media-technical issues. Last week I recommended a discussion of Australia’s new law mandating that Facebook and Google pay for news. Today I want to tell you about an interview with Mary Anne Franks, a law professor at the University of Miami, who is calling for the reform of Section 230 of the Communications Decency Act.

The host, Alan Rozenshtein, guides Franks through a paper she’s written titled “Section 230 and the Anti-Social Contract,” which, as he points out, is short and highly readable. Franks’ overriding argument is that Section 230 — which protects internet services, including platform companies such as Facebook and Twitter, from being sued for what their users post — is a way of entrenching the traditional white male power structure.

That might strike you as a bit much, and, as you’ll hear, Rozenshtein challenges her on it, pointing out that some members of disenfranchised communities have been adamant about retaining Section 230 in order to protect their free-speech rights. Nevertheless, her thesis is elegant, encompassing everyone from Thomas Jefferson to John Perry Barlow, the author of the 1996 document “A Declaration of the Independence of Cyberspace,” of which she takes a dim view. Franks writes:

Section 230 serves as an anti-social contract, replicating and perpetuating long-standing inequalities of gender, race, and class. The power that tech platforms have over individuals can be legitimized only by rejecting the fraudulent contract of Section 230 and instituting principles of consent, reciprocity, and collective responsibility.

So what is to be done? Franks pushes back on Rozenshtein’s suggestion that Section 230 reform has attracted bipartisan support. Republicans such as Donald Trump and Sen. Josh Hawley, she notes, are talking about changes that would force the platforms to publish content whether they want to or not — a nonstarter, since that would be a violation of the First Amendment.

Democrats, on the other hand, are seeking to find ways of limiting the Section 230 protections that the platform companies now enjoy without tearing down the entire law. Again, she writes:

Specifically, a true social contract would require tech platforms to offer transparent and comprehensive information about their products so that individuals can make informed choices about whether to use them. It would also require tech companies to be held accountable for foreseeable harms arising from the use of their platforms and services, instead of being granted preemptive immunity for ignoring or profiting from those harms. Online intermediaries must be held to similar standards as other private businesses, including duty of care and other collective responsibility principles.

Putting a little more meat on the bones, Franks adds that Section 230 should be reformed so as to “deny immunity to any online intermediary that exhibits deliberate indifference to harmful conduct.”

Today’s New York Times offers some details as to what that might look like:

One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.

Since its passage in 1996, Section 230 has been an incredible boon to internet publishers who open their gates to third-party content. They’re under no obligation to take down material that is libelous or threatening. Quite the contrary — they can make money from it.

This is hardly what the First Amendment envisioned, since publishers in other spheres are legally responsible for every bit of content they put before their audiences, up to and including advertisements and letters to the editor. The internet as we know it would be an impossibility if Section 230 didn’t exist in some form. But it may be time to rein it in, and Franks has put forth a valuable framework for how we might think about that.


We can leverage Section 230 to limit algorithmically driven disinformation

Mark Zuckerberg. Photo (cc) 2012 by JD Lasica.

Josh Bernoff responds.

How can we limit the damage that social media — and especially Facebook — are doing to democracy? We all know what the problem is. The platforms make money by keeping you logged on and engaged. And they keep you engaged by feeding you content that their algorithms have determined makes you angry and upset. How do we break that chain?

Josh Bernoff, writing in The Boston Globe, offers an idea similar to one I suggested a few months ago: leverage Section 230 of the Telecommunications Act of 1996, which holds digital publishers harmless for any content posted by third-party users. Under Section 230, publishers can’t be sued if a commenter libels someone, which amounts to a huge benefit not available in other contexts. For instance, a newspaper publisher is liable for every piece of content that it runs, from news articles to ads and letters to the editor — but not for comments posted on the newspaper’s website.

Bernoff suggests what strikes me as a rather convoluted system that would require Facebook (that is, if Mark Zuckerberg wants to continue benefiting from Section 230) to run ads calling attention to ideologically diverse content. Using the same algorithms that got us into trouble in the first place, Facebook would serve up conservative content to liberal users and liberal content to conservative users.

There are, I think, some problems with Bernoff’s proposal, starting with this: He writes that Facebook and the other platforms “would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals.”

But that elides the reality of what has happened to political discourse over the past several decades, a shift accelerated by the Trump era. Liberals and Democrats haven’t changed all that much. Conservatives and Republicans, on the other hand, have become deeply radical, supporting the overturning of a landslide presidential election and espousing dangerous conspiracy theories about COVID-19. Given that, what is a “mainstream conservative news site”?

Bernoff goes so far as to suggest that MSNBC and Fox News are liberal and conservative equivalents. In their prime-time programming, though, the liberal MSNBC — despite its annoyingly doctrinaire, hectoring tone — remains tethered to reality, whereas Fox’s right-wing prime-time hosts are moving ever closer to QAnon territory. The latest is Tucker Carlson’s anti-vax outburst. Who knew that he would think killing his viewers was a good business strategy?

Moving away from the fish-in-a-barrel examples of MSNBC and Fox, what about The New York Times and The Wall Street Journal? Well, the Times’ editorial pages are liberal and the Journal’s are conservative. But if we’re talking about news coverage, they’re really not all that different. So that doesn’t work, either.

I’m not sure that my alternative, which I wrote about for GBH News back in June, is workable, but it does have the advantage of being simple: eliminate Section 230 protections for any platform that uses algorithms to boost engagement. Facebook would have to comply; if it didn’t, it would be sued into oblivion in a matter of weeks or months. As I wrote at the time:

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Unlike Bernoff’s proposal, mine wouldn’t attempt to regulate speech by identifying the news sites that are worthy of putting in front of users so that they’ll be exposed to views they disagree with. I would let it rip as long as artificial intelligence isn’t being used to boost the most harmful content.

Needless to say, Zuckerberg and his fellow Big Tech executives can be expected to fight like crazed weasels in order to keep using algorithms, which are incredibly valuable to their bottom line. Just this week The New York Times reported that Facebook temporarily tweaked its algorithms to emphasize quality news in the runup to the election and its aftermath — but it has now quietly reverted to boosting divisive slime, because that’s what keeps the ad money rolling in.

Donald Trump has been crusading against 230 during the final days of his presidency, even though he doesn’t seem to understand that he would be permanently banned from Twitter and every other platform — even Parler — if they had to worry about being held legally responsible for what he posts.

Still, that’s no reason not to do something about Section 230, which was approved in the earliest days of the commercial web and has warped digital discourse in ways we couldn’t have imagined back then. Hate speech and disinformation driven by algorithms have become the bane of our time. Why not modify 230 in order to do something about it?

Comments are open. Please include your full name, first and last, and speak with a civil tongue.

We shouldn’t let Trump’s Twitter tantrum stop us from taking a new look at online speech protections

Photo (cc) 2019 by Trending Topics 2019

Previously published at WGBHNews.org.

It’s probably not a good idea for us to talk about messing around with free speech on the internet at a moment when the reckless authoritarian in the White House is threatening to dismantle safeguards that have been in place for nearly a quarter of a century.

On the other hand, maybe there’s no time like right now. President Donald Trump is not wrong in claiming there are problems with Section 230 of the Telecommunications Act of 1996. Of course, he’s wrong about the particulars — that is, he’s wrong about its purpose, and he’s wrong about what would happen if it were repealed. But that shouldn’t stop us from thinking about the harmful effects of 230 and what we might do to lessen them.

Simply put, Section 230 says that online publishers can’t be held legally responsible for most third-party content. In just the past week Trump took to Twitter to claim falsely that MSNBC host Joe Scarborough had murdered a woman who worked in his office and to suggest that violent protesters should be shot in the street. At least in theory, Trump, but not Twitter, could be held liable for both of those tweets — the first for libeling Scarborough, the second for inciting violence.

Ironically, without 230, Twitter no doubt would have taken Trump’s tweets down immediately rather than merely slapping warning labels on them, the action that provoked his childish rage. It’s only because of 230 that Trump is able to lie freely to his 24 million (not 80 million, as is often reported) followers without Twitter executives having to worry about getting sued.

As someone who’s been around since the earliest days of online culture, I have some insight into why we needed Section 230, and what’s gone wrong in the intervening years.

Back in the 1990s, the challenge that 230 was meant to address had as much to do with news websites as it did with early online services such as Prodigy and AOL. Print publications such as newspapers are legally responsible for everything they publish, including letters to the editor and advertisements. After all, the landmark 1964 libel case of New York Times v. Sullivan involved an ad, not the paper’s journalism.

But, in the digital world, holding publications strictly liable for their content proved to be impractical. Even in the era of dial-up modems, online comments poured in too rapidly to be monitored. Publishers worried that if they deleted some of the worst comments on their sites, that would mean they would be seen as exercising editorial control and were thus legally responsible for all comments.

The far-from-perfect solution: take a hands-off approach and not delete anything, not even the worst of the worst. At least to some extent, Section 230 solved that dilemma. Not only did it immunize publishers for third-party content, but it also contained what is called a “Good Samaritan” provision — publishers were now free to remove some bad content without making themselves liable for other, equally bad content that they might have missed.

Section 230 created an uneasy balance. Users could comment freely, which seemed to many of us in those more optimistic times like a step forward in allowing news consumers to be part of the conversation. (That’s where Jay Rosen’s phrase “the people formerly known as the audience” comes from.) But early hopes faded to pessimism and cynicism once we saw how terrible most of those comments were. So we ignored them.

That balance was disrupted by the rise of the platforms, especially Facebook and Twitter. And that’s because they had an incentive to keep users glued to their sites for as long as possible. By using computer algorithms to feed users more of what keeps them engaged, the platforms are able to show more advertising to them. And the way you keep them engaged is by showing them content that makes them angry and agitated, regardless of its truthfulness. The technologist Jaron Lanier, in his 2018 book “Ten Arguments for Deleting Your Social Media Accounts Right Now,” calls this “continuous behavior modification on a titanic scale.”
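Because the contrast between the chronological feeds of Section 230’s early days and today’s engagement-ranked feeds recurs throughout these posts, here is a minimal, purely illustrative sketch of the difference in Python. Everything in it is hypothetical: the post records and the predicted_engagement score are stand-ins for whatever signals a real platform’s ranking models actually use. The point is only that ordering by predicted engagement, rather than by recency, reflects the platform’s own judgment about what to amplify.

```python
from datetime import datetime, timezone

# Hypothetical post records. "predicted_engagement" is a stand-in for whatever
# signals a platform's models might use (clicks, comments, watch time, etc.).
posts = [
    {"id": 1, "text": "Vacation photos",    "posted": datetime(2020, 6, 1, tzinfo=timezone.utc),  "predicted_engagement": 0.10},
    {"id": 2, "text": "Outrage-bait rumor", "posted": datetime(2020, 5, 30, tzinfo=timezone.utc), "predicted_engagement": 0.95},
    {"id": 3, "text": "Local news link",    "posted": datetime(2020, 5, 31, tzinfo=timezone.utc), "predicted_engagement": 0.40},
]

def chronological_feed(posts):
    """The pre-algorithm model: newest third-party content first, no ranking judgment."""
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

def engagement_ranked_feed(posts):
    """The model at issue: content ordered by what is predicted to keep users engaged."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

if __name__ == "__main__":
    print([p["id"] for p in chronological_feed(posts)])      # [1, 3, 2] — newest first
    print([p["id"] for p in engagement_ranked_feed(posts)])  # [2, 3, 1] — outrage rises to the top
```

The argument in this piece, in effect, is that only something like the first function deserves the blanket protection Congress had in mind in 1996.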

Which brings us to the tricky question of whether government should do something to remove these perverse incentives.

Earlier this year, Heidi Legg, then at Harvard’s Shorenstein Center on Media, Politics and Public Policy, published an op-ed in The Boston Globe arguing that Section 230 should be modified so that the platforms are held to the same legal standards as other publishers. “We should not allow the continued free-wheeling and profiteering of this attention economy to erode democracy through hyper-polarization,” she wrote.

Legg told me she hoped her piece would spark a conversation about what Section 230 reform might look like. “I do not have a solution,” she said in a text exchange on (what else?) Twitter, “but I have ideas and I am urging the nation and Congress to get ahead of this.”

Well, I’ve been thinking about it, too. And one possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers. Dorsey would quickly find that his tentative half-steps are insufficient — and Zuckerberg would have to abandon his smug refusal to do anything about Trump’s vile comments.

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Let me concede that I don’t know how practical my idea would be. Like Legg, I offer it out of a sense that we need to have a conversation about the harm that social media are doing to our democracy. I’m a staunch believer in the First Amendment, so I think it’s vital to address that harm in a way that doesn’t violate anyone’s free-speech rights. Ending special regulatory favors for certain types of toxic corporate behavior seems like one way of doing that with a relatively light touch.

And if that meant Trump could no longer use Twitter as a megaphone for hate speech, wild conspiracy theories and outright disinformation, well, so much the better.


