We can leverage Section 230 to limit algorithmically driven disinformation

Mark Zuckerberg. Photo (cc) 2012 by JD Lasica.

Josh Bernoff responds.

How can we limit the damage that social media — and especially Facebook — are doing to democracy? We all know what the problem is. The platforms make money by keeping you logged on and engaged. And they keep you engaged by feeding you content that their algorithms have determined makes you angry and upset. How do we break that chain?

Josh Bernoff, writing in The Boston Globe, offers an idea similar to one I suggested a few months ago: leverage Section 230 of the Telecommunications Act of 1996, which holds digital publishers harmless for any content posted by third-party users. Under Section 230, publishers can’t be sued if a commenter libels someone, which amounts to a huge benefit not available in other contexts. For instance, a newspaper publisher is liable for every piece of content that it runs, from news articles to ads and letters to the editor — but not for comments posted on the newspaper’s website.

Bernoff suggests what strikes me as a rather convoluted system that would require Facebook (that is, if Mark Zuckerberg wants to continue benefiting from Section 230) to run ads calling attention to ideologically diverse content. Using the same algorithms that got us into trouble in the first place, Facebook would serve up conservative content to liberal users and liberal content to conservative users.

There are, I think, some problems with Bernoff’s proposal, starting with this: He writes that Facebook and the other platforms “would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals.”

But that elides the reality of what has happened to political discourse over the past several decades, a shift accelerated by the Trump era. Liberals and Democrats haven’t changed all that much. Conservatives and Republicans, on the other hand, have become deeply radical, supporting the overturning of a landslide presidential election and espousing dangerous conspiracy theories about COVID-19. Given that, what is a “mainstream conservative news site”?

Bernoff goes so far as to suggest that MSNBC and Fox News are liberal and conservative equivalents. In their prime-time programming, though, the liberal MSNBC — despite its annoyingly doctrinaire, hectoring tone — remains tethered to reality, whereas Fox’s right-wing prime-time hosts are moving ever closer to QAnon territory. The latest is Tucker Carlson’s anti-vax outburst. Who knew that he would think killing his viewers was a good business strategy?

Moving away from the fish-in-a-barrel examples of MSNBC and Fox, what about The New York Times and The Wall Street Journal? Well, the Times’ editorial pages are liberal and the Journal’s are conservative. But if we’re talking about news coverage, they’re really not all that different. So that doesn’t work, either.

I’m not sure that my alternative, which I wrote about for GBH News back in June, is workable, but it does have the advantage of being simple: eliminate Section 230 protections for any platform that uses algorithms to boost engagement. Facebook would have to comply; if it didn’t, it would be sued into oblivion in a matter of weeks or months. As I wrote at the time:

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Unlike Bernoff’s proposal, mine wouldn’t attempt to regulate speech by identifying the news sites that are worthy of putting in front of users so that they’ll be exposed to views they disagree with. I would let it rip as long as artificial intelligence isn’t being used to boost the most harmful content.

Needless to say, Zuckerberg and his fellow Big Tech executives can be expected to fight like crazed weasels in order to keep using algorithms, which are incredibly valuable to their bottom line. Just this week The New York Times reported that Facebook temporarily tweaked its algorithms to emphasize quality news in the run-up to the election and its aftermath — but it has now quietly reverted to boosting divisive slime, because that’s what keeps the ad money rolling in.

Donald Trump has been crusading against 230 during the final days of his presidency, even though he doesn’t seem to understand that he would be permanently banned from Twitter and every other platform — even Parler — if they had to worry about being held legally responsible for what he posts.

Still, that’s no reason not to do something about Section 230, which was approved in the earliest days of the commercial web and has warped digital discourse in ways we couldn’t have imagined back then. Hate speech and disinformation driven by algorithms have become the bane of our time. Why not modify 230 in order to do something about it?


We shouldn’t let Trump’s Twitter tantrum stop us from taking a new look at online speech protections

Photo (cc) 2019 by Trending Topics 2019

Previously published at WGBHNews.org.

It’s probably not a good idea for us to talk about messing around with free speech on the internet at a moment when the reckless authoritarian in the White House is threatening to dismantle safeguards that have been in place for nearly a quarter of a century.

On the other hand, maybe there’s no time like right now. President Donald Trump is not wrong in claiming there are problems with Section 230 of the Telecommunications Act of 1996. Of course, he’s wrong about the particulars — that is, he’s wrong about its purpose, and he’s wrong about what would happen if it were repealed. But that shouldn’t stop us from thinking about the harmful effects of 230 and what we might do to lessen them.

Simply put, Section 230 says that online publishers can’t be held legally responsible for most third-party content. In just the past week, Trump took to Twitter to falsely claim that MSNBC host Joe Scarborough had murdered a woman who worked in his office, and to suggest that violent protesters should be shot in the street. At least in theory, Trump, but not Twitter, could be held liable for both of those tweets — the first for libeling Scarborough, the second for inciting violence.

Ironically, without 230, Twitter no doubt would have taken Trump’s tweets down immediately rather than merely slapping warning labels on them, the action that provoked his childish rage. It’s only because of 230 that Trump is able to lie freely to his 24 million (not 80 million, as is often reported) followers without Twitter executives having to worry about getting sued.

As someone who’s been around since the earliest days of online culture, I have some insight into why we needed Section 230, and what’s gone wrong in the intervening years.

Back in the 1990s, the challenge that 230 was meant to address had as much to do with news websites as it did with early online services such as Prodigy and AOL. Print publications such as newspapers are legally responsible for everything they publish, including letters to the editor and advertisements. After all, the landmark 1964 libel case of New York Times v. Sullivan involved an ad, not the paper’s journalism.

But, in the digital world, holding publications strictly liable for their content proved to be impractical. Even in the era of dial-up modems, online comments poured in too rapidly to be monitored. Publishers worried that if they deleted some of the worst comments on their sites, that would mean they would be seen as exercising editorial control and were thus legally responsible for all comments.

The far-from-perfect solution: take a hands-off approach and not delete anything, not even the worst of the worst. At least to some extent, Section 230 solved that dilemma. Not only did it immunize publishers for third-party content, but it also contained what is called a “Good Samaritan” provision — publishers were now free to remove some bad content without making themselves liable for other, equally bad content that they might have missed.

Section 230 created an uneasy balance. Users could comment freely, which seemed to many of us in those more optimistic times like a step forward in allowing news consumers to be part of the conversation. (That’s where Jay Rosen’s phrase “the people formerly known as the audience” comes from.) But early hopes faded to pessimism and cynicism once we saw how terrible most of those comments were. So we ignored them.

That balance was disrupted by the rise of the platforms, especially Facebook and Twitter. And that’s because they had an incentive to keep users glued to their sites for as long as possible. By using computer algorithms to feed users more of what keeps them engaged, the platforms are able to show more advertising to them. And the way you keep them engaged is by showing them content that makes them angry and agitated, regardless of its truthfulness. The technologist Jaron Lanier, in his 2018 book “Ten Arguments for Deleting Your Social Media Accounts Right Now,” calls this “continuous behavior modification on a titanic scale.”

Which brings us to the tricky question of whether government should do something to remove these perverse incentives.

Earlier this year, Heidi Legg, then at Harvard’s Shorenstein Center on Media, Politics and Public Policy, published an op-ed in The Boston Globe arguing that Section 230 should be modified so that the platforms are held to the same legal standards as other publishers. “We should not allow the continued free-wheeling and profiteering of this attention economy to erode democracy through hyper-polarization,” she wrote.

Legg told me she hoped her piece would spark a conversation about what Section 230 reform might look like. “I do not have a solution,” she said in a text exchange on (what else?) Twitter, “but I have ideas and I am urging the nation and Congress to get ahead of this.”

Well, I’ve been thinking about it, too. And one possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.
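To make that distinction concrete, here is a toy sketch of the difference between a chronological feed and an engagement-ranked one. This is purely my own illustration, assuming a hypothetical Post record and a made-up predicted_engagement score; it is not any platform’s actual code.

```python
# A toy illustration (not any platform's actual code) of the distinction the
# proposal turns on: a chronological feed simply sorts by timestamp, while an
# engagement-driven feed re-ranks posts by a predicted-engagement score.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    timestamp: float             # seconds since epoch
    predicted_engagement: float  # hypothetical model output, 0.0 to 1.0


def chronological_feed(posts: list[Post]) -> list[Post]:
    """Newest first: how third-party content flowed when Section 230 was written."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    """Highest predicted engagement first: the behavior the proposal would tie to liability."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

The policy question is simply which of those two orderings a platform chooses, and whether choosing the second one should cost it its Section 230 shield.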

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers. Dorsey would quickly find that his tentative half-steps are insufficient — and Zuckerberg would have to abandon his smug refusal to do anything about Trump’s vile comments.

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Let me concede that I don’t know how practical my idea would be. Like Legg, I offer it out of a sense that we need to have a conversation about the harm that social media are doing to our democracy. I’m a staunch believer in the First Amendment, so I think it’s vital to address that harm in a way that doesn’t violate anyone’s free-speech rights. Ending special regulatory favors for certain types of toxic corporate behavior seems like one way of doing that with a relatively light touch.

And if that meant Trump could no longer use Twitter as a megaphone for hate speech, wild conspiracy theories and outright disinformation, well, so much the better.


Does Twitter need Trump? Not as much as you might think.

Chart: Number of monthly active Twitter users worldwide from the first quarter of 2010 to the first quarter of 2019 (in millions). Source: Statista.

You might think that Twitter would have a financial incentive to cave in to President Trump’s incoherent, unconstitutional threats over the platform’s decision to label some of his false tweets as, you know, false. In fact, Trump’s presence on Twitter is not as big a deal to the company as you might think.

First, we often hear that Trump has 80 million followers. But is that really the case? According to analytics from the Fake Followers Audit, 70.2% of his followers are fake, with “fake” defined as “accounts that are unreachable and will not see the account’s tweets (either because they’re spam, bots, propaganda, etc. or because they’re no longer active on Twitter).”

That’s not especially unusual among high-profile tweeters. For instance, 43% of former President Barack Obama’s 118 million followers are fake. But it’s important to understand that Trump has only about 24 million real followers, not 80 million. That’s a big difference.
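For what it’s worth, the arithmetic behind that 24 million figure is straightforward. Here is a minimal sketch using the numbers cited above; the function name is mine, purely for illustration.

```python
# Back-of-the-envelope check of the follower math cited above.
# The 80 million total and 70.2% fake share come from the post itself.

def estimated_real_followers(total_followers: int, fake_share: float) -> int:
    """Return the estimated number of real (reachable) followers."""
    return round(total_followers * (1 - fake_share))

print(estimated_real_followers(80_000_000, 0.702))   # ~23.8 million, i.e. roughly 24 million
print(estimated_real_followers(118_000_000, 0.43))   # Obama: roughly 67 million real followers
```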

Even more important, Trump’s presence on Twitter has not had a huge effect on its total audience. According to Statista, the number of worldwide active monthly users hovered between a low of 302 million and a high of 336 million between the first quarter of 2015 and the first quarter of 2019. (Zephoria reports that Twitter hasn’t released similar numbers since then.)

The bottom line is that Twitter chief executive Jack Dorsey could probably afford to throw Trump off the platform for repeatedly violating its terms of service. Still, he probably wouldn’t want to risk the outrage that would ensue from MAGA Country if Trump lost his favorite outlet for smearing the memory of a dead woman with his horrendous lies about MSNBC host Joe Scarborough.
