Previously published at WGBHNews.org.
To illustrate how useless the newly unveiled Facebook oversight board will be, consider the top 10 fake-news stories shared by Facebook users in 2019.
As reported by Business Insider, the list included such classics as “NYC Coroner who Declared Epstein death ‘Suicide’ worked for the Clinton foundation making 500k a year up until 2015,” “Omar [as in U.S. Rep. Ilhan Omar] Holding Secret Fundraisers with Islamic Groups Tied to Terror,” and “Pelosi Diverts $2.4 Billion From Social Security To Cover Impeachment Costs.”
None of these stories was even remotely true. Yet none of them would have been removed by the oversight board. You see, as Mathew Ingram pointed out in his Columbia Journalism Review newsletter, the 20-member board is charged only with deciding whether content that has already been taken down should be restored.
Now, it’s fair to acknowledge that Facebook CEO Mark Zuckerberg has an impossible task in bringing his Frankenstein’s monster under control. But that doesn’t mean any actual good is going to come of this exercise.
The board, which will eventually be expanded to 40, includes a number of distinguished people. Among them: Alan Rusbridger, the respected former editor of The Guardian, as well as international dignitaries and a Nobel Prize laureate. It has independent funding, Zuckerberg has agreed that its decisions will be binding, and eventually its purview may expand to removing false content.
But, fundamentally, this can’t work because Facebook was not designed to be controllable. In The New York Times, technology columnist Kara Swisher explained the problem succinctly. “Facebook’s problems are structural in nature,” she wrote. “It is evolving precisely as it was designed to, much the same way the coronavirus is doing what it is meant to do. And that becomes a problem when some of what flows through the Facebook system — let’s be fair in saying that much of it is entirely benign and anodyne — leads to dangerous and even deadly outcomes.”
It’s not really about the content. Stop me if you’ve heard this before, but what makes Facebook a threat to democracy is the way it serves up that content. Its algorithms — which are not well understood by anyone, even at Facebook — are aimed at keeping you engaged so that you stay on the site. And the most effective way to drive engagement is to show users content that makes them angry and upset.
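To make that mechanism concrete, here is a minimal sketch of what engagement-driven ranking looks like in principle. The fields and weights are invented for illustration; Facebook's actual system is proprietary and, as noted above, not well understood even internally.

```python
# A toy illustration of engagement-based ranking. The Post fields and
# scoring weights are hypothetical, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    angry_reactions: int
    comments: int
    shares: int
    likes: int

def engagement_score(post: Post) -> float:
    # Reactions that signal strong emotion count for more than a
    # plain "like" -- outrage keeps people scrolling.
    return (3.0 * post.angry_reactions
            + 2.0 * post.comments
            + 2.0 * post.shares
            + 1.0 * post.likes)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most provocative content rises to the top,
    # regardless of whether it is true.
    return sorted(posts, key=engagement_score, reverse=True)
```

Notice that nothing in that loop asks whether a post is accurate. A fabricated Pelosi story that draws ten thousand angry reactions outranks a sober correction every time.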
Are you a hardcore supporter of President Donald Trump? If so, you are likely to see memes suggesting that COVID-19 is some sort of Democratic plot to defeat him for re-election — as was the case with a recent semi-fake-news story reporting that hospitals are being paid to attribute illnesses and deaths to the coronavirus even when they’re not. Or links to the right-wing website PJ Media aimed at stirring up outrage over “weed, opioids, booze and ciggies” being given to homeless people in San Francisco who’ve been quarantined. If you are a Trump opponent, you can count on Occupy Democrats to pop up in your feed and keep you in a constant state of agitation.
Now, keep in mind that all of this — even the fake stuff — is free speech that’s protected by the First Amendment. And all of this, plus much worse, is readily available on the open web. What makes Facebook so pernicious is that it amplifies the most divisive speech so that you’ll stay longer and be exposed to more advertising.
What is the oversight board going to do about this? Nothing.
“The new Facebook review board will have no influence over anything that really matters in the world,” wrote longtime Facebook critic Siva Vaidhyanathan at Wired, adding: “The board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies…. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook.”
In fact, Facebook’s algorithms have already been trained to remove some speech or attach warning labels to it. In practice, though, such mechanized censorship is aggravatingly inept. Recently the seal of disapproval was slapped on “Mourning in America,” an ad by the Lincoln Project, a group of “Never Trump” Republicans, because the fact-checking organization PolitiFact had called it partly false. The Lincoln Project, though, claimed that PolitiFact was wrong.
I recently received a warning for posting a photo of Benito Mussolini as a humorous response to a picture of Trump. No doubt the algorithm was too dumb to understand that I was making a political comment and was not expressing my admiration for Il Duce. Others have told me they’ve gotten warnings for referring to trolls as trolls, or for calling unmasked protesters against COVID-19 restrictions “dumber than dirt.”
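The failure mode is easy to demonstrate. Consider a deliberately naive keyword filter, purely illustrative and nothing like any real classifier Facebook runs: it has no way to tell admiration from satire, or abuse from a description of abuse.

```python
# A deliberately naive moderation filter, for illustration only.
# It flags any post containing a blocked term, with no sense of
# whether the term appears in praise, satire, or criticism.
BLOCKED_TERMS = {"mussolini", "troll"}

def should_flag(post_text: str) -> bool:
    lowered = post_text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# Both of these get flagged, even though the first is political
# satire and the second is calling out abusive behavior:
print(should_flag("Trump poses just like Mussolini did"))  # True
print(should_flag("Ignore that troll in the comments"))    # True
```

Real classifiers are far more sophisticated than this, of course, but the warnings described above suggest they still stumble over context in much the same way.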
So what is Facebook good for? I find it useful for staying in touch with family and friends, for promoting my work and for discussing legitimate news stories. Beyond that, much of it is a cesspool of hate speech, fake news and propaganda.
If it were up to me, I’d ban the algorithm. Let people post what they want, but don’t let Facebook robotically weaponize divisive content in order to drive up its profit margins. Zuckerberg himself has said that he expects the government will eventually impose some regulations. Well, this is one way to regulate it without actually making judgments about what speech will be allowed and what speech will be banned.
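In concrete terms, “banning the algorithm” would mean something like replacing the engagement ranker with a plain chronological feed. A sketch, again with invented field names:

```python
# A chronological feed: the "ban the algorithm" alternative.
# Posts appear in the order they were written, with no scoring
# of outrage or engagement. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    created_at: datetime

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Newest first; nothing amplified, nothing muffled.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

The point is not that this exact code should run, but that such a rule regulates distribution rather than speech: everything stays up, and nothing gets robotically amplified.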
Meanwhile, I’ll watch with amusement as the oversight board attempts to wrestle this beast into submission. As Kara Swisher said, it “has all the hallmarks of the United Nations, except potentially much less effective.”
The real goal, I suspect, is to provide cover for Zuckerberg and make it appear that Facebook is doing something. In that respect, this initiative may seem harmless — unless it lulls us into complacency about more comprehensive steps that could be taken to reduce the harm that is being inflicted on all of us.