From the Department of Unintended Consequences

The Washington Post reports:

Right-wing groups on chat apps like Telegram are swelling with new members after Parler disappeared and a backlash against Facebook and Twitter, making it harder for law enforcement to track where the next attack could come from….

Trump supporters looking for communities of like-minded people will likely find Telegram to be more extreme than the Facebook groups and Twitter feeds they are used to, said Amarasingam. [Amarnath Amarasingam is described as a researcher who specializes in terrorism and extremism.]

“It’s not simply pro-Trump content, mildly complaining about election fraud. Instead, it’s openly anti-Semitic, violent, bomb making materials and so on. People coming to Telegram may be in for a surprise in that sense,” Amarasingam said.

Entirely predictable, needless to say.

Amazon’s move against Parler is worrisome in a way that Apple’s and Google’s are not

It’s one thing for Apple and Google to throw the right-wing Twitter competitor Parler out of their app stores. It’s another thing altogether for Amazon Web Services to deplatform Parler. Yet that’s what will happen by midnight today, according to BuzzFeed.

Parler deserves no sympathy, obviously. The service proudly takes even less responsibility for the garbage its members post than Twitter and Facebook do, and it was one of the places where planning for the insurrectionist riots took place. But Amazon’s actions raise some important free-speech concerns.

Think of the internet as a pyramid. Twitter and Facebook, as well as Google’s and Apple’s app stores, are at the top of that pyramid — they are commercial enterprises that may govern themselves as they choose. Donald Trump is far from the first person to be thrown off social networks, and Parler isn’t even remotely the first app to be punished.

But Amazon Web Services, or AWS, exists somewhere below the top of the pyramid. It is foundational; its servers are the floor upon which other things are built. AWS isn’t the bottom layer of the pyramid — it is, in its own way, a commercial enterprise. But it has a responsibility to respect the free-speech rights of its clients that Twitter and Facebook do not.

Yet AWS has an acceptable-use policy that reads in part:

You may not use, or encourage, promote, facilitate or instruct others to use, the Services or AWS Site for any illegal, harmful, fraudulent, infringing or offensive use, or to transmit, store, display, distribute or otherwise make available content that is illegal, harmful, fraudulent, infringing or offensive.

For AWS to cut off Parler would be like the phone company blocking all calls from a person or organization it deems dangerous. Yet there’s little doubt that Parler violated AWS’s acceptable-use policy. Look for Parler to re-establish itself on an overseas server. Is that what we want?

Meanwhile, Paul Moriarty, a member of the New Jersey State Assembly, wants Comcast to stop carrying Fox News and Newsmax, according to CNN’s “Reliable Sources” newsletter. And CNN’s Oliver Darcy is cheering him on, writing:

Moriarty has a point. We regularly discuss what the Big Tech companies have done to poison the public conversation by providing large platforms to bad-faith actors who lie, mislead, and promote conspiracy theories. But what about TV companies that provide platforms to networks such as Newsmax, One America News — and, yes, Fox News? [Darcy’s boldface]

Again, Comcast and other cable providers are not obligated to carry any particular service. Just recently we received emails from Verizon warning that it might drop WCVB-TV (Channel 5) over a fee dispute. Several years ago, Al Jazeera America was forced to throw in the towel following its unsuccessful efforts to get widespread distribution on cable.

But the power of giant telecom companies to decide what channels will be carried and what will not is immense, and something we ought to be concerned about.

I have no solutions. But I think it’s worth pointing out that AWS’s action against Parler is considerably more ominous than Google’s and Apple’s, and that for elected officials to call on Comcast to drop certain channels is more ominous still.

We have some thinking to do as a society.

Please consider becoming a paid member of Media Nation for just $5 a month. You’ll receive a weekly newsletter with exclusive content. Click here for details.

We can leverage Section 230 to limit algorithmically driven disinformation

Mark Zuckerberg. Photo (cc) 2012 by JD Lasica.

Josh Bernoff responds.

How can we limit the damage that social media — and especially Facebook — are doing to democracy? We all know what the problem is. The platforms make money by keeping you logged on and engaged. And they keep you engaged by feeding you content that their algorithms have determined makes you angry and upset. How do we break that chain?

Josh Bernoff, writing in The Boston Globe, offers an idea similar to one I suggested a few months ago: leverage Section 230 of the Telecommunications Act of 1996, which holds digital publishers harmless for any content posted by third-party users. Under Section 230, publishers can’t be sued if a commenter libels someone, which amounts to a huge benefit not available in other contexts. For instance, a newspaper publisher is liable for every piece of content that it runs, from news articles to ads and letters to the editor — but not for comments posted on the newspaper’s website.

Bernoff suggests what strikes me as a rather convoluted system that would require Facebook (that is, if Mark Zuckerberg wants to continue benefiting from Section 230) to run ads calling attention to ideologically diverse content. Using the same algorithms that got us into trouble in the first place, Facebook would serve up conservative content to liberal users and liberal content to conservative users.

There are, I think, some problems with Bernoff’s proposal, starting with this: He writes that Facebook and the other platforms “would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals.”

But that elides the reality of what has happened to political discourse over the past several decades, accelerated by the Trump era. Liberals and Democrats haven’t changed all that much. Conservatives and Republicans, on the other hand, have become deeply radical, supporting the overturning of a landslide presidential election and espousing dangerous conspiracy theories about COVID-19. Given that, what is a “mainstream conservative news site”?

Bernoff goes so far as to suggest that MSNBC and Fox News are liberal and conservative equivalents. In their prime-time programming, though, the liberal MSNBC — despite its annoyingly doctrinaire, hectoring tone — remains tethered to reality, whereas Fox’s right-wing prime-time hosts are moving ever closer to QAnon territory. The latest is Tucker Carlson’s anti-vax outburst. Who knew that he would think killing his viewers was a good business strategy?

Moving away from the fish-in-a-barrel examples of MSNBC and Fox, what about The New York Times and The Wall Street Journal? Well, the Times’ editorial pages are liberal and the Journal’s are conservative. But if we’re talking about news coverage, they’re really not all that different. So that doesn’t work, either.

I’m not sure that my alternative, which I wrote about for GBH News back in June, is workable, but it does have the advantage of being simple: eliminate Section 230 protections for any platform that uses algorithms to boost engagement. Facebook would have to comply; if it didn’t, it would be sued into oblivion in a matter of weeks or months. As I wrote at the time:

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Unlike Bernoff’s proposal, mine wouldn’t attempt to regulate speech by identifying the news sites that are worthy of putting in front of users so that they’ll be exposed to views they disagree with. I would let it rip as long as artificial intelligence isn’t being used to boost the most harmful content.

Needless to say, Zuckerberg and his fellow Big Tech executives can be expected to fight like crazed weasels in order to keep using algorithms, which are incredibly valuable to their bottom line. Just this week The New York Times reported that Facebook temporarily tweaked its algorithms to emphasize quality news in the runup to the election and its aftermath — but it has now quietly reverted to boosting divisive slime, because that’s what keeps the ad money rolling in.

Donald Trump has been crusading against 230 during the final days of his presidency, even though he doesn’t seem to understand that he would be permanently banned from Twitter and every other platform — even Parler — if they had to worry about being held legally responsible for what he posts.

Still, that’s no reason not to do something about Section 230, which was approved in the earliest days of the commercial web and has warped digital discourse in ways we couldn’t have imagined back then. Hate speech and disinformation driven by algorithms have become the bane of our time. Why not modify 230 in order to do something about it?

Comments are open. Please include your full name, first and last, and speak with a civil tongue.

How Google and Facebook destroyed the value of digital advertising

To what extent have Google and Facebook destroyed the digital ad model for news organizations? I came across a telling data point the other day from Josh Marshall, the editor and founder of Talking Points Memo, a liberal political site that’s one of the oldest outposts on the web. In an email to subscribers explaining why he’s raising rates, Marshall wrote:

The high watermark of advertising revenue for TPM was in 2014. That year we had a little over $2.5 million in ad revenue and $165,000 in membership revenue. In 2020, we’re on pace for $538,000 in ad revenue and $2.1 million in membership revenue.

What Marshall describes is a successful business venture that has boosted reader revenue by a factor of 13 over the past six years — but that at the same time has seen its ad income plummet to about a fifth of what it was.

Google’s auction system has destroyed the value of digital ads. Meanwhile, more than 90% of all new spending on digital advertising goes to Google and Facebook, which works out nicely for them because of sheer volume and the fact that most of their operations are automated.

It’s great for TPM that it’s been able to induce so many readers to pay. But with more and more publishers asking for subscription money (including all those individual journalists who’ve decamped for Substack), the ceiling is going to be hit fairly soon.

We need a way to bring digital advertising back for news publishers.

Correction: Post updated to fix several math errors.

Everything you know is wrong (Facebook edition)

Like many observers, I have often cited Facebook, along with Fox News, as one of the most dangerous forces promoting disinformation and polarization. Its algorithms feed you what keeps you engaged, and what keeps you engaged is what makes you angry and upset.

But what if most Facebook users don’t even see news? Nieman Lab editor Laura Hazard Owen conducted a real-world experiment. And what she found ought to give us pause:

Even using a very generous definition of news (“Guy rollerblades with 75-pound dog on his back”), the majority of people in our survey (54%) saw no news within the first 10 posts in their feeds at all.

Moreover, the top three most frequently seen news sources weren’t the likes of Newsmax, Breitbart and Infowars — they were CNN, The New York Times and NBC News, which epitomize the mainstream.

I asked Owen to clarify whether her definition of news popping up in people’s feeds was restricted to content that came directly from news organizations or whether it included news stories shared by friends. “It was ALL news,” she replied, “whether shared by a news organization or a friend.”

Is it possible that we all misunderstand the effect that Facebook is having (or not having) on our democracy?

In 2016, moving comments to Facebook seemed like a great idea. Now it’s a problem.

A little more than four years after turning off comments and directing everyone to Facebook, I’ve turned them back on. The move comes at a time when we’re all questioning our dependence on Facebook given the social-media giant’s role in spreading disinformation and subverting democracy across the world.

I will continue to post links on Facebook, and readers will be able to comment either there or here. But if you’d like to reduce your own use of the platform, I urge you to sign up for email delivery of Media Nation (click on “Follow This Blog” in the right-hand rail) and post your comments here. Your real name, first and last, is required.

We shouldn’t let Trump’s Twitter tantrum stop us from taking a new look at online speech protections

Photo (cc) 2019 by Trending Topics 2019

Previously published at WGBHNews.org.

It’s probably not a good idea for us to talk about messing around with free speech on the internet at a moment when the reckless authoritarian in the White House is threatening to dismantle safeguards that have been in place for nearly a quarter of a century.

On the other hand, maybe there’s no time like right now. President Donald Trump is not wrong in claiming there are problems with Section 230 of the Telecommunications Act of 1996. Of course, he’s wrong about the particulars — that is, he’s wrong about its purpose, and he’s wrong about what would happen if it were repealed. But that shouldn’t stop us from thinking about the harmful effects of 230 and what we might do to lessen them.

Simply put, Section 230 says that online publishers can’t be held legally responsible for most third-party content. In just the past week Trump took to Twitter and falsely claimed that MSNBC host Joe Scarborough had murdered a woman who worked in his office and that violent protesters should be shot in the street. At least in theory, Trump, but not Twitter, could be held liable for both of those tweets — the first for libeling Scarborough, the second for inciting violence.

Ironically, without 230, Twitter no doubt would have taken Trump’s tweets down immediately rather than merely slapping warning labels on them, the action that provoked his childish rage. It’s only because of 230 that Trump is able to lie freely to his 24 million (not 80 million, as is often reported) followers without Twitter executives having to worry about getting sued.

As someone who’s been around since the earliest days of online culture, I have some insight into why we needed Section 230, and what’s gone wrong in the intervening years.

Back in the 1990s, the challenge that 230 was meant to address had as much to do with news websites as it did with early online services such as Prodigy and AOL. Print publications such as newspapers are legally responsible for everything they publish, including letters to the editor and advertisements. After all, the landmark 1964 libel case of New York Times v. Sullivan involved an ad, not the paper’s journalism.

But, in the digital world, holding publications strictly liable for their content proved to be impractical. Even in the era of dial-up modems, online comments poured in too rapidly to be monitored. Publishers worried that if they deleted some of the worst comments on their sites, that would mean they would be seen as exercising editorial control and were thus legally responsible for all comments.

The far-from-perfect solution: take a hands-off approach and not delete anything, not even the worst of the worst. At least to some extent, Section 230 solved that dilemma. Not only did it immunize publishers for third-party content, but it also contained what is called a “Good Samaritan” provision — publishers were now free to remove some bad content without making themselves liable for other, equally bad content that they might have missed.

Section 230 created an uneasy balance. Users could comment freely, which seemed to many of us in those more optimistic times like a step forward in allowing news consumers to be part of the conversation. (That’s where Jay Rosen’s phrase “the people formerly known as the audience” comes from.) But early hopes faded to pessimism and cynicism once we saw how terrible most of those comments were. So we ignored them.

That balance was disrupted by the rise of the platforms, especially Facebook and Twitter. And that’s because they had an incentive to keep users glued to their sites for as long as possible. By using computer algorithms to feed users more of what keeps them engaged, the platforms are able to show more advertising to them. And the way you keep them engaged is by showing them content that makes them angry and agitated, regardless of its truthfulness. The technologist Jaron Lanier, in his 2018 book “Ten Arguments for Deleting Your Social Media Accounts Right Now,” calls this “continuous behavior modification on a titanic scale.”

Which brings us to the tricky question of whether government should do something to remove these perverse incentives.

Earlier this year, Heidi Legg, then at Harvard’s Shorenstein Center on Media, Politics and Public Policy, published an op-ed in The Boston Globe arguing that Section 230 should be modified so that the platforms are held to the same legal standards as other publishers. “We should not allow the continued free-wheeling and profiteering of this attention economy to erode democracy through hyper-polarization,” she wrote.

Legg told me she hoped her piece would spark a conversation about what Section 230 reform might look like. “I do not have a solution,” she said in a text exchange on (what else?) Twitter, “but I have ideas and I am urging the nation and Congress to get ahead of this.”

Well, I’ve been thinking about it, too. And one possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers. Dorsey would quickly find that his tentative half-steps are insufficient — and Zuckerberg would have to abandon his smug refusal to do anything about Trump’s vile comments.

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Let me concede that I don’t know how practical my idea would be. Like Legg, I offer it out of a sense that we need to have a conversation about the harm that social media are doing to our democracy. I’m a staunch believer in the First Amendment, so I think it’s vital to address that harm in a way that doesn’t violate anyone’s free-speech rights. Ending special regulatory favors for certain types of toxic corporate behavior seems like one way of doing that with a relatively light touch.

And if that meant Trump could no longer use Twitter as a megaphone for hate speech, wild conspiracy theories and outright disinformation, well, so much the better.

Talk about this post on Facebook.

Conspiracy Nation: Why Trump Jr.’s smear of Biden was even worse than it seemed

WGBH News illustration by Emily Judem.

Previously published at WGBHNews.org.

Over the weekend, Donald Trump Jr. posted a shockingly offensive message on Instagram claiming that former Vice President Joe Biden is a child molester. Next to an image of Biden appeared the words “See you later, alligator!” Below was a photo of an alligator with the retort “In a while, pedophile!” (No, I won’t link to it.)

Outrage came swiftly. “The dangerous and untrue charge of pedophilia is the new marker — so far — of how low the Trump campaign will go to smear Biden,” wrote Chris Cillizza at CNN.com. Jonathan Martin of The New York Times called it “an incendiary and baseless charge.” In The Guardian, Martin Pengelly said “most observers” (was that qualifier really necessary?) regarded it as “beyond the pale even in America’s toxic political climate.”

What few analysts noticed, though, was that Trump Jr.’s vile accusation, which he later claimed was a joke, lined up perfectly with a conspiracy theory known as QAnon. Bubbling out of the darkest corners of the internet, the theory claims, in broad strokes, that President Donald Trump is secretly working to destroy a plot led by the Clintons — but of course! — and other Democrats who engage in child abuse and cannibalism. And in order to defeat these malign forces we must heed the cryptic messages of Q, an insider who is helping Trump rout the forces of evil and save the world.

QAnon, in effect, is the ur-theory connecting everything from Pizzagate to paranoia about the “deep state” to regarding impeachment as a “hoax,” as Trump has put it. The Trumps have dabbled in QAnon from time to time as a way of signaling their most wild-eyed supporters that they’re on board. But there’s no exaggerating how dangerous all of this is.

We are living, unfortunately, in a golden age of conspiracy theories. Some, like Alex Jones of Infowars infamy, claim that mass shootings are actually carried out by “crisis actors” in order to give the government a rationale to seize everyone’s guns. Then there’s the anti-vaccine movement, currently standing in the way of any rational response to the COVID-19 epidemic. Indeed, a widely watched video called “Plandemic” falsely claims, among other things, that face masks make you sick and that people who’ve had flu shots are more likely to get COVID.

There’s nothing new about conspiracy theories, just as there’s nothing new about so-called fake news. Never mind the assassination of John F. Kennedy, the subject of a new, weirdly compelling 17-minute song-poem by Bob Dylan called “Murder Most Foul.” A century earlier, there were those who blamed (take your pick) Confederate President Jefferson Davis or Pope Pius IX for the assassination of Abraham Lincoln.

But conspiracy theorizing in the 21st century is supercharged by the internet, with a significant assist from Trump. Trump has indulged not just QAnon but also Alex Jones, the anti-vaxxers and all manner of foolishness about the deep state — the belief that the U.S. government is run by a shadowy cabal of bureaucrats and military officials who are seeking to undermine the president. At its heart, that’s what Trump seems to be referring to when he tweets about “Obamagate!,” a scandalous crime lacking both a scandal and a crime. And let’s not forget that Trump began his political career with a conspiracy theory that he made his own: falsely claiming that Barack Obama was not born in the United States and was thus ineligible to serve as president.

In recent days, the media have converged in an attempt to explain and debunk these various conspiracy theories. Last week, public radio’s “On the Media” devoted a segment to QAnon and “Plandemic.” The investigative website ProPublica has published a guide on how to reason with believers. The American Press Institute has offered tips for reporters. The Conversation, which brings academic research to a wider public, has posted an article headlined “Coronavirus, ‘Plandemic’ and the seven traits of conspiratorial thinking.”

By far the most ambitious journalistic effort is a special project published by The Atlantic called “Shadowland.” And the heart of it is a nearly 10,000-word article by the executive editor, Adrienne LaFrance, profiling the QAnon phenomenon and how it has infected thousands of ordinary people.

“QAnon is emblematic of modern America’s susceptibility to conspiracy theories, and its enthusiasm for them,” LaFrance writes. “But it is also already much more than a loose collection of conspiracy-minded chat-room inhabitants. It is a movement united in mass rejection of reason, objectivity, and other Enlightenment values. And we are likely closer to the beginning of its story than the end.”

What makes QAnon, “Plandemic” and other conspiracies so powerful is that believers have an explanation for every countervailing truth. Experts and others in a position of authority are automatically cast as part of the conspiracy, whether you’re talking about Dr. Anthony Fauci, Hillary Clinton or Joe Biden.

“For QAnon, every contradiction can be explained away; no form of argument can prevail against it,” LaFrance writes. This type of belief system is sometimes referred to as “epistemic closure” — the idea is that believers live in a self-contained bubble that explains everything and that can’t be penetrated by contrary facts.

What can the media do in the face of such intense beliefs? In all likelihood, the answer is: not much. There is a school of thought among some press critics that if only news organizations would push harder, prevaricate less and devote themselves more fully to truth-telling rather than to reporting “both sides,” then a new dawn of rationality would surely follow. But that fundamentally misunderstands the problem, because the mainstream, reality-based media are regarded as part of the conspiracy. Journalism is grounded in the Enlightenment values that LaFrance invokes — the expectation that false beliefs will give way when confronted by facts and truth. Unfortunately, that’s not the world we live in today.

It should be noted that after Donald Trump Jr. posted his hideous attack on Joe Biden, Instagram neither deleted his post nor took down his account. Instagram, as you probably know, is owned by Facebook and is thus firmly ensconced within the Zuckerborg, which wants us all to believe that it is so very much concerned about truth and hate speech.

Thus does such garbage become normalized. You see a reference to Biden as a pedophile, and it seems off the wall. But then you remember he’s apologized for being handsy with women. And wasn’t he accused of sexual assault? And now look — there’s something on the internet about Democrats and pedophilia. Gosh, how are we supposed to know what to think?

Welcome to our nightmare.

Why Facebook’s new oversight board is destined to be an exercise in futility

Former Guardian editor Alan Rusbridger is among the board members. Photo (cc) 2012 by Internaz.

Previously published at WGBHNews.org.

To illustrate how useless the newly unveiled Facebook oversight board will be, consider the top 10 fake-news stories shared by its users in 2019.

As reported by Business Insider, the list included such classics as “NYC Coroner who Declared Epstein death ‘Suicide’ worked for the Clinton foundation making 500k a year up until 2015,” “Omar [as in U.S. Rep. Ilhan Omar] Holding Secret Fundraisers with Islamic Groups Tied to Terror,” and “Pelosi Diverts $2.4 Billion From Social Security To Cover Impeachment Costs.”

None of these stories was even remotely true. Yet none of them would have been removed by the oversight board. You see, as Mathew Ingram pointed out in his Columbia Journalism Review newsletter, the 20-member board is charged only with deciding whether content that has already been taken down should be restored.

Now, it’s fair to acknowledge that Facebook CEO Mark Zuckerberg has an impossible task in bringing his Frankenstein’s monster under control. But that doesn’t mean any actual good is going to come of this exercise.

The board, which will eventually be expanded to 40, includes a number of distinguished people. Among them: Alan Rusbridger, the respected former editor of The Guardian, as well as international dignitaries and a Nobel Prize laureate. It has independent funding, Zuckerberg has agreed that its decisions will be binding, and eventually its purview may expand to removing false content.

But, fundamentally, this can’t work because Facebook was not designed to be controllable. In The New York Times, technology columnist Kara Swisher explained the problem succinctly. “Facebook’s problems are structural in nature,” she wrote. “It is evolving precisely as it was designed to, much the same way the coronavirus is doing what it is meant to do. And that becomes a problem when some of what flows through the Facebook system — let’s be fair in saying that much of it is entirely benign and anodyne — leads to dangerous and even deadly outcomes.”

It’s not really about the content. Stop me if you’ve heard this before, but what makes Facebook a threat to democracy is the way it serves up that content. Its algorithms — which are not well understood by anyone, even at Facebook — are aimed at keeping you engaged so that you stay on the site. And the most effective way to drive engagement is to show users content that makes them angry and upset.

Are you a hardcore supporter of President Donald Trump? If so, you are likely to see memes suggesting that COVID-19 is some sort of Democratic plot to defeat him for re-election — as was the case with a recent semi-fake-news story reporting that hospitals are being paid to attribute illnesses and deaths to the coronavirus even when they’re not. Or links to the right-wing website PJ Media aimed at stirring up outrage over “weed, opioids, booze and ciggies” being given to homeless people in San Francisco who’ve been quarantined. If you are a Trump opponent, you can count on Occupy Democrats to pop up in your feed and keep you in a constant state of agitation.

Now, keep in mind that all of this — even the fake stuff — is free speech that’s protected by the First Amendment. And all of this, plus much worse, is readily available on the open web. What makes Facebook so pernicious is that it amplifies the most divisive speech so that you’ll stay longer and be exposed to more advertising.

What is the oversight board going to do about this? Nothing.

“The new Facebook review board will have no influence over anything that really matters in the world,” wrote longtime Facebook critic Siva Vaidhyanathan at Wired, adding: “The board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies…. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook.”

In fact, Facebook’s algorithm has already been trained to ban or post warning labels on some speech. In practice, though, such mechanized censorship is aggravatingly inept. Recently the seal of disapproval was slapped on an ad called “Mourning in America,” by the Lincoln Project, a group of “Never Trump” Republicans, because the fact-checking organization PolitiFact had called it partly false. The Lincoln Project, though, claimed that PolitiFact was wrong.

I recently received a warning for posting a photo of Benito Mussolini as a humorous response to a picture of Trump. No doubt the algorithm was too dumb to understand that I was making a political comment and was not expressing my admiration for Il Duce. Others have told me they’ve gotten warnings for referring to trolls as trolls, or for calling unmasked protesters against COVID-19 restrictions “dumber than dirt.”

So what is Facebook good for? I find it useful for staying in touch with family and friends, for promoting my work and for discussing legitimate news stories. Beyond that, much of it is a cesspool of hate speech, fake news and propaganda.

If it were up to me, I’d ban the algorithm. Let people post what they want, but don’t let Facebook robotically weaponize divisive content in order to drive up its profit margins. Zuckerberg himself has said that he expects the government will eventually impose some regulations. Well, this is one way to regulate it without actually making judgments about what speech will be allowed and what speech will be banned.

Meanwhile, I’ll watch with amusement as the oversight board attempts to wrestle this beast into submission. As Kara Swisher said, it “has all the hallmarks of the United Nations, except potentially much less effective.”

The real goal, I suspect, is to provide cover for Zuckerberg and make it appear that Facebook is doing something. In that respect, this initiative may seem harmless — unless it lulls us into complacency about more comprehensive steps that could be taken to reduce the harm that is being inflicted on all of us.

Talk about this post at Facebook.

How Google destroyed the value of digital advertising

New York Times media columnist Ben Smith reports on efforts to compel Google and Facebook to turn over some of their advertising revenues to the news organizations whose content they repurpose without compensation.

The debate over what platform companies owe the news business goes back many years and has come to resemble a theological dispute in its passions and the certainty expressed by those on either side. Indeed, longtime digital-news pundit Jeff Jarvis immediately weighed in with a smoking hot Twitter thread responding to Smith.

I’m not going to resolve that debate here. Rather, I want to offer some context. First, something like 90% of all new spending on digital advertising goes to Google and Facebook. Second, Google’s auction system for brokering ads destroyed any hopes news publishers had of making actual money from online advertising. How bad is it? Here’s an excerpt from my 2018 book, “The Return of the Moguls”:

Nicco Mele, the former senior vice president and deputy publisher of the Los Angeles Times, who’s now the director of the Shorenstein Center on Media, Politics and Public Policy at Harvard’s Kennedy School [he has since moved on], explained at a Shorenstein seminar why a digital advertising strategy based on clicks simply doesn’t work for news organizations that are built around original (which is to say expensive) journalism. “Google has fundamentally shaped the future of advertising by charging on a performance basis — cost per click,” he said. “And that has been a giant, unimaginable anchor weight dragging down all advertising pricing.”

For example, Mele said that a full-page weekday ad in the LA Times, which reaches about 500,000 people, costs about $50,000. To reach the same 500,000 people on LATimes.com costs about $7,000. And if that ad appeared on LATimes.com via Google, it might bring in no more than $20. “Models built on scale make zero sense to me,” Mele said, “because I just don’t see any future there.” Yet the pursuit of scale has led even our best newspapers to supplement their high-quality journalism with a chase after clicks for the sake of clicks.

From $50,000 to $7,000 to $20. This is why the advertising model for digital news is broken, and it’s why newspapers have gone all-in on paid subscriptions.
