Facebook could have made itself less toxic. It chose profit and Trump instead.

Locked down following the Jan. 6 insurrection. Photo (cc) 2021 by Geoff Livingston.

Previously published at GBH News.

Working for Facebook can be pretty lucrative. According to PayScale, the average salary of a Facebook employee is $123,000, with senior software engineers earning more than $200,000. Even better, the job is pandemic-proof. Traffic soared during the early months of COVID (though advertising was down), and the service attracted nearly 2.8 billion active monthly users worldwide during the fourth quarter of 2020.

So employees are understandably reluctant to demand change from their maximum leader, the now-36-year-old Mark Zuckerberg, the man-child who has led them to their promised land.

For instance, last fall Facebook tweaked its algorithm so that users were more likely to see reliable news rather than hyperpartisan propaganda in advance of the election — a very small step in the right direction. Afterwards, some employees thought Facebook ought to do the civic-minded thing and make the change permanent. Management’s answer: Well, no, the change cost us money, so it’s time to resume business as usual. And thus it was.

Joaquin Quiñonero Candela is what you might call an extreme example of this go-along mentality. Quiñonero is the principal subject of a remarkable 6,700-word story in the current issue of Technology Review, published by MIT. As depicted by reporter Karen Hao, Quiñonero is extreme not in the sense that he’s a true believer or a bad actor or anything like that. Quite the contrary; he seems like a pretty nice guy, and the story is festooned with pictures of him outside his home in the San Francisco area, where he lives with his wife and three children, engaged in homey activities like feeding his chickens and, well, checking his phone. (It’s Zuck!)

What’s extreme, rather, is the amount of damage Quiñonero can do. He is the director of artificial intelligence for Facebook, a leading AI scientist who is universally respected for his brilliance, and the keeper of Facebook’s algorithm. He is also the head of an internal initiative called Responsible AI.

Now, you might think that the job of Responsible AI would be to find ways to make Facebook’s algorithm less harmful without chipping away too much at Zuckerberg’s net worth, estimated recently at $97 billion. But no. The way Hao tells it, Quiñonero’s shop was diverted almost from the beginning from its mission of tamping down extremist and false information so that it could take on a more politically important task: making sure that right-wing content kept popping up in users’ news feeds in order to placate Donald Trump, who falsely claimed that Facebook was biased against conservatives.

How pernicious was this? According to Hao, Facebook developed a model called the “Fairness Flow,” among whose principles was that liberal and conservative content should not be treated equally if liberal content was more factual and conservative content promoted falsehoods — which is in fact the case much of the time. But Facebook executives were having none of it, deciding for purely political reasons that the algorithm should result in equal outcomes for liberal and conservative content regardless of truthfulness. Hao writes:

“They took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. ‘There’s no point, then,’ the researcher says. A model modified in that way ‘would have literally no impact on the actual problem’ of misinformation.”

Hao ranges across the hellscape of Facebook’s wreckage, from the Cambridge Analytica scandal to amplifying a genocidal campaign against Muslims in Myanmar to boosting content that could worsen depression and thus lead to suicide. What she shows over and over again is not that Facebook is oblivious to these problems; in fact, it recently banned a number of QAnon, anti-vaccine and Holocaust-denial groups. But, in every case, it is slow to act, placing growth, engagement and, thus, revenue ahead of social responsibility.

It is fair to ask what Facebook’s role is in our current civic crisis, with a sizable minority of the public in thrall to Trump, disdaining vaccines and obsessing over trivia like Dr. Seuss and so-called cancel culture. Isn’t Fox News more to blame than Facebook? Aren’t the falsehoods spouted every night by Tucker Carlson, Sean Hannity and Laura Ingraham ultimately more dangerous than a social network that merely reflects what we’re already interested in?

The obvious answer, I think, is that there’s a synergistic effect between the two. The propaganda comes from Fox and its ilk and moves to Facebook, where it gets distributed and amplified. That, in turn, creates more demand for outrageous content from Fox and, occasionally, fuels the growth of even more extreme outlets like Newsmax and OAN. Dangerous as the Fox effect may be, Facebook makes it worse.

Hao’s final interview with Quiñonero came after the deadly insurrection of Jan. 6. I’m not going to spoil it for you, because it’s a really fine piece of writing, and quoting a few bits wouldn’t do it justice. But Quiñonero comes across as someone who knows, deep in his heart, that he could have played a role in preventing what happened but chose not to act.

It’s devastating — and something for him to think about as he ponders life in his nice home, with his family and his chickens, which are now coming home to roost.

There’s no reason to think that a Nextdoor-like service would have saved local news

Every so often, media observers berate the newspaper business for letting upstarts encroach on their turf rather than innovating themselves.

Weirdly enough, I’ve heard a number of people over the years assert that newspapers should have unveiled a free classified-ad service in order to forestall the rise of Craigslist — as if giving away classified ads was going to help pay for journalism. As of 2019, Craigslist employed a reported 50 full-time people worldwide. The Boston Globe and its related media properties, Stat News and Boston.com, employ about 300 full-time journalists. As they say, do the math.

Sometimes you hear the same thing about Facebook, which is different enough from journalism that you might as well say that newspapers should have moved into the food-services industry. Don Graham’s legendary decision to let Mark Zuckerberg walk away from an agreed-upon investment in Facebook changed the course of newspaper history — the Graham family could have kept The Washington Post rather than having to sell to Jeff Bezos. As a bonus, someone with a conscience would have sat on Facebook’s board, although it’s hard to know whether that would have mattered. But journalism and social media are fundamentally different businesses, so it’s not as though there was any sort of natural fit.

More recently, I’ve heard the same thing about Nextdoor, a community-oriented social network that has emerged as the news source of record for reporting lost cats and suspicious-looking people in your neighborhood. I like our Nextdoor and visit it regularly. But when it comes to discussion of local news, I find it less useful than a few of our Facebook groups. Still, you hear critics complain that newspapers should have been there first.


Well, maybe they should have. But how good a business is it, really? Like Craigslist, social media thrives by having as few employees as possible. Journalism is labor-intensive. Over the years I’ve watched the original vision for Wicked Local — unveiled, if I’m remembering correctly, by the Old Colony Memorial in Plymouth — shrink from a genuinely interesting collection of local blogs and other community content into a collection of crappy websites for GateHouse Media’s and now Gannett’s newspapers.

The original Boston.com was a vibrant experiment as well, with community blogs and all sorts of interesting content that you wouldn’t find in the Globe. But after the Globe moved to its own paywalled website, Boston.com’s appeal was pretty much shot, although it continues to limp along. For someone who wants a free regional news source, it’s actually not that bad. But the message, as with Wicked Local, is that maybe community content just doesn’t produce enough revenue to support the journalists we need to produce actual news coverage.

Recently Will Oremus of a Medium-backed website called OneZero wrote a lengthy piece about the rise of Nextdoor, which has done especially well in the pandemic. Oremus’ take was admirably balanced — though Nextdoor can be a valuable resource, especially in communities lacking real news coverage, he wrote, it is also opaque in its operations and tilted toward the interests of its presumably affluent users. According to Oremus, Nextdoor sites are available in about 268,000 neighborhoods across the world, and its owners have considered taking the company public.

There’s no question that Nextdoor is taking on the role once played by local newspapers. But is that because people are moving to Nextdoor or because local newspapers are withering away? As Oremus writes, quoting Emily Bell:

In some ways, Nextdoor is filling a gap left by a dearth of local news outlets. “In discussions of how people are finding out about local news, Nextdoor and Facebook Groups are the two online platforms that crop up most in our research,” said Columbia’s Emily Bell. Bell is helping to lead a project examining the crisis in local news and the landscape that’s emerging in its wake.

“When we were scoping out, ‘What does a news desert look like?’ it was clear that there’s often a whole group of hyperlocal platforms that we don’t traditionally consider to be news,” Bell said. They included Nextdoor, Facebook Groups, local Reddit subs, and crime-focused apps such as Citizen and Amazon Ring’s Neighbors. In the absence of a traditional news outlet, “people do share news, they do comment on news,” she said. “But they’re doing it on a platform like Nextdoor that really is not designed for news — maybe in the same way that Facebook is not designed for news.”

Look, I’m glad that Nextdoor is around. I’m glad that Patch is around, and in fact our local Patch occasionally publishes some original reporting. But there is no substitute for actual journalism — the hard work of sitting through local meetings, keeping an eye on the police and telling the story of the community. As inadequate as our local Gannett weekly is, there’s more local news in it than in any other source we have.

If local newspapers had developed Nextdoor and offered it as part of their journalism, would it have made a difference to the bottom line? It seems unlikely — although it no doubt would have brought in somewhat more revenue than giving away free classifieds.

Nextdoor, like Facebook, makes money by offering low-cost ads and employing as few people as possible. It may add up to a lot of cash in the aggregate. At the local level, though, I suspect it adds up to very little — and, if pursued by newspapers, would distract from the hard work of coming up with genuinely sustainable business models.

We can leverage Section 230 to limit algorithmically driven disinformation

Mark Zuckerberg. Photo (cc) 2012 by JD Lasica.

Josh Bernoff responds.

How can we limit the damage that social media — and especially Facebook — are doing to democracy? We all know what the problem is. The platforms make money by keeping you logged on and engaged. And they keep you engaged by feeding you content that their algorithms have determined makes you angry and upset. How do we break that chain?
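The chain described above — engagement as the sole objective, with outrage as its most reliable fuel — can be made concrete with a toy sketch. This is purely illustrative and in no way Facebook's actual system; the class, field names, and scores below are invented for the example:

```python
# Toy illustration of an engagement-optimized feed ranker (hypothetical,
# not any real platform's code). The ranker sorts posts by a predicted
# engagement score alone; note that truthfulness is stored but never
# consulted, so content that reliably provokes reactions floats to the top.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_reactions: float  # model's guess at clicks/comments/shares
    is_accurate: bool           # truthfulness -- unused by the ranker

def rank_feed(posts):
    # Engagement is the only sort key; accuracy never enters the objective.
    return sorted(posts, key=lambda p: p.predicted_reactions, reverse=True)

feed = rank_feed([
    Post("Local budget meeting recap", 1.2, True),
    Post("Outrageous conspiracy claim", 9.7, False),
    Post("Calm policy explainer", 0.8, True),
])
print(feed[0].text)  # the conspiracy post leads the feed
```

The point of the sketch is the objective function: as long as predicted engagement is the only thing being maximized, accuracy is structurally irrelevant to what users see first.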

Josh Bernoff, writing in The Boston Globe, offers an idea similar to one I suggested a few months ago: leverage Section 230 of the Telecommunications Act of 1996, which holds digital publishers harmless for any content posted by third-party users. Under Section 230, publishers can’t be sued if a commenter libels someone, which amounts to a huge benefit not available in other contexts. For instance, a newspaper publisher is liable for every piece of content that it runs, from news articles to ads and letters to the editor — but not for comments posted on the newspaper’s website.

Bernoff suggests what strikes me as a rather convoluted system that would require Facebook (that is, if Mark Zuckerberg wants to continue benefiting from Section 230) to run ads calling attention to ideologically diverse content. Using the same algorithms that got us into trouble in the first place, Facebook would serve up conservative content to liberal users and liberal content to conservative users.

There are, I think, some problems with Bernoff’s proposal, starting with this: He writes that Facebook and the other platforms “would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals.”

But that elides the reality of what has happened to political discourse over the past several decades, accelerated by the Trump era. Liberals and Democrats haven’t changed all that much. Conservatives and Republicans, on the other hand, have become deeply radical, supporting the overturning of a landslide presidential election and espousing dangerous conspiracy theories about COVID-19. Given that, what is a “mainstream conservative news site”?

Bernoff goes so far as to suggest that MSNBC and Fox News are liberal and conservative equivalents. In their prime-time programming, though, the liberal MSNBC — despite its annoyingly doctrinaire, hectoring tone — remains tethered to reality, whereas Fox’s right-wing prime-time hosts are moving ever closer to QAnon territory. The latest is Tucker Carlson’s anti-vax outburst. Who knew that he would think killing his viewers was a good business strategy?

Moving away from the fish-in-a-barrel examples of MSNBC and Fox, what about The New York Times and The Wall Street Journal? Well, the Times’ editorial pages are liberal and the Journal’s are conservative. But if we’re talking about news coverage, they’re really not all that different. So that doesn’t work, either.

I’m not sure that my alternative, which I wrote about for GBH News back in June, is workable, but it does have the advantage of being simple: eliminate Section 230 protections for any platform that uses algorithms to boost engagement. Facebook would have to comply; if it didn’t, it would be sued into oblivion in a matter of weeks or months. As I wrote at the time:

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Unlike Bernoff’s proposal, mine wouldn’t attempt to regulate speech by identifying the news sites that are worthy of putting in front of users so that they’ll be exposed to views they disagree with. I would let it rip as long as artificial intelligence isn’t being used to boost the most harmful content.

Needless to say, Zuckerberg and his fellow Big Tech executives can be expected to fight like crazed weasels in order to keep using algorithms, which are incredibly valuable to their bottom line. Just this week The New York Times reported that Facebook temporarily tweaked its algorithms to emphasize quality news in the runup to the election and its aftermath — but it has now quietly reverted to boosting divisive slime, because that’s what keeps the ad money rolling in.

Donald Trump has been crusading against 230 during the final days of his presidency, even though he doesn’t seem to understand that he would be permanently banned from Twitter and every other platform — even Parler — if they had to worry about being held legally responsible for what he posts.

Still, that’s no reason not to do something about Section 230, which was approved in the earliest days of the commercial web and has warped digital discourse in ways we couldn’t have imagined back then. Hate speech and disinformation driven by algorithms have become the bane of our time. Why not modify 230 in order to do something about it?


We shouldn’t let Trump’s Twitter tantrum stop us from taking a new look at online speech protections

Photo (cc) 2019 by Trending Topics 2019.

Previously published at WGBHNews.org.

It’s probably not a good idea for us to talk about messing around with free speech on the internet at a moment when the reckless authoritarian in the White House is threatening to dismantle safeguards that have been in place for nearly a quarter of a century.

On the other hand, maybe there’s no time like right now. President Donald Trump is not wrong in claiming there are problems with Section 230 of the Telecommunications Act of 1996. Of course, he’s wrong about the particulars — that is, he’s wrong about its purpose, and he’s wrong about what would happen if it were repealed. But that shouldn’t stop us from thinking about the harmful effects of 230 and what we might do to lessen them.

Simply put, Section 230 says that online publishers can’t be held legally responsible for most third-party content. In just the past week Trump took to Twitter and falsely claimed that MSNBC host Joe Scarborough had murdered a woman who worked in his office and that violent protesters should be shot in the street. At least in theory, Trump, but not Twitter, could be held liable for both of those tweets — the first for libeling Scarborough, the second for inciting violence.

Ironically, without 230, Twitter no doubt would have taken Trump’s tweets down immediately rather than merely slapping warning labels on them, the action that provoked his childish rage. It’s only because of 230 that Trump is able to lie freely to his 24 million (not 80 million, as is often reported) followers without Twitter executives having to worry about getting sued.

As someone who’s been around since the earliest days of online culture, I have some insight into why we needed Section 230, and what’s gone wrong in the intervening years.

Back in the 1990s, the challenge that 230 was meant to address had as much to do with news websites as it did with early online services such as Prodigy and AOL. Print publications such as newspapers are legally responsible for everything they publish, including letters to the editor and advertisements. After all, the landmark 1964 libel case of New York Times v. Sullivan involved an ad, not the paper’s journalism.

But, in the digital world, holding publications strictly liable for their content proved to be impractical. Even in the era of dial-up modems, online comments poured in too rapidly to be monitored. Publishers worried that if they deleted some of the worst comments on their sites, that would mean they would be seen as exercising editorial control and were thus legally responsible for all comments.

The far-from-perfect solution: take a hands-off approach and not delete anything, not even the worst of the worst. At least to some extent, Section 230 solved that dilemma. Not only did it immunize publishers for third-party content, but it also contained what is called a “Good Samaritan” provision — publishers were now free to remove some bad content without making themselves liable for other, equally bad content that they might have missed.

Section 230 created an uneasy balance. Users could comment freely, which seemed to many of us in those more optimistic times like a step forward in allowing news consumers to be part of the conversation. (That’s where Jay Rosen’s phrase “the people formerly known as the audience” comes from.) But early hopes faded to pessimism and cynicism once we saw how terrible most of those comments were. So we ignored them.

That balance was disrupted by the rise of the platforms, especially Facebook and Twitter. And that’s because they had an incentive to keep users glued to their sites for as long as possible. By using computer algorithms to feed users more of what keeps them engaged, the platforms are able to show more advertising to them. And the way you keep them engaged is by showing them content that makes them angry and agitated, regardless of its truthfulness. The technologist Jaron Lanier, in his 2018 book “Ten Arguments for Deleting Your Social Media Accounts Right Now,” calls this “continuous behavior modification on a titanic scale.”

Which brings us to the tricky question of whether government should do something to remove these perverse incentives.

Earlier this year, Heidi Legg, then at Harvard’s Shorenstein Center on Media, Politics and Public Policy, published an op-ed in The Boston Globe arguing that Section 230 should be modified so that the platforms are held to the same legal standards as other publishers. “We should not allow the continued free-wheeling and profiteering of this attention economy to erode democracy through hyper-polarization,” she wrote.

Legg told me she hoped her piece would spark a conversation about what Section 230 reform might look like. “I do not have a solution,” she said in a text exchange on (what else?) Twitter, “but I have ideas and I am urging the nation and Congress to get ahead of this.”

Well, I’ve been thinking about it, too. And one possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers. Dorsey would quickly find that his tentative half-steps are insufficient — and Zuckerberg would have to abandon his smug refusal to do anything about Trump’s vile comments.

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Let me concede that I don’t know how practical my idea would be. Like Legg, I offer it out of a sense that we need to have a conversation about the harm that social media are doing to our democracy. I’m a staunch believer in the First Amendment, so I think it’s vital to address that harm in a way that doesn’t violate anyone’s free-speech rights. Ending special regulatory favors for certain types of toxic corporate behavior seems like one way of doing that with a relatively light touch.

And if that meant Trump could no longer use Twitter as a megaphone for hate speech, wild conspiracy theories and outright disinformation, well, so much the better.


Conspiracy Nation: Why Trump Jr.’s smear of Biden was even worse than it seemed

WGBH News illustration by Emily Judem.

Previously published at WGBHNews.org.

Over the weekend, Donald Trump Jr. posted a shockingly offensive message on Instagram claiming that former Vice President Joe Biden is a child molester. Next to an image of Biden appeared the words “See you later, alligator!” Below was a photo of an alligator with the retort “In a while, pedophile!” (No, I won’t link to it.)

Outrage came swiftly. “The dangerous and untrue charge of pedophilia is the new marker — so far — of how low the Trump campaign will go to smear Biden,” wrote Chris Cillizza at CNN.com. Jonathan Martin of The New York Times called it “an incendiary and baseless charge.” In The Guardian, Martin Pengelly said “most observers” (was that qualifier really necessary?) regarded it as “beyond the pale even in America’s toxic political climate.”

What few analysts noticed, though, was that Trump Jr.’s vile accusation, which he later claimed was a joke, lined up perfectly with a conspiracy theory known as QAnon. Bubbling out of the darkest corners of the internet, the theory claims, in broad strokes, that President Donald Trump is secretly working to destroy a plot led by the Clintons — but of course! — and other Democrats who engage in child abuse and cannibalism. And in order to defeat these malign forces we must heed the cryptic messages of Q, an insider who is helping Trump rout the forces of evil and save the world.

QAnon, in effect, is the ur-theory connecting everything from Pizzagate to paranoia about the “deep state” to regarding impeachment as a “hoax,” as Trump has put it. The Trumps have dabbled in QAnon from time to time as a way of signaling their most wild-eyed supporters that they’re on board. But there’s no exaggerating how dangerous all of this is.

We are living, unfortunately, in a golden age of conspiracy theories. Some, like Alex Jones of Infowars infamy, claim that mass shootings are actually carried out by “crisis actors” in order to give the government a rationale to seize everyone’s guns. Then there’s the anti-vaccine movement, currently standing in the way of any rational response to the COVID-19 epidemic. Indeed, a widely watched video called “Plandemic” falsely claims, among other things, that face masks make you sick and that people who’ve had flu shots are more likely to get COVID.

There’s nothing new about conspiracy theories, just as there’s nothing new about so-called fake news. Never mind the assassination of John F. Kennedy, the subject of a new, weirdly compelling 17-minute song-poem by Bob Dylan called “Murder Most Foul.” A century earlier, there were those who blamed (take your pick) Confederate President Jefferson Davis or Pope Pius IX for the assassination of Abraham Lincoln.

But conspiracy theorizing in the 21st century is supercharged by the internet, with a significant assist from Trump. Trump has indulged not just QAnon but also Alex Jones, the anti-vaxxers and all manner of foolishness about the deep state — the belief that the U.S. government is run by a shadowy cabal of bureaucrats and military officials who are seeking to undermine the president. At its heart, that’s what Trump seems to be referring to when he tweets about “Obamagate!,” a scandalous crime lacking both a scandal and a crime. And let’s not forget that Trump began his political career with a conspiracy theory that he made his own: falsely claiming that Barack Obama was not born in the United States and was thus ineligible to serve as president.

In recent days, the media have converged in an attempt to explain and debunk these various conspiracy theories. Last week, public radio’s “On the Media” devoted a segment to QAnon and “Plandemic.” The investigative website ProPublica has published a guide on how to reason with believers. The American Press Institute has offered tips for reporters. The Conversation, which brings academic research to a wider public, has posted an article headlined “Coronavirus, ‘Plandemic’ and the seven traits of conspiratorial thinking.”

By far the most ambitious journalistic effort is a special project published by The Atlantic called “Shadowland.” And the heart of it is a nearly 10,000-word article by the executive editor, Adrienne LaFrance, profiling the QAnon phenomenon and how it has infected thousands of ordinary people.

“QAnon is emblematic of modern America’s susceptibility to conspiracy theories, and its enthusiasm for them,” LaFrance writes. “But it is also already much more than a loose collection of conspiracy-minded chat-room inhabitants. It is a movement united in mass rejection of reason, objectivity, and other Enlightenment values. And we are likely closer to the beginning of its story than the end.”

What makes QAnon, “Plandemic” and other conspiracies so powerful is that believers have an explanation for every countervailing truth. Experts and others in a position of authority are automatically cast as part of the conspiracy, whether you’re talking about Dr. Anthony Fauci, Hillary Clinton or Joe Biden.

“For QAnon, every contradiction can be explained away; no form of argument can prevail against it,” LaFrance writes. This type of belief system is sometimes referred to as “epistemic closure” — the idea is that believers live in a self-contained bubble that explains everything and that can’t be penetrated by contrary facts.

What can the media do in the face of such intense beliefs? In all likelihood, the answer is: not much. There is a school of thought among some press critics that if only news organizations would push harder, prevaricate less and devote themselves more fully to truth-telling rather than to reporting “both sides,” then a new dawn of rationality would surely follow. But that fundamentally misunderstands the problem, because the mainstream, reality-based media are regarded as part of the conspiracy. Journalism is grounded in the Enlightenment values that LaFrance invokes — the expectation that false beliefs will give way when confronted by facts and truth. Unfortunately, that’s not the world we live in today.

It should be noted that after Donald Trump Jr. posted his hideous attack on Joe Biden, Instagram neither deleted his post nor took down his account. Instagram, as you probably know, is owned by Facebook and is thus firmly ensconced within the Zuckerborg, which wants us all to believe that it is so very much concerned about truth and hate speech.

Thus does such garbage become normalized. You see a reference to Biden as a pedophile, and it seems off the wall. But then you remember he’s apologized for being handsy with women. And wasn’t he accused of sexual assault? And now look — there’s something on the internet about Democrats and pedophilia. Gosh, how are we supposed to know what to think?

Welcome to our nightmare.


Why Facebook’s new oversight board is destined to be an exercise in futility

Former Guardian editor Alan Rusbridger is among the board members. Photo (cc) 2012 by Internaz.

Previously published at WGBHNews.org.

To illustrate how useless the newly unveiled Facebook oversight board will be, consider the top 10 fake-news stories shared by its users in 2019.

As reported by Business Insider, the list included such classics as “NYC Coroner who Declared Epstein death ‘Suicide’ worked for the Clinton foundation making 500k a year up until 2015,” “Omar [as in U.S. Rep. Ilhan Omar] Holding Secret Fundraisers with Islamic Groups Tied to Terror,” and “Pelosi Diverts $2.4 Billion From Social Security To Cover Impeachment Costs.”

None of these stories was even remotely true. Yet none of them would have been removed by the oversight board. You see, as Mathew Ingram pointed out in his Columbia Journalism Review newsletter, the 20-member board is charged only with deciding whether content that has already been taken down should be restored.

Now, it’s fair to acknowledge that Facebook CEO Mark Zuckerberg has an impossible task in bringing his Frankenstein’s monster under control. But that doesn’t mean any actual good is going to come of this exercise.

The board, which will eventually be expanded to 40, includes a number of distinguished people. Among them: Alan Rusbridger, the respected former editor of The Guardian, as well as international dignitaries and a Nobel Prize laureate. It has independent funding, Zuckerberg has agreed that its decisions will be binding, and eventually its purview may expand to removing false content.

But, fundamentally, this can’t work because Facebook was not designed to be controllable. In The New York Times, technology columnist Kara Swisher explained the problem succinctly. “Facebook’s problems are structural in nature,” she wrote. “It is evolving precisely as it was designed to, much the same way the coronavirus is doing what it is meant to do. And that becomes a problem when some of what flows through the Facebook system — let’s be fair in saying that much of it is entirely benign and anodyne — leads to dangerous and even deadly outcomes.”

It’s not really about the content. Stop me if you’ve heard this before, but what makes Facebook a threat to democracy is the way it serves up that content. Its algorithms — which are not well understood by anyone, even at Facebook — are aimed at keeping you engaged so that you stay on the site. And the most effective way to drive engagement is to show users content that makes them angry and upset.

Are you a hardcore supporter of President Donald Trump? If so, you are likely to see memes suggesting that COVID-19 is some sort of Democratic plot to defeat him for re-election — as was the case with a recent semi-fake-news story reporting that hospitals are being paid to attribute illnesses and deaths to the coronavirus even when they’re not. Or links to the right-wing website PJ Media aimed at stirring up outrage over “weed, opioids, booze and ciggies” being given to homeless people in San Francisco who’ve been quarantined. If you are a Trump opponent, you can count on Occupy Democrats to pop up in your feed and keep you in a constant state of agitation.

Now, keep in mind that all of this — even the fake stuff — is free speech that’s protected by the First Amendment. And all of this, plus much worse, is readily available on the open web. What makes Facebook so pernicious is that it amplifies the most divisive speech so that you’ll stay longer and be exposed to more advertising.

What is the oversight board going to do about this? Nothing.

“The new Facebook review board will have no influence over anything that really matters in the world,” wrote longtime Facebook critic Siva Vaidhyanathan at Wired, adding: “The board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies…. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook.”

In fact, Facebook’s algorithm has already been trained to ban some speech or slap warning labels on it. In practice, though, such mechanized censorship is aggravatingly inept. Recently the seal of disapproval was slapped on “Mourning in America,” an ad by the Lincoln Project, a group of “Never Trump” Republicans, because the fact-checking organization PolitiFact had called it partly false. The Lincoln Project, though, claimed that PolitiFact was wrong.

I recently received a warning for posting a photo of Benito Mussolini as a humorous response to a picture of Trump. No doubt the algorithm was too dumb to understand that I was making a political comment and was not expressing my admiration for Il Duce. Others have told me they’ve gotten warnings for referring to trolls as trolls, or for calling unmasked protesters against COVID-19 restrictions “dumber than dirt.”

So what is Facebook good for? I find it useful for staying in touch with family and friends, for promoting my work and for discussing legitimate news stories. Beyond that, much of it is a cesspool of hate speech, fake news and propaganda.

If it were up to me, I’d ban the algorithm. Let people post what they want, but don’t let Facebook robotically weaponize divisive content in order to drive up its profit margins. Zuckerberg himself has said that he expects the government will eventually impose some regulations. Well, this is one way to regulate it without actually making judgments about what speech will be allowed and what speech will be banned.

Meanwhile, I’ll watch with amusement as the oversight board attempts to wrestle this beast into submission. As Kara Swisher said, it “has all the hallmarks of the United Nations, except potentially much less effective.”

The real goal, I suspect, is to provide cover for Zuckerberg and make it appear that Facebook is doing something. In that respect, this initiative may seem harmless — unless it lulls us into complacency about more comprehensive steps that could be taken to reduce the harm that is being inflicted on all of us.

Talk about this post at Facebook.

Political ads on Facebook can be fixed. Is Mark Zuckerberg willing to try?

Photo via Wikimedia Commons

Previously published at WGBHNews.org.

If nothing else, Twitter CEO Jack Dorsey proved himself to be a master of timing when he announced last week that his social network will ban all political ads.

Anger was still raging over Mark Zuckerberg’s recent statement that Facebook would not attempt to fact-check political advertising, thus opening the door to a flood of falsehoods. Taking direct aim at Zuckerberg, Dorsey tweeted: “It’s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well…they can say whatever they want!’”

Not surprisingly, Twitter’s ad ban won widespread praise.

“This is a good call,” tweeted U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y., who had only recently tormented Zuckerberg at a congressional hearing. “Technology — and social media especially — has a powerful responsibility in preserving the integrity of our elections. Not allowing for paid disinformation is one of the most basic, ethical decisions a company can make.”

Added Hillary Clinton: “This is the right thing to do for democracy in America and all over the world. What say you, @Facebook?”

Oh, but if only it were that simple. Advertising on social media is a cheap and effective way for underfunded candidates seeking less prominent offices to reach prospective voters. No, it’s not good for democracy if we are overwhelmed with lies. But, with some controls in place, Facebook and Twitter can be crucial for political candidates who can’t afford television ads. To get rid of all political advertising would be to favor incumbents over outsiders and longshots.

“Twitter’s ban on political ads disadvantages challengers and political newcomers,” wrote University of Utah communications researcher Shannon C. MacGregor in The Guardian. “Digital ads are much cheaper than television ads, drawing in a wider scope of candidates, especially for down-ballot races.”

And let’s be clear: Facebook, not Twitter, is what really matters. Journalists pay a lot of attention to Twitter because other journalists use it — as do politicians, bots and sociopaths. Facebook, with more than 2 billion active users around the world, is exponentially larger and much richer. For instance, the 2020 presidential candidates so far have spent an estimated $46 million on political ads on Facebook, compared to less than $3 million spent by all candidates on Twitter ads during the 2018 midterms.

But is political advertising on Facebook worth saving given the falsehoods, the attempts to deceive, that go way beyond anything you’re likely to see on TV?

In fact, there are some common-sense steps that might help fix Facebook ads.

Writing in The Boston Globe, technology journalist Josh Bernoff suggested that Facebook ban all targeting for political ads except for geography. In other words, candidates for statewide office ought to be able to target their ads so they’re not paying to reach Facebook users in other states. But they shouldn’t be able to target certain slices of the electorate, like liberals or conservatives, homeowners or renters, white people or African Americans (or “Jew haters,” as ProPublica discovered was possible in a nauseating exposé a couple of years ago).

Bernoff also suggested that politicians be required to provide documentation to back up the facts in their ads. It’s a good idea, though it may prove impractical.

“Facebook is incapable of vetting political ads effectively and consistently at the global scale. And political ads are essential to maintaining the company’s presence in countries around the world,” wrote Siva Vaidhyanathan, author of “Antisocial Media: How Facebook Disconnects Us and Undermines Democracy,” in The New York Times.

But we may not have to go that far. The reason ads spreading disinformation are so effective on Facebook is that they fly under the radar, seen by tiny slices of the electorate and thus evading broader scrutiny. In an op-ed piece in The Washington Post, Ellen L. Weintraub, chair of the Federal Election Commission, argued that the elimination of microtargeting could result in more truthful, less toxic advertising.

“Ads that are more widely available will contribute to the robust and wide-open debate that is central to our First Amendment values,” Weintraub wrote. “Political advertisers will have greater incentives to be truthful in ads when they can more easily and publicly be called to account for them.”

Calling for political ads to be banned on Facebook is futile. We live our lives on the internet these days, and Facebook has become (God help us) our most important distributor of news and information.

As Supreme Court Justice Louis Brandeis once wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence.”

Nonprofit news update

Earlier this week The Salt Lake Tribune reported that the IRS had approved its application to become a nonprofit organization, making it the first daily newspaper to take that step. Unlike The Philadelphia Inquirer and the Tampa Bay Times, for-profit newspapers owned by nonprofit foundations, the Tribune will be fully nonprofit, making it eligible for tax-deductible donations.

Nonprofit news isn’t exactly a novelty. Public media organizations like PBS, NPR and, yes, WGBH are nonprofit organizations. So are a number of pioneering community websites such as the New Haven Independent and Voice of San Diego. And if the Tribune succeeds, it could pave the way for other legacy newspapers.

Last May I wrote about what nonprofit status in Salt Lake could mean for the struggling newspaper business. This week’s announcement is a huge step forward.

Talk about this post on Facebook.

Facebook News may be a boon to big media. But will local news get left behind?

Mark Zuckerberg. Photo (cc) 2012 by JD Lasica.

Previously published at WGBHNews.org.

Imagine for a moment that you run a small community newspaper or website. You have a Facebook page. But people tell you that even though they’ve “liked” it, they almost never see content from your page show up in their News Feed. And thus one of the most important channels for distributing journalism in the social-media era isn’t working for you.

According to some estimates, “organic reach” — that is, the percentage of users who’ve liked your page and who actually see your content — can be as low as 2 percent. What can you do? Well, you can give Mark Zuckerberg access to your credit card, which will boost your reach considerably. But if you can’t afford to pay, you’d be better off handing out refrigerator magnets with your website’s URL on them than depending on Facebook.

Now imagine that you’re the publisher of a major national news organization like The New York Times, The Washington Post or BuzzFeed. The Zuckerborg is about to bestow upon you millions of dollars. That’s because you’ve agreed to be part of Facebook News, a new tab in the service’s mobile app for curated, reliable journalism. (The feature is being rolled out slowly, and I have not seen it yet.)

There are many reasons to be skeptical of Facebook’s latest foray into news, but surely one of the most important is this: At a time when local news is under unprecedented economic pressure, the News Tab will only widen the gap between relatively well-off, highly visible national news organizations and small local projects. The national sites will get paid; the local sites will be billed monthly.

It’s possible that this could change over time. According to Facebook’s announcement, “we’ll showcase local original reporting by surfacing local publications from the largest major metro areas across the country, beginning with New York, Los Angeles, Chicago, Dallas-Fort Worth, Philadelphia, Houston, Washington DC, Miami, Atlanta and Boston. In the coming months, we’ll include local news from Today In, our local news and community information tab, which recently expanded to over 6,000 US towns and cities.”

So, at least at first, it sounds like large regional news organizations will be included. But it’s not clear how or if any of that money will ever trickle down to the laid-off community-news reporter who’s trying to start a hyperlocal site, or to the volunteers who provide coverage that their chain-owned weekly ignores.

There are other potential hazards as well. Let’s start with the conflicts of interest posed by news organizations choosing to do business with our most controversial tech company.

“Payments to publishers for stories that Facebook might otherwise aggregate for free is a boon for journalism,” wrote Emily Bell at the Columbia Journalism Review. “The idea that there will be a daily, regular newsfeed that’s not filled with nonsense is a boon for Facebook users. The delineation of news as a category distinct from other ‘content’ is a boon for democracy. Yet the readiness with which publishers are seemingly embracing this new business arrangement is discomfiting, given Facebook’s track record, and the total lack of regulation. Will News Corp. [parent company of The Wall Street Journal and Fox News, both part of the News Tab] and others disclose their relationship with Facebook when they cover the tech world? One can only hope so.”

Another problem is the very odd presence of Breitbart News as part of the News Tab. It’s one thing to want to include a conservative-leaning news organization; it’s quite another to add weaponized propaganda to a list that is supposed to comprise factual, verified journalism. More than anything, the inclusion of Breitbart appears to be part of Zuckerberg’s continued efforts to suck up to right-wing critics who accuse Facebook and other social-media platforms of liberal bias.

Finally, there is the question of whether Facebook this time will stick with its newfound embrace of news. Over the years the company has alternately accepted its role as a platform for journalism and walked away from it. About a decade ago, it unveiled a program called the Social Reader, inviting news organizations to use it and set up shop inside Facebook. The Washington Post and The Guardian, in particular, had considerable success with it. And then Zuckerberg changed his mind.

David Beard, a veteran journalist who was working on social-media strategies for the Post at that time, told me in a 2015 interview that he began developing email newsletters for the paper in direct response to the Social Reader fiasco. “For a while, we had tons of readers in India and the Philippines and some other places,” he said. “And then Facebook changed the algorithm, and we suddenly had none. So my learning from that episode was, is there something we can do without a mercenary, where we own the machinery?”

Now, once again, news organizations are relying on Mark Zuckerberg’s machinery. Will it be different this time? I hope so. Zuckerberg is under fire from all directions these days. He may sincerely hope that leading people away from disinformation and toward real news will not only ease the pressure on him and his company, but will be good for democracy as well.

But few things are more vital for fixing democracy than bolstering local news. At the very least, Facebook News is off to an unacceptably slow start at the local level. If that doesn’t change, then Zuckerberg’s latest idea may wind up being just one more example of a promise unfulfilled.

Talk about this post on, well, you know, Facebook.

Overcoming digital distraction. Plus, The New York Times’ $1.1b folly, and saving community access TV.

Previously published at WGBHNews.org.

Do you find it more difficult to read a book these days? Or even a long article? Do you catch yourself pausing every so often (OK, make that every few minutes) to see what’s new on Facebook, scroll through Twitter, check email, or possibly all of the above? Has concentration given way to distraction?

You’re not alone. For years, writers like Nicholas Carr (“The Shallows”) and Virginia Heffernan (“Magic and Loss”) have worried that the internet is rewiring our brains and transforming us from deep readers into jittery skimmers. In “Ten Arguments for Deleting Your Social Media Accounts Right Now,” Jaron Lanier writes that — well, you know.

The latest entry in what has grown into a burgeoning list of digital jeremiads is an essay that appeared in The New York Times over the weekend. The piece, by Kevin Roose, is headlined “Do Not Disturb: How I Ditched My Phone and Unbroke My Brain.” Over the course of nearly 2,500 words, Roose describes in anguished detail how his smartphone had left him “incapable of reading books, watching full-length movies or having long uninterrupted conversations.” Social media, he adds, had made him “angry and anxious.”

Roose’s solution: A detox program overseen by Catherine Price, the author of “How to Break Up with Your Phone.” Without going into detail (after all, you can read about it yourself), by the end of the program our hero is happier, healthier, and less addicted to his phone.

Digital dependency is a real problem, and it’s hard to know what to do about it. I know that as well as anyone. Over the years, my writing has become symbiotically enmeshed with the internet — I look things up and fact-check as I go, and I can’t imagine returning to the days of writing first, checking later, even though the result would probably be more coherent. Social media and email are ever-present impediments to the task at hand.

But it’s a lot easier to describe what we ought to do than to actually do it. I recommend mindful reading either in print or on one of the more primitive Kindles. In reality, I read the news on an iPad while admonishing myself not to tweet any of it — usually without much success. I need to be on social media for professional purposes, which makes it all the harder to stay away from energy-draining non-professional uses.

We are not doing ourselves any favors. “You know the adage that you should choose a partner on the basis of who you become when you’re around the person?” writes Lanier. “That’s a good way to choose technologies, too.”

The problem is that we didn’t choose our technologies. They chose us, backed by the likes of Mark Zuckerberg, whose billions grow every time his engineers figure out a way to keep us more addicted and less able to break ourselves of the habit. We need solutions. I’ll get back to you on that. Right after I check Facebook. Again.

Looking back at a deal gone bad

More than a quarter-century after the New York Times Co. bought The Boston Globe for the unheard-of price of $1.1 billion, the transaction remains a sore point in some circles. As I’m sure you know, Red Sox principal owner John Henry bought the paper for just $70 million in 2013, which turned out to be less than the value of the real estate.

In her new book, “Merchants of Truth,” former New York Times executive editor Jill Abramson is blisteringly critical of the 1993 acquisition. Describing the Times Co.’s strategy of that era, she writes: “Some recent business blunders had made the structural damage inflicted by the internet even more painful. The worst was the purchase of The Boston Globe at precisely the moment the glory days of newspaper franchises were ending.” (My “Beat the Press” colleague Emily Rooney interviewed Abramson for our most recent broadcast, and she did not shy away from asking some tough questions about errors in Abramson’s book as well as credible accusations of plagiarism.)

In a recent interview with the newspaper analyst Ken Doctor, Times Co. CEO Mark Thompson described what he and his fellow executives were up against in late 2012: “The thinking at the top of the company when I arrived was that the Times should sell The Boston Globe, and that it was going to be fantastically difficult to manage the Globe in a way where it wasn’t going to become over time a net depleter of the total business, rather than something that was going to add to the success of the company.”

So was the Times Co.’s decision to pay all that money for the Globe really such a boneheaded move? When I was interviewing people for my book “The Return of the Moguls,” I got some pretty strong pushback to that proposition from former Globe editor Matt Storin and current editor Brian McGrory.

Storin told me that the Globe turned a profit of some $90 million in one of its first years under Times Co. ownership. “Imagine today if you made a $90 million profit,” he said. “I mean, those classified ads were just a gold mine. The Times knew that, and I think that’s one of the reasons why they bought us. They didn’t foresee that that was going to disappear, obviously.”

McGrory sounded a similar theme. “For 15 to 18 years there were Brinks trucks driving down I-95 with tens of millions of dollars every year, amounting to hundreds of millions over that time, taking money from Boston to New York,” he said. “They made their investment just fine.”

The reality is most likely somewhere in the middle. From 1993 until about 2005, the Globe earned plenty of money for the Times Co. But then things went seriously south, with the Globe losing $85 million by 2009, a situation so dire that the Times threatened to shut down the paper unless the unions agreed to $20 million worth of givebacks. (They did.)

For the Times Co., the real mistake wasn’t in buying the Globe — it was in keeping it for too long.

Last stand for community access TV

This past November I wrote about an industry-supported effort by the FCC to allow the cable companies to save money by cutting what they spend to support local public-access operations.

Naturally, the FCC is pushing ahead with this anti-consumer proposal. So now advocates of local do-it-yourself media are asking supporters to sign an online petition to Congress asking that lawmakers stop the new rule from taking effect.

“PEG [public, educational, and governmental] access channels provide local content in communities that are not served by the broadcast industry and are increasingly under-served by newspapers,” says the petition. “They help prevent ‘media deserts’ in towns and cities across the U.S. and ensure diversity of opinion at the local level.”

Will it matter? I suspect that elected members of Congress from both parties will prove more amenable to public pressure than FCC chair Ajit Pai, who led the campaign to kill net neutrality. But we won’t know unless we try. So let’s try.

Talk about this post on Facebook.

Facing up to the damage wrought by Facebook

Previously published at The Arts Fuse.

Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, by Siva Vaidhyanathan. Oxford University Press, 288 pages, $24.95.

The reason that Facebook is so evil is that Mark Zuckerberg is so good. According to Siva Vaidhyanathan, a professor of media studies at the University of Virginia, the former wunderkind has drunk deeply of his own Kool-Aid and genuinely believes that his creation is a benevolent force in the world. “Zuckerberg has a vibrant moral passion,” Vaidhyanathan writes in his new book, Antisocial Media. “But he lacks a historical sense of the horrible things that humans are capable of doing to each other and the planet.”

From propagating fake news to violating our privacy, from empowering authoritarian regimes to enabling anti-Semitic advertising, Facebook has become the social network everyone loves to hate. Vaidhyanathan, whose previous books include The Googlization of Everything — and Why We Should Worry (2011), has produced a valuable guide, written in clear, non-academic prose, to the monstrous force Facebook has become. And if his overview of what’s gone wrong with Facebook will seem familiar to those of us who obsess about these things, it nevertheless serves as a worthwhile introduction to the Zuckerborg and all that it has wrought. If only Vaidhyanathan had some compelling ideas on what to do about it. If only any of us did.

Facebook’s malign omnipresence came about quickly. Founded in 2004, Facebook didn’t become a behemoth until the dawn of the current decade. With 2.2 billion active monthly users, it is, for many people, synonymous with the internet itself — the place where your aunt and uncle share photos of their pets, updates from their vacations, and, of course, links to memes and conspiracy theories about George Soros’s non-existent Nazi past and the “deep state” plot to overthrow President Trump.

Such craziness has serious real-world consequences. It may not be an exaggeration to say that Trump became president partly because of Facebook, as Russian propagandists, Cambridge Analytica, and the Trump campaign itself all bought ads to bolster Trump’s message and to persuade possible Hillary Clinton voters to stay home on Election Day. The Facebook effect was probably not as powerful as James Comey’s bizarre obsession with Clinton’s emails — or, for that matter, Electoral College math. But given that Trump was elected by just a handful of votes in a few swing states, it seems plausible that Clinton might otherwise have overcome those obstacles.

There’s nothing new about political advertising, even if Facebook’s tools for microtargeting tiny slices of users based on the information they themselves have provided are unusually precise and pernicious. More ominous, Vaidhyanathan argues, is that the Facebook environment encourages the sort of fragmented thinking and emotional reactions that are antithetical to healthy civic engagement and that help give rise to an authoritarian figure like Trump. And since Facebook’s algorithm is designed to give you more of the type of content that you interact with, you become increasingly sealed off from viewpoints you don’t agree with. Vaidhyanathan’s attempt to shoehorn Trump into his overarching theory of Facebook is a bit awkward given that Trump’s social-media drug of choice is Twitter. Nevertheless, he is surely on to something in arguing that the reductive discourse that characterizes Facebook helped fuel Trump’s rise.

“After a decade of deep and constant engagement with Facebook, Americans have been conditioned to experience the world Trump style,” Vaidhyanathan writes. “It’s almost as if Trump were designed for Facebook and Facebook were designed for him. Facebook helped make America ready for Trump.”

Vaidhyanathan is not the first to take note of the distractedness that has come to define the digital age. Nicholas Carr, in his 2010 book The Shallows: What the Internet Is Doing to Our Brains, laments that the internet has given rise to a culture of skimming rather than deep reading and warns: “As our window onto the world, and onto ourselves, a popular medium molds what we see and how we see it — and, eventually, if we use it enough, it changes who we are, as individuals and as a society.” Carr barely mentions Facebook, which at the time had not yet become a hegemonic force. But there is little doubt that it has only accelerated those trends.

So what is to be done? In a healthier political climate, Vaidhyanathan writes, we might expect our elected officials to act — by mandating greater privacy protections and by forcing Facebook to sell off some of its related businesses such as Instagram, WhatsApp, and Messenger. But he holds out little hope, even though Europe is moving in that direction. And he identifies a specific reason for his pessimism by describing two competing philosophies of corporate leadership in the United States, neither suited to dealing with the menace we face. One, market fundamentalism, holds that the sole obligation of a corporation is to make as much money as possible for its shareholders. The other, the social responsibility model, sees a role for corporations — but not for government — in addressing environmental and cultural concerns and in helping to make the world better. Vaidhyanathan places Facebook squarely within the latter tradition. Remember, he sees Zuckerberg at root as an earnest if misguided idealist.

The problem is that both of these philosophies are based on differing notions of corporate libertarianism. Each exalts the business leader as the exemplar to which society should aspire. By embracing a binary view of the corporation’s role, we have, Vaidhyanathan argues, essentially eliminated the public sphere from the discussion of how to solve universal problems. Rather than looking to elected leaders, we look to people like Bill Gates, Elon Musk, Laurene Powell Jobs, and, yes, Mark Zuckerberg. We embrace “innovation” rather than real progress that benefits everyone. Given the state of our politics, that might seem like logical behavior. But it’s also behavior based on the nostrum popularized by Ronald Reagan that government is the problem, not the solution. Say something often enough over the course of nearly four decades and it becomes true.

There is some hope. Although Vaidhyanathan doesn’t mention it, there are signs that journalism is becoming less dependent on Facebook. According to the web metrics firm Chartbeat, news organizations are seeing a decreasing amount of referral traffic from Facebook and an increasing amount of direct traffic to their websites and other digital platforms. “The increase in direct traffic matters because it enables publishers to control their own destiny,” writes Lucia Moses of Digiday. “They have more data on reader behavior, which enables them to better target readers with more content and offers for subscriptions and other revenue drivers.” Given the parlous state of the news business, any shift away from Facebook is a positive development.

Moreover, there are signs that we have reached peak Facebook, with young people in particular turning away from the service. According to Hanna Kozlowska, writing in Quartz, Facebook usage among 12- to 24-year-olds is declining, and overall usage in the United States and Canada is starting to shrink as well. That’s not to say Facebook is about to go the way of Friendster or MySpace. But perhaps a shrinking user base, combined with the controversy and legal woes Zuckerberg is dealing with over privacy violations and other scandals, will lead to a kinder, gentler Facebook.

Ultimately, Vaidhyanathan says, it’s up to us. “Reviving a healthy social and political life would require a concerted recognition of the damage Facebook has done and a campaign to get beyond its spell,” he writes. “If millions were urged to put Facebook in its proper place, perhaps merely as a source of social and familial contact rather than political knowledge or activism, we could train ourselves out of the habit.” Later he writes: “Resistance is futile. But resistance seems necessary.”

Talk about this post on, well, you know, Facebook.