There’s been a significant new development in the antitrust cases being brought against Google and Facebook.
On Friday, Richard Nieva reported in BuzzFeed News that a lawsuit filed in December 2020 by Texas and several other states claims that Google CEO Sundar Pichai and Facebook CEO Mark Zuckerberg “personally signed off on a secret advertising deal that allegedly gave Facebook special privileges on Google’s ad platform.” That information was recently unredacted.
Nieva writes:
The revelation comes as both Google and Facebook face a crackdown from state and federal officials over antitrust concerns for their business practices. Earlier this week, a judge rejected Facebook’s motion to dismiss a lawsuit by the Federal Trade Commission that accuses the social network of using anticompetitive tactics.
The action being led by Texas is separate from an antitrust suit brought against Google and Facebook by more than 200 newspapers around the country. The suit essentially claims that Google has monopolized the digital ad marketplace in violation of antitrust law and has cut Facebook in on the deal in order to stave off competition. Writing in Business Insider, Martin Coulter puts it this way:
Most of the allegations in the suit hinge on Google’s fear of “header bidding,” an alternative to its own ad auctioning practices described as an “existential threat” to the company.
As I’ve written previously, the antitrust actions are potentially more interesting than the usual complaint made by newspapers — that Google and Facebook have repurposed their journalism and should pay for it. That’s never struck me as an especially strong legal argument, although it’s starting to happen in Australia and Western Europe.
The antitrust claims, on the other hand, are pretty straightforward. You can’t control all aspects of a market, and you can’t give special treatment to a would-be competitor. Google and Facebook, of course, have denied any wrongdoing, and that needs to be taken seriously. But keep an eye on this. It could shake the relationship between the platforms and the publishers to the very core.
How much of a financial hit would it take to force Mark Zuckerberg to sit up and pay attention?
We can be reasonably sure he didn’t lose any sleep when British authorities fined Facebook a paltry $70 million earlier this fall for withholding information about its acquisition of Giphy, an app for creating and hosting animated graphics. Maybe he stirred a bit in July 2019, when the Federal Trade Commission whacked the company with a $5 billion penalty for violating its users’ privacy — a punishment described by the FTC as “the largest ever imposed” in such a case. But then he probably rolled over and caught a few more z’s.
OK, how about $150 billion? Would that do it?
We may be about to find out. Because that’s the price tag lawyers for Rohingya refugees placed on a class-action lawsuit they filed in California last week against Facebook — excuse me, make that Meta Platforms. As reported by Kelvin Chan of The Associated Press, the suit claims that Facebook’s actions in Myanmar stirred up violence in a way that “amounted to a substantial cause, and eventual perpetuation of, the Rohingya genocide.”
Even by Zuckerberg’s standards, $150 billion is a lot of money. Facebook’s revenues in 2020 were just a shade under $86 billion. And though the price tags lawyers affix to lawsuits should always be taken with several large shakers of salt, the case over genocide in Myanmar could be just the first step in holding Facebook to account for the way its algorithms amplify hate speech and disinformation.
The lawsuit is also one of the first tangible consequences of internal documents provided earlier this fall by Frances Haugen, a former Facebook employee turned whistleblower. Those documents showed that company executives knew their algorithms were wreaking worldwide havoc and did little or nothing about it. In addition to providing some 10,000 documents to the U.S. Securities and Exchange Commission, Haugen told her story anonymously to The Wall Street Journal, then went public by appearing on “60 Minutes” and testifying before Congress.
The lawsuit is a multi-country effort, as Mathew Ingram reports for the Columbia Journalism Review, and the refugees’ lawyers are attempting to apply Myanmar’s laws in order to get around the United States’ First Amendment, which — with few exceptions — protects even the most loathsome speech.
But given that U.S. law may prevail, the lawyers have also taken the step of claiming that Facebook is a “defective” product. According to Tim De Chant, writing at Ars Technica, that claim appears to be targeted at Section 230, which would normally protect Facebook from legal liability for any content posted by third parties.
Facebook’s algorithms are programmed to show you more and more of the content you engage with, which leads to the amplification of the sort of violent posts that helped drive genocide against the Rohingyas. A legal argument aimed at the algorithm-driven spread of that content, rather than at the content itself, would presumably find more favor in the U.S. court system.
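To make that dynamic concrete, here is a minimal, purely hypothetical sketch of engagement-weighted feed ranking. It is not Facebook’s actual code or any real API; the class and function names are invented for illustration. It simply shows how a scoring rule that rewards whatever draws reactions keeps pushing the most provocative posts to the top.

```python
# Hypothetical sketch of engagement-weighted ranking, for illustration only.
# It is not Facebook's code; it just shows the feedback loop described above:
# posts that draw reactions score higher, get shown more, and draw still more.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    impressions: int = 0
    engagements: int = 0  # likes, shares, comments, angry reactions, etc.


def engagement_score(post: Post) -> float:
    """Score a post by its engagement rate; unseen posts get a neutral prior."""
    if post.impressions == 0:
        return 0.5
    return post.engagements / post.impressions


def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order a user's feed by engagement score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)[:limit]

# The loop: whatever tops the feed collects more engagements, so it tops the
# next feed too, whether it is a family photo or an incendiary rumor.
```

Nothing in a rule like this distinguishes a baby picture from a call to violence; it optimizes only for reaction, which is precisely the behavior the plaintiffs are targeting.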
“While the Rohingya have long been the victims of discrimination and persecution, the scope and violent nature of that persecution changed dramatically in the last decade, turning from human rights abuses and sporadic violence into terrorism and mass genocide,” the lawsuit says. “A key inflection point for that change was the introduction of Facebook into Burma in 2011, which materially contributed to the development and widespread dissemination of anti-Rohingya hate speech, misinformation, and incitement of violence—which together amounted to a substantial cause, and perpetuation of, the eventual Rohingya genocide.”
Facebook has previously admitted that its response to the violence in Myanmar was inadequate. “We weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence,” the company said in 2018.
The lawsuit at least theoretically represents an existential threat to Facebook, and no doubt the company will fight back hard. Still, its initial response emphasized its regrets and the steps it has taken over the past several years to lessen the damage. A Meta spokesperson recently issued this statement to multiple news organizations: “We’re appalled by the crimes committed against the Rohingya people in Myanmar. We’ve built a dedicated team of Burmese speakers, banned the Tatmadaw [the Burmese armed forces], disrupted networks manipulating public debate and taken action on harmful misinformation to help keep people safe. We’ve also invested in Burmese-language technology to reduce the prevalence of violating content. This work is guided by feedback from experts, civil society organizations and independent reports, including the UN Fact-Finding Mission on Myanmar’s findings and the independent Human Rights Impact Assessment we commissioned and released in 2018.”
No doubt Zuckerberg and company didn’t knowingly set out to contribute to a human-rights disaster that led to a rampage of rape and murder, with nearly 7,000 Rohingyas killed and 750,000 forced out of the country. Yet this tragedy was the inevitable consequence of the way Facebook works, and of its top executives’ obsession with growth over safety.
As University of Virginia media studies professor and author Siva Vaidhyanathan has put it: “The problem with Facebook is Facebook.”
Maybe the prospect of being forced to pay for the damage they have done will, at long last, force Zuckerberg, Sheryl Sandberg and the rest to do something about it.
How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.
Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”
And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.
No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.
If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.
In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:
“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”
Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.
“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”
LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.
In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.
But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.
That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more, and more extreme, versions of it.
And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.
Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?
Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.
In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.
Zuckerberg has shown no inclination to change. It’s long past time to force his hand.
Could this be the beginning of the end for Facebook?
Even the Cambridge Analytica scandal didn’t bring the sort of white-hot scrutiny the social media giant has been subjected to over the past few weeks — starting with The Wall Street Journal’s “Facebook Files” series, which proved that company officials were well aware their product had gone septic, and culminating in Sunday’s “60 Minutes” interview with the Journal’s source, Frances Haugen.
As we’ve seen over and over, though, these crises have a tendency to blow over. You could say that “this time it feels different,” but I’m not sure it does. Mark Zuckerberg and company have shown an amazing ability to pick themselves up and keep going, mainly because their 2.8 billion engaged monthly users show an amazing ability not to care.
On Monday, New York Times technology columnist Kevin Roose wondered whether the game really is up and argued that Facebook is now on the decline. He wrote:
What I’m talking about is a kind of slow, steady decline that anyone who has ever seen a dying company up close can recognize. It’s a cloud of existential dread that hangs over an organization whose best days are behind it, influencing every managerial priority and product decision and leading to increasingly desperate attempts to find a way out. This kind of decline is not necessarily visible from the outside, but insiders see a hundred small, disquieting signs of it every day — user-hostile growth hacks, frenetic pivots, executive paranoia, the gradual attrition of talented colleagues.
The trouble is, as Roose concedes, it could take Facebook an awfully long time to die, and it may prove to be even more of a threat to our culture during its waning years than it was on the way up.
I suspect what keeps Facebook from imploding is that, for most people, it works as intended. Very few of us are spurning vaccines or killing innocent people in Myanmar because of what we’ve seen on Facebook. Instead, we’re sharing personal updates, family photos and, yes, some news stories we’ve run across. For the most part, I like Facebook, even as I recognize what a toxic effect it’s having.
The very real damage that Facebook is doing seems far removed from the experience most of its customers have. And that is what’s going to make it incredibly difficult to do anything about it.
Facebook has long had a tortured relationship with journalism. When I was reporting for “The Return of the Moguls” in 2015 and ’16, news publishers were embracing Instant Articles, news stories that would load quickly but that would also live on Facebook’s platform rather than the publisher’s.
The Washington Post was so committed to the project that it published every single piece of content as an Instant Article. Shailesh Prakash, the Post’s chief technologist, would talk about the “Facebook barbell,” a strategy that aimed to convert users at the Facebook end of the barbell into paying subscribers at the Post end.
Instant Articles never really went away, but enthusiasm waned — especially when, in 2018, Facebook began downgrading news in its algorithm in favor of posts from family and friends.
Nor was that the first time Facebook pulled a bait-and-switch. Earlier it had something called the Social Reader, inviting news organizations to develop apps that would live within that space. Then, in 2012, it made changes that resulted in a collapse in traffic. Former Post digital editor David Beard told me that’s when he began turning his attention to newsletters, which the Post could control directly rather than having to depend on Mark Zuckerberg’s whims.
Now they’re doing it again. Mathew Ingram of the Columbia Journalism Review reports that Facebook is experimenting with showing users less political news in their feeds, as well as with changing the way it measures how users interact with such content. The change, needless to say, comes after years of controversy over Facebook’s role in promoting misinformation and disinformation about politics, the Jan. 6 insurrection and the COVID-19 pandemic.
I’m sure Zuckerberg would be very happy if Facebook could serve solely as a platform for people to share uplifting personal news and cat photos. It would make his life a lot easier. But I’m also sure that he would be unwilling to see Facebook’s revenues drop even a little in order to make that happen. Remember that story about Facebook tweaking its algorithm to favor reliable news just before the 2020 election — and then changing it back afterwards because it found that users spent less time on the platform? So he keeps trying this and that, hoping to alight upon the magic formula that will make him and his company less hated, and less likely to be hauled before congressional committees, without hurting his bottom line.
One of the latest efforts is his foray into local news. If Facebook can be a solution to the local news crisis, well, what’s not to like? Earlier this year Facebook and Substack announced initiatives to bring local news projects to their platforms for some very, very short money.
Earlier today, Sarah Scire of the Nieman Journalism Lab profiled some of the 25 local journalists who are setting up shop on Bulletin, Facebook’s new newsletter platform. They seem like an idealistic lot, with about half the newsletters being produced by journalists of color. But there are warning signs. Scire writes:
Facebook says it’s providing “licensing fees” to the local journalists as part of a “multi-year commitment” but spokesperson Erin Miller would not specify how much the company is paying the writers or for how long. The company has said it won’t take a cut of subscription revenue “for the length of these partnerships.” But, again, it’s not saying how long those partnerships will last.
How long will Facebook’s commitment to local news last before it goes the way of the Social Reader and Instant Articles? I don’t like playing the cynic, especially about a program that could help community journalists and the audiences they serve. But cynicism about Facebook is the only stance that seems realistic after years of bad behavior and broken promises.
For researchers, Facebook is something of a black box. It’s hard to know what its 2.8 billion active users across the globe are seeing at any given time because the social media giant keeps most of its data to itself. If some users are seeing ads aimed at “Jew haters,” or Russian-generated memes comparing Hillary Clinton to Satan, well, so be it. Mark Zuckerberg has his strategy down cold: apologize when exposed, then move on to the next appalling scheme.
Some data scientists, though, have managed to pierce the darkness. Among them are Laura Edelson and Damon McCoy of New York University’s Center for Cybersecurity. With a tool called Ad Observer, which volunteers add to their browsers, they were able to track ads that Facebook users were being exposed to and draw some conclusions. For instance, they learned that users are more likely to engage with extreme falsehoods than with truthful material, and that more than 100,000 political ads are missing from an archive Facebook set up for researchers.
As you would expect, Facebook executives took these findings seriously. So what did they do? Did they change the algorithm to make it more likely that users would see reliable information in their news feed? Did they restore the missing ads and take steps to make sure such omissions wouldn’t happen again?
They did not. Instead, they cut off access to Edelson’s and McCoy’s accounts, making it harder for them to dig up such embarrassing facts in the future.
“There is still a lot of important research we want to do,” they wrote in a recent New York Times op-ed. “When Facebook shut down our accounts, we had just begun studies intended to determine whether the platform is contributing to vaccine hesitancy and sowing distrust in elections. We were also trying to figure out what role the platform may have played leading up to the Capitol assault on Jan. 6.”
In other words, they want to find out how responsible Zuckerberg, Sheryl Sandberg and the rest are for spreading a deadly illness and encouraging an armed insurrection. No wonder Facebook looked at what the researchers were doing and told them, gee, you know, we’d love to help, but you’re violating our privacy rules.
But that’s not even a real concern. Writing at the Columbia Journalism Review, Mathew Ingram points out that the privacy rules Facebook agreed to following the Cambridge Analytica scandal apply to Facebook itself, not to users who voluntarily agree to provide information to researchers.
Ingram quotes Princeton professor Jonathan Mayer, an adviser to Vice President Kamala Harris when she was a senator, who tweeted: “Facebook’s legal argument is bogus. The order restricts how *Facebook* shares user information. It doesn’t preclude *users* from volunteering information about their experiences on the platform, including through a browser extension.”
As Ingram describes it, and as Edelson and McCoy themselves say, Facebook’s actions didn’t stop their work altogether, but they have slowed it down and made it more difficult. Needless to say, the company should be doing everything it can to help with such research. Then again, Zuckerberg has never shown much regard for such mundane matters as public health and the future of democracy, especially when there’s money to be made.
By contrast, Facebook’s social media competitor Twitter has actually been much more open about making its data available to researchers. My Northeastern colleague John Wihbey, who co-authored an important study several years ago about how journalists use Twitter, says the difference explains why there have been more studies published about Twitter than Facebook. “This is unfortunate,” he says, “as it is a smaller network and less representative of the general public.”
It’s like the old saw about looking for your car keys under a street light because that’s where the light is. Trouble is, with fewer than 400 million active users, Twitter is little more than a rounding error in Facebook’s universe.
Earlier this year, MIT’s Technology Review published a remarkable story documenting how Facebook shied away from cracking down on extremist content, focusing instead on placating Donald Trump and other figures on the political right before the 2020 election. Needless to say, the NYU researchers represent an especially potent threat to the Zuckerborg since they plan to focus on the role that Facebook played in amplifying the disinformation that led to the insurrection, whose aftermath continues to befoul our body politic.
When the history of this ugly era is written, the two media giants that will stand out for their malignity are Fox News, for knowingly poisoning tens of millions of people with toxic falsehoods, and Facebook, for allowing its platform to be used to amplify those falsehoods. Eventually, the truth will be told — no matter what steps Zuckerberg takes to slow it down. There should be hell to pay.
The information gap here in Medford is not much different from the situation in hundreds, if not thousands, of communities across the country. Despite the city’s population of nearly 60,000 and its five reasonably healthy business districts, our Gannett weekly has not had a single full-time staff reporter since the fall of 2019.
So we do what people do everywhere — we rely on a few Facebook groups, Nextdoor and Patch. Of course, there is no substitute for a news source that does the unglamorous work of sitting through governmental meetings (which the weekly does on a piecemeal basis), following neighborhood issues, and keeping tabs on the local police. A lot of times we simply ask questions. Why was a helicopter hovering over the Mystic Lakes? When will everyone be allowed back in the school buildings?
Earlier this week, Brandy Zadrozny wrote a lengthy feature for NBC News about what’s happened in Beaver County, Pennsylvania, where Gannett and its predecessor company, GateHouse Media, have decimated The Times of Beaver County since acquiring it from local ownership in 2017.
In particular, residents have turned to a Facebook group called The News Alerts of Beaver County, an occasionally useful forum with 43,000 members that all too often devolves into a cesspool of false rumors about murders, human trafficking and child molesters. Zadrozny writes:
The News Alerts of Beaver County isn’t home base for a gun-wielding militia, and it isn’t a QAnon fever swamp. In fact, the group’s focus on timely and relevant information for a small real-world community is probably the kind that Chief Executive Mark Zuckerberg envisioned when he pivoted his company toward communities in 2017.
And yet, the kind of misinformation that’s traded in The News Alerts of Beaver County and thousands of other groups just like it poses a unique danger. It’s subtler and in some ways more insidious, because it’s more likely to be trusted. The misinformation — shared in good faith by neighbors, sandwiched between legitimate local happenings and overseen by a community member with no training but good intentions — is still capable of tearing a community apart.
Zadrozny also quotes Jennifer Grygiel, a communications professor at Syracuse University, who tells her: “In a system with inadequate legitimate local news, they may only be able to get information by posting gossip and having the police correct it. One could argue this is what society will look like if we keep going down this road with less journalism and more police and government social media.”
The area does have an independent website, BeaverCountian.com, which has won a number of awards for its journalism and which took note of the NBC News story. But it only posts once every couple of days or so, which isn’t enough for a county with nearly 164,000 people. Something more comprehensive is needed.
What’s at stake is our civic life and our ability to function in a democracy. This is why the fight to save local news is so important.
Working for Facebook can be pretty lucrative. According to PayScale, the average salary of a Facebook employee is $123,000, with senior software engineers earning more than $200,000. Even better, the job is pandemic-proof. Traffic soared during the early months of COVID (though advertising was down), and the service attracted nearly 2.8 billion active monthly users worldwide during the fourth quarter of 2020.
So employees are understandably reluctant to demand change from their maximum leader, the now-36-year-old Mark Zuckerberg, the man-child who has led them to their promised land.
For instance, last fall Facebook tweaked its algorithm so that users were more likely to see reliable news rather than hyperpartisan propaganda in advance of the election — a very small step in the right direction. Afterwards, some employees thought Facebook ought to do the civic-minded thing and make the change permanent. Management’s answer: Well, no, the change cost us money, so it’s time to resume business as usual. And thus it was.
Joaquin Quiñonero Candela is what you might call an extreme example of this go-along mentality. Quiñonero is the principal subject of a remarkable 6,700-word story in the current issue of Technology Review, published by MIT. As depicted by reporter Karen Hao, Quiñonero is extreme not in the sense that he’s a true believer or a bad actor or anything like that. Quite the contrary; he seems like a pretty nice guy, and the story is festooned with pictures of him outside his home in the San Francisco area, where he lives with his wife and three children, engaged in homey activities like feeding his chickens and, well, checking his phone. (It’s Zuck!)
What’s extreme, rather, is the amount of damage Quiñonero can do. He is the director of artificial intelligence for Facebook, a leading AI scientist who is universally respected for his brilliance, and the keeper of Facebook’s algorithm. He is also the head of an internal initiative called Responsible AI.
Now, you might think that the job of Responsible AI would be to find ways to make Facebook’s algorithm less harmful without chipping away too much at Zuckerberg’s net worth, estimated recently at $97 billion. But no. The way Hao tells it, Quiñonero’s shop was diverted almost from the beginning from its mission of tamping down extremist and false information so that it could take on a more politically important task: making sure that right-wing content kept popping up in users’ news feeds in order to placate Donald Trump, who falsely claimed that Facebook was biased against conservatives.
How pernicious was this? According to Hao, Facebook developed a tool called “Fairness Flow,” among whose guiding principles was that liberal and conservative content should not necessarily be treated equally if liberal content was more factual and conservative content promoted falsehoods — which is in fact the case much of the time. But Facebook executives were having none of it, deciding for purely political reasons that the algorithm should result in equal outcomes for liberal and conservative content regardless of truthfulness. Hao writes:
“They took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. ‘There’s no point, then,’ the researcher says. A model modified in that way ‘would have literally no impact on the actual problem’ of misinformation.”
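To see why such a rule guts the detector, consider this minimal, hypothetical sketch of an “equal impact” deployment gate of the sort the researcher describes. It is not Facebook’s actual code; the function names and the 5% threshold are invented for illustration.

```python
# Hypothetical illustration of an "equal impact" deployment gate. Not
# Facebook's code. A misinformation detector is evaluated on labeled posts;
# if it flags content from one political group at a higher rate than the
# other, deployment is blocked, even when the difference simply reflects
# where the misinformation actually is.

def flag_rate(flags: list[bool]) -> float:
    """Fraction of a group's posts the detector would act on."""
    return sum(flags) / len(flags) if flags else 0.0


def deployment_allowed(liberal_flags: list[bool],
                       conservative_flags: list[bool],
                       max_gap: float = 0.05) -> bool:
    """Block any model whose impact differs across groups by more than max_gap."""
    gap = abs(flag_rate(liberal_flags) - flag_rate(conservative_flags))
    return gap <= max_gap


if __name__ == "__main__":
    # Invented example: falsehoods are concentrated on one side, so an
    # accurate detector flags that side more often and fails the gate.
    liberal_flags = [True, False, False, False]        # 25% flagged
    conservative_flags = [True, True, True, False]     # 75% flagged
    print(deployment_allowed(liberal_flags, conservative_flags))  # False
```

Any detector accurate enough to catch misinformation that is concentrated on one side of the spectrum will fail a gate like this; the only models that pass are ones weakened until, as the researcher put it, they have no impact on the actual problem.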
Hao ranges across the hellscape of Facebook’s wreckage, from the Cambridge Analytica scandal to amplifying a genocidal campaign against Muslims in Myanmar to boosting content that could worsen depression and thus lead to suicide. What she shows, over and over again, is not that Facebook is oblivious to these problems; in fact, it recently banned a number of QAnon, anti-vaccine and Holocaust-denial groups. It is that, in every case, the company is slow to act, placing growth, engagement and, thus, revenue ahead of social responsibility.
It is fair to ask what Facebook’s role is in our current civic crisis, with a sizable minority of the public in thrall to Trump, disdaining vaccines and obsessing over trivia like Dr. Seuss and so-called cancel culture. Isn’t Fox News more to blame than Facebook? Aren’t the falsehoods spouted every night by Tucker Carlson, Sean Hannity and Laura Ingraham ultimately more dangerous than a social network that merely reflects what we’re already interested in?
The obvious answer, I think, is that there’s a synergistic effect between the two. The propaganda comes from Fox and its ilk and moves to Facebook, where it gets distributed and amplified. That, in turn, creates more demand for outrageous content from Fox and, occasionally, fuels the growth of even more extreme outlets like Newsmax and OAN. Dangerous as the Fox effect may be, Facebook makes it worse.
Hao’s final interview with Quiñonero came after the deadly insurrection of Jan. 6. I’m not going to spoil it for you, because it’s a really fine piece of writing, and quoting a few bits wouldn’t do it justice. But Quiñonero comes across as someone who knows, deep in his heart, that he could have played a role in preventing what happened but chose not to act.
It’s devastating — and something for him to think about as he ponders life in his nice home, with his family and his chickens, which are now coming home to roost.
Every so often, media observers berate the newspaper business for letting upstarts encroach on their turf rather than innovating themselves.
Weirdly enough, I’ve heard a number of people over the years assert that newspapers should have unveiled a free classified-ad service in order to forestall the rise of Craigslist — as if giving away classified ads was going to help pay for journalism. As of 2019, Craigslist employed a reported 50 full-time people worldwide. The Boston Globe and its related media properties, Stat News and Boston.com, employ about 300 full-time journalists. As they say, do the math.
Sometimes you hear the same thing about Facebook, which is different enough from journalism that you might as well say that newspapers should have moved into the food-services industry. Don Graham’s legendary decision to let Mark Zuckerberg walk away from an agreed-upon investment in Facebook changed the course of newspaper history — the Graham family could have kept The Washington Post rather than having to sell to Jeff Bezos. As a bonus, someone with a conscience would have sat on Facebook’s board, although it’s hard to know whether that would have mattered. But journalism and social media are fundamentally different businesses, so it’s not as though there was any sort of natural fit.
More recently, I’ve heard the same thing about Nextdoor, a community-oriented social network that has emerged as the news source of record for reporting lost cats and suspicious-looking people in your neighborhood. I like our Nextdoor and visit it regularly. But when it comes to discussion of local news, I find it less useful than a few of our Facebook groups. Still, you hear critics complain that newspapers should have been there first.
Well, maybe they should have. But how good a business is it, really? Like Craigslist, social media thrives by having as few employees as possible. Journalism is labor-intensive. Over the years I’ve watched the original vision for Wicked Local — unveiled, if I’m remembering correctly, by the Old Colony Memorial in Plymouth — shrink from a genuinely interesting collection of local blogs and other community content into a collection of crappy websites for GateHouse Media’s and now Gannett’s newspapers.
The original Boston.com was a vibrant experiment as well, with community blogs and all sorts of interesting content that you wouldn’t find in the Globe. But after the Globe moved to its own paywalled website, Boston.com’s appeal was pretty much shot, although it continues to limp along. For someone who wants a free regional news source, it’s actually not that bad. But the message, as with Wicked Local, is that maybe community content just doesn’t produce enough revenue to support the journalists we need to produce actual news coverage.
Recently Will Oremus of a Medium-backed website called OneZero wrote a lengthy piece about the rise of Nextdoor, which has done especially well in the pandemic. Oremus’ take was admirably balanced — though Nextdoor can be a valuable resource, especially in communities lacking real news coverage, he wrote, it is also opaque in its operations and tilted toward the interests of its presumably affluent users. According to Oremus, Nextdoor sites are available in about 268,000 neighborhoods across the world, and its owners have considered taking the company public.
There’s no question that Nextdoor is taking on the role once played by local newspapers. But is that because people are moving to Nextdoor or because local newspapers are withering away? As Oremus writes, quoting Emily Bell:
In some ways, Nextdoor is filling a gap left by a dearth of local news outlets. “In discussions of how people are finding out about local news, Nextdoor and Facebook Groups are the two online platforms that crop up most in our research,” said Columbia’s Emily Bell. Bell is helping to lead a project examining the crisis in local news and the landscape that’s emerging in its wake.
“When we were scoping out, ‘What does a news desert look like?’ it was clear that there’s often a whole group of hyperlocal platforms that we don’t traditionally consider to be news,” Bell said. They included Nextdoor, Facebook Groups, local Reddit subs, and crime-focused apps such as Citizen and Amazon Ring’s Neighbors. In the absence of a traditional news outlet, “people do share news, they do comment on news,” she said. “But they’re doing it on a platform like Nextdoor that really is not designed for news — maybe in the same way that Facebook is not designed for news.”
Look, I’m glad that Nextdoor is around. I’m glad that Patch is around, and in fact our local Patch occasionally publishes some original reporting. But there is no substitute for actual journalism — the hard work of sitting through local meetings, keeping an eye on the police and telling the story of the community. As inadequate as our local Gannett weekly is, there’s more local news in it than in any other source we have.
If local newspapers had developed Nextdoor and offered it as part of their journalism, would it have made a difference to the bottom line? It seems unlikely — although it no doubt would have brought in somewhat more revenues than giving away free classifieds.
Nextdoor, like Facebook, makes money by offering low-cost ads and employing as few people as possible. It may add up to a lot of cash in the aggregate. At the local level, though, I suspect it adds up to very little — and, if pursued by newspapers, would distract from the hard work of coming up with genuinely sustainable business models.
How can we limit the damage that social media — and especially Facebook — are doing to democracy? We all know what the problem is. The platforms make money by keeping you logged on and engaged. And they keep you engaged by feeding you content that their algorithms have determined makes you angry and upset. How do we break that chain?
Josh Bernoff, writing in The Boston Globe, offers an idea similar to one I suggested a few months ago: leverage Section 230 of the Telecommunications Act of 1996, which holds digital publishers harmless for any content posted by third-party users. Under Section 230, publishers can’t be sued if a commenter libels someone, which amounts to a huge benefit not available in other contexts. For instance, a newspaper publisher is liable for every piece of content that it runs, from news articles to ads and letters to the editor — but not for comments posted on the newspaper’s website.
Bernoff suggests what strikes me as a rather convoluted system that would require Facebook (that is, if Mark Zuckerberg wants to continue benefiting from Section 230) to run ads calling attention to ideologically diverse content. Using the same algorithms that got us into trouble in the first place, Facebook would serve up conservative content to liberal users and liberal content to conservative users.
There are, I think, some problems with Bernoff’s proposal, starting with this: He writes that Facebook and the other platforms “would be required to show free ads for mainstream liberal news sources to conservatives, and ads for mainstream conservative news sites to liberals.”
But that elides the reality of what has happened to political discourse over the past several decades, a shift accelerated by the Trump era. Liberals and Democrats haven’t changed all that much. Conservatives and Republicans, on the other hand, have become deeply radical, supporting the overturning of a landslide presidential election and espousing dangerous conspiracy theories about COVID-19. Given that, what is a “mainstream conservative news site”?
Bernoff goes so far as to suggest that MSNBC and Fox News are liberal and conservative equivalents. In their prime-time programming, though, the liberal MSNBC — despite its annoyingly doctrinaire, hectoring tone — remains tethered to reality, whereas Fox’s right-wing prime-time hosts are moving ever closer to QAnon territory. The latest is Tucker Carlson’s anti-vax outburst. Who knew that he would think killing his viewers was a good business strategy?
Moving away from the fish-in-a-barrel examples of MSNBC and Fox, what about The New York Times and The Wall Street Journal? Well, the Times’ editorial pages are liberal and the Journal’s are conservative. But if we’re talking about news coverage, they’re really not all that different. So that doesn’t work, either.
I’m not sure that my alternative, which I wrote about for GBH News back in June, is workable, but it does have the advantage of being simple: eliminate Section 230 protections for any platform that uses algorithms to boost engagement. Facebook would have to comply; if it didn’t, it would be sued into oblivion in a matter of weeks or months. As I wrote at the time:
But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.
Unlike Bernoff’s proposal, mine wouldn’t attempt to regulate speech by identifying the news sites that are worthy of putting in front of users so that they’ll be exposed to views they disagree with. I would let it rip as long as artificial intelligence isn’t being used to boost the most harmful content.
Needless to say, Zuckerberg and his fellow Big Tech executives can be expected to fight like crazed weasels in order to keep using algorithms, which are incredibly valuable to their bottom line. Just this week The New York Times reported that Facebook temporarily tweaked its algorithms to emphasize quality news in the runup to the election and its aftermath — but it has now quietly reverted to boosting divisive slime, because that’s what keeps the ad money rolling in.
Donald Trump has been crusading against 230 during the final days of his presidency, even though he doesn’t seem to understand that he would be permanently banned from Twitter and every other platform — even Parler — if they had to worry about being held legally responsible for what he posts.
Still, that’s no reason not to do something about Section 230, which was approved in the earliest days of the commercial web and has warped digital discourse in ways we couldn’t have imagined back then. Hate speech and disinformation driven by algorithms have become the bane of our time. Why not modify 230 in order to do something about it?