A few more thoughts about Threads

Although Mastodon is my preferred Twitter alternative, there’s every indication that Threads is going to emerge as the closest thing we get to a true Twitter replacement. It’s missing a lot — browser access, a reverse-chronological feed of your followers, and lists, to name just a few. I can really do without the celebrities and brands that Threads is pushing. But it’s already got mass appeal, a precious commodity that it’s not likely to relinquish.

There are reports that Mark Zuckerberg and company rushed this out the door before it was ready in order to take advantage of Elon Musk’s meltdown last weekend. Musk rewarded Zuckerberg by sending him a cease-and-desist letter — precious publicity for an app that is taking off. As I said yesterday, you only get one chance to make a good first impression, but I suspect users will give Zuckerberg some time to get it right.

In addition to Twitter, I suspect the big loser in this may be Bluesky, started by Twitter co-founder Jack Dorsey. I finally scored an invitation earlier this week and have been playing around with it. I like it. But Dorsey has got to regret the leisurely pace he’s taken.

For now, I’m posting mainly to Mastodon because I want to, Twitter because I have to, and Bluesky and Threads because I’m checking them out. I’ve given up on Post. (If you’re reading this on the Media Nation website, my social media feeds are in the right-hand rail.) But it wouldn’t surprise me if this quickly devolves into a war between Twitter and Threads, with everyone else reduced to spectator status.

The unimpressive, trying-too-hard debut of Threads

Photo (cc) 2011 by J E Theriot

They say you only get one chance to make a good first impression. If that’s true, then Mark Zuckerberg missed that chance with the debut of Threads. There’s no browser access, so you’re stuck using your phone. You can’t switch to a reverse-chronological non-algorithmic feed of accounts you follow. Even Elon Musk still lets you do that at Twitter. No lists.

The whole thing, teeming with brands and celebrities you’re not interested in, feels very commercial in a forced-joviality, trying-too-hard way. These things can be fixed unless Zuck thinks they’re features rather than bugs. For now, though … not great.

Musk’s latest moves call into question the future of short-form social media

Elon Musk isn’t laughing with us. He’s laughing at us. Photo (cc) 2022 by Steve Jurvetson.

Update: Ivan Mehta of TechCrunch reports that Twitter may have already reversed itself on requiring log-ins to view tweets. I’ll test it later and think about whether I want to go to the trouble of restoring our Twitter timeline to What Works.

Today I want to return to a topic that I write about from time to time: the ongoing travails of Twitter under Elon Musk and the future of what I’ll call short-form interactive social media, which some people still refer to as “microblogging.” It’s something that’s of no interest to the vast majority of people (and if I’m describing you, then you have my congratulations and admiration) but of tremendous interest to a few of us.

You may have heard that a number of changes hit Twitter over the weekend, some deliberate, some perhaps accidental. The company imposed a “rate limit” on the number of posts you could read: 600 per day for non-subscribers and 6,000 a day for those who pay $8 a month. Those limits were later raised. Now, very few people are paying $8 for those blue check marks and extra privileges, and you can reach 600 (or 800, or 1,000, or whatever it is at the moment) pretty quickly if you’re zipping through your timeline. It was and is a bizarre limitation, since it means that users will spend less time on the site and will see fewer ads from Twitter’s declining inventory.

Twitter also got rid of its classic TweetDeck application, which let you set up columns for lists, notifications and the like, and switched everyone over to a new, inferior version — and then announced that TweetDeck will soon be restricted to those $8-a-month customers.

Finally, and of the greatest significance to me and my work, you can no longer view a tweet unless you’re actually logged in to Twitter. We’ve all become accustomed to news outlets embedding tweets in stories. I do it myself sometimes. Well, now that has stopped working. Maybe it’s not that big a deal. After all, you can take a screenshot and/or quote from it, just as you can from any source. But it’s an extra hassle for both publishers and readers.

The problem

Moreover, this had a significant negative effect on What Works, the website about the future of local news that Ellen Clegg and I host. Just recently, I decided to add a news feed of updates and brief items to the right-hand rail, powered by Twitter. It was a convenient way of informing our readers regardless of whether they were Twitter users. And on Monday, it disappeared. What I’ve come up with to replace it is a half-solution: a box that links to our Mastodon account, which can still be read by users and nonusers alike. But it’s an extra step. In order to add an actual Mastodon news feed, we would either need to pay more or switch to a hosting service and put up with the attendant technical challenges.

What is Musk up to? I can’t imagine that he’s literally trying to destroy Twitter; but if he were, he’d be doing exactly what he’s doing. It’s strange. Twitter is now being inundated with competitors, the largest of which is Mastodon, a decentralized system that runs mainly on volunteer labor. Meanwhile, Twitter co-founder Jack Dorsey is slowly unveiling a very Twitter-like service called Bluesky (still in beta, and, for the moment, invitation-only), and, this Thursday, Facebook (I refuse to call it Meta) will debut Threads. If Mark Zuckerberg doesn’t screw it up, I think Threads, which is tied to Instagram, might prove to be a formidable challenger.

Still, what made Twitter compelling was that it was essentially the sole platform for short-form interactive social media. The breakdown of that audience into various niches makes it harder for any one service to benefit from the network effect. I’ve currently got conversations going on in three different places, and when I want to share links to my work, I now have to go to Twitter, Mastodon and Bluesky (which I just joined), not to mention Facebook and LinkedIn.

The solution

And speaking of the network effect: Twitter may be shrinking, but, with 330 million monthly active users, it’s still by far the largest of the three short-form platforms. Mastodon was up to 10 million registered users as of March (that number grows in spurts every time Musk indulges his inner sociopath), and Bluesky has just 100,000 users — although another 2 million or so are on the wait list. What that means for my work is that just a handful of the media thought leaders I need to follow and interact with are on Mastodon or Bluesky, and, from what I can tell, none (as in zero) of the people and organizations that track developments in local news have budged from Twitter.

It will likely turn out that the social media era was brief and its demise unlamented. In the meantime, what’s going on is weird and — for those of us who depend on this stuff — aggravating. In some ways, I would like to see one-stop short-form social media continue. My money is on Threads, although I suspect that Zuckerberg’s greed will prevent it from realizing its full potential.

Antitrust suit brought by states claims Google and Facebook had a secret deal

Photo (cc) by Fir0002/Flagstaffotos

There’s been a significant new development in the antitrust cases being brought against Google and Facebook.

On Friday, Richard Nieva reported in BuzzFeed News that a lawsuit filed in December 2020 by Texas and several other states claims that Google CEO Sundar Pichai and Facebook CEO Mark Zuckerberg “personally signed off on a secret advertising deal that allegedly gave Facebook special privileges on Google’s ad platform.” That information was recently unredacted.

Nieva writes:

The revelation comes as both Google and Facebook face a crackdown from state and federal officials over antitrust concerns for their business practices. Earlier this week, a judge rejected Facebook’s motion to dismiss a lawsuit by the Federal Trade Commission that accuses the social network of using anticompetitive tactics.

The action being led by Texas is separate from an antitrust suit brought against Google and Facebook by more than 200 newspapers around the country. The suit essentially claims that Google has monopolized the digital ad marketplace in violation of antitrust law and has cut Facebook in on the deal in order to stave off competition. Writing in Business Insider, Martin Coulter puts it this way:

Most of the allegations in the suit hinge on Google’s fear of “header bidding,” an alternative to its own ad auctioning practices described as an “existential threat” to the company.

As I’ve written previously, the antitrust actions are potentially more interesting than the usual complaint made by newspapers — that Google and Facebook have repurposed their journalism and should pay for it. That’s never struck me as an especially strong legal argument, although it’s starting to happen in Australia and Western Europe.

The antitrust claims, on the other hand, are pretty straightforward. You can’t control all aspects of a market, and you can’t give special treatment to a would-be competitor. Google and Facebook, of course, have denied any wrongdoing, and that needs to be taken seriously. But keep an eye on this. It could shake the relationship between the platforms and the publishers to the very core.

A $150 billion lawsuit over genocide may force Facebook to confront its dark side

Displaced Rohingya Muslims. Photo (cc) 2017 by Tasnim News Agency.

Previously published at GBH News.

How much of a financial hit would it take to force Mark Zuckerberg to sit up and pay attention?

We can be reasonably sure he didn’t lose any sleep when British authorities fined Facebook a paltry $70 million earlier this fall for withholding information about its acquisition of Giphy, an app for creating and hosting animated graphics. Maybe he stirred a bit in July 2019, when the Federal Trade Commission whacked the company with a $5 billion penalty for violating its users’ privacy — a punishment described by the FTC as “the largest ever imposed” in such a case. But then he probably rolled over and caught a few more z’s.

OK, how about $150 billion? Would that do it?

We may be about to find out. Because that’s the price tag lawyers for Rohingya refugees placed on a class-action lawsuit they filed in California last week against Facebook — excuse me, make that Meta Platforms. As reported by Kelvin Chan of The Associated Press, the suit claims that Facebook’s actions in Myanmar stirred up violence in a way that “amounted to a substantial cause, and eventual perpetuation of, the Rohingya genocide.”

Even by Zuckerberg’s standards, $150 billion is a lot of money. Facebook’s revenues in 2020 were just a shade under $86 billion. And though the price tags lawyers affix to lawsuits should always be taken with several large shakers of salt, the case over genocide in Myanmar could be just the first step in holding Facebook to account for the way its algorithms amplify hate speech and disinformation.

The lawsuit is also one of the first tangible consequences of internal documents provided earlier this fall by Frances Haugen, a former Facebook employee turned whistleblower who went public with information showing that company executives knew its algorithms were wreaking worldwide havoc and did little or nothing about it. In addition to providing some 10,000 documents to the U.S. Securities and Exchange Commission, Haugen told her story anonymously to The Wall Street Journal, and later went public by appearing on “60 Minutes” and testifying before Congress.

The lawsuit is a multi-country effort, as Mathew Ingram reports for the Columbia Journalism Review, and the refugees’ lawyers are attempting to apply Myanmar’s laws in order to get around the United States’ First Amendment, which — with few exceptions — protects even the most loathsome speech.

But given that U.S. law may prevail, the lawyers have also taken the step of claiming that Facebook is a “defective” product. According to Tim De Chant, writing at Ars Technica, that claim appears to be targeted at Section 230, which would normally protect Facebook from legal liability for any content posted by third parties.

Facebook’s algorithms are programmed to show you more and more of the content that you engage with, which leads to the amplification of the sort of violent posts that helped drive genocide against the Rohingyas. In other words, the claim targets the algorithm-driven spread of that content rather than the content itself, a legal argument that would presumably find more favor in the U.S. court system.

“While the Rohingya have long been the victims of discrimination and persecution, the scope and violent nature of that persecution changed dramatically in the last decade, turning from human rights abuses and sporadic violence into terrorism and mass genocide,” the lawsuit says. “A key inflection point for that change was the introduction of Facebook into Burma in 2011, which materially contributed to the development and widespread dissemination of anti-Rohingya hate speech, misinformation, and incitement of violence—which together amounted to a substantial cause, and perpetuation of, the eventual Rohingya genocide.”

Facebook has previously admitted that its response to the violence in Myanmar was inadequate. “We weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence,” the company said in 2018.

The lawsuit at least theoretically represents an existential threat to Facebook, and no doubt the company will fight back hard. Still, its initial response emphasized its regrets and steps it has taken over the past several years to lessen the damage. A Meta spokesperson recently issued this statement to multiple news organizations: “We’re appalled by the crimes committed against the Rohingya people in Myanmar. We’ve built a dedicated team of Burmese speakers, banned the Tatmadaw [the Burmese armed forces], disrupted networks manipulating public debate and taken action on harmful misinformation to help keep people safe. We’ve also invested in Burmese-language technology to reduce the prevalence of violating content. This work is guided by feedback from experts, civil society organizations and independent reports, including the UN Fact-Finding Mission on Myanmar’s findings and the independent Human Rights Impact Assessment we commissioned and released in 2018.”

No doubt Zuckerberg and company didn’t knowingly set out to contribute to a human-rights disaster that led to a rampage of rape and murder, with nearly 7,000 Rohingyas killed and 750,000 forced out of the country. Yet this tragedy was the inevitable consequence of the way Facebook works, and of its top executives’ obsession with growth over safety.

As University of Virginia media studies professor and author Siva Vaidhyanathan has put it: “The problem with Facebook is Facebook.”

Maybe the prospect of being forced to pay for the damage they have done will, at long last, force Zuckerberg, Sheryl Sandberg and the rest to do something about it.

A tidal wave of documents exposes the depths of Facebook’s depravity

Photo (cc) 2008 by Craig ONeal

Previously published at GBH News.

How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.

Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”

And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.

No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.

If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.

In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:

“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.

“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”

LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.

In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.

But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.

That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more, and more extreme, versions of it.

And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.

Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?

Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.

In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.

Zuckerberg has shown no inclination to change. It’s long past time to force his hand.

Facebook is in trouble again. Is this the time that it will finally matter?

Drawing (cc) 2019 by Carnby

Could this be the beginning of the end for Facebook?

Even the Cambridge Analytica scandal didn’t bring the sort of white-hot scrutiny the social media giant has been subjected to over the past few weeks — starting with The Wall Street Journal’s “Facebook Files” series, which proved that company officials were well aware their product had gone septic, and culminating in Sunday’s “60 Minutes” interview with the Journal’s source, Frances Haugen.

As we’ve seen over and over, though, these crises have a tendency to blow over. You could say that “this time it feels different,” but I’m not sure it does. Mark Zuckerberg and company have shown an amazing ability to pick themselves up and keep going, mainly because their 2.8 billion engaged monthly users show an amazing ability not to care.

On Monday, New York Times technology columnist Kevin Roose wondered whether the game really is up and argued that Facebook is now on the decline. He wrote:

What I’m talking about is a kind of slow, steady decline that anyone who has ever seen a dying company up close can recognize. It’s a cloud of existential dread that hangs over an organization whose best days are behind it, influencing every managerial priority and product decision and leading to increasingly desperate attempts to find a way out. This kind of decline is not necessarily visible from the outside, but insiders see a hundred small, disquieting signs of it every day — user-hostile growth hacks, frenetic pivots, executive paranoia, the gradual attrition of talented colleagues.

The trouble is, as Roose concedes, it could take Facebook an awfully long time to die, and it may prove to be even more of a threat to our culture during its waning years than it was on the way up.

I suspect what keeps Facebook from imploding is that, for most people, it works as intended. Very few of us are spurning vaccines or killing innocent people in Myanmar because of what we’ve seen on Facebook. Instead, we’re sharing personal updates, family photos and, yes, some news stories we’ve run across. For the most part, I like Facebook, even as I recognize what a toxic effect it’s having.

The very real damage that Facebook is doing seems far removed from the experience most of its customers have. And that is what’s going to make it incredibly difficult to do anything about it.

Facebook’s tortured relationship with journalism gets a few more tweaks

Facebook has long had a tortured relationship with journalism. When I was reporting for “The Return of the Moguls” in 2015 and ’16, news publishers were embracing Instant Articles, news stories that would load quickly but that would also live on Facebook’s platform rather than the publisher’s.

The Washington Post was so committed to the project that it published every single piece of content as an Instant Article. Shailesh Prakash, the Post’s chief technologist, would talk about the “Facebook barbell,” a strategy that aimed to convert users at the Facebook end of the barbell into paying subscribers at the Post end.

Instant Articles never really went away, but enthusiasm waned — especially when, in 2018, Facebook began downgrading news in its algorithm in favor of posts from family and friends.

Nor was that the first time Facebook pulled a bait-and-switch. Earlier it had something called the Social Reader, inviting news organizations to develop apps that would live within that space. Then, in 2012, it made changes that resulted in a collapse in traffic. Former Post digital editor David Beard told me that’s when he began turning his attention to newsletters, which the Post could control directly rather than having to depend on Mark Zuckerberg’s whims.

Now they’re doing it again. Mathew Ingram of the Columbia Journalism Review reports that Facebook is experimenting with its news feed, testing both the effect of showing users less political news and changes to the way it measures how users interact with the site. The change, needless to say, comes after years of controversy over Facebook’s role in promoting misinformation and disinformation about politics, the Jan. 6 insurrection and the COVID-19 pandemic.

I’m sure Zuckerberg would be very happy if Facebook could serve solely as a platform for people to share uplifting personal news and cat photos. It would make his life a lot easier. But I’m also sure that he would be unwilling to see Facebook’s revenues drop even a little in order to make that happen. Remember that story about Facebook tweaking its algorithm to favor reliable news just before the 2020 election — and then changing it back afterwards because it found that users spent less time on the platform? So he keeps trying this and that, hoping to alight on the magic formula that will make him and his company less hated, and less likely to be hauled before congressional committees, without hurting his bottom line.

One of the latest efforts is his foray into local news. If Facebook can be a solution to the local news crisis, well, what’s not to like? Earlier this year Facebook and Substack announced initiatives to bring local news projects to their platforms for some very, very short money.

Earlier today, Sarah Scire of the Nieman Journalism Lab profiled some of the 25 local journalists who are setting up shop on Bulletin, Facebook’s new newsletter platform. They seem like an idealistic lot, with about half the newsletters being produced by journalists of color. But there are warning signs. Scire writes:

Facebook says it’s providing “licensing fees” to the local journalists as part of a “multi-year commitment” but spokesperson Erin Miller would not specify how much the company is paying the writers or for how long. The company has said it won’t take a cut of subscription revenue “for the length of these partnerships.” But, again, it’s not saying how long those partnerships will last.

How long will Facebook’s commitment to local news last before it goes the way of the Social Reader and Instant Articles? I don’t like playing the cynic, especially about a program that could help community journalists and the audiences they serve. But cynicism about Facebook is the only stance that seems realistic after years of bad behavior and broken promises.

Researchers dig up embarrassing data about Facebook — and lose access to their accounts

Photo (cc) 2011 by thierry ehrmann

Previously published at GBH News.

For researchers, Facebook is something of a black box. It’s hard to know what its 2.8 billion active users across the globe are seeing at any given time because the social media giant keeps most of its data to itself. If some users are seeing ads aimed at “Jew haters,” or Russian-generated memes comparing Hillary Clinton to Satan, well, so be it. Mark Zuckerberg has his strategy down cold: apologize when exposed, then move on to the next appalling scheme.

Some data scientists, though, have managed to pierce the darkness. Among them are Laura Edelson and Damon McCoy of New York University’s Center for Cybersecurity. With a tool called Ad Observer, which volunteers add to their browsers, they were able to track ads that Facebook users were being exposed to and draw some conclusions. For instance, they learned that users are more likely to engage with extreme falsehoods than with truthful material, and that more than 100,000 political ads are missing from an archive Facebook set up for researchers.

As you would expect, Facebook executives took these findings seriously. So what did they do? Did they change the algorithm to make it more likely that users would see reliable information in their news feed? Did they restore the missing ads and take steps to make sure such omissions wouldn’t happen again?

They did not. Instead, they cut off access to Edelson’s and McCoy’s accounts, making it harder for them to dig up such embarrassing facts in the future.

“There is still a lot of important research we want to do,” they wrote in a recent New York Times op-ed. “When Facebook shut down our accounts, we had just begun studies intended to determine whether the platform is contributing to vaccine hesitancy and sowing distrust in elections. We were also trying to figure out what role the platform may have played leading up to the Capitol assault on Jan. 6.”

In other words, they want to find out how responsible Zuckerberg, Sheryl Sandberg and the rest are for spreading a deadly illness and encouraging an armed insurrection. No wonder Facebook looked at what the researchers were doing and told them, gee, you know, we’d love to help, but you’re violating our privacy rules.

But that’s not even a real concern. Writing at the Columbia Journalism Review, Mathew Ingram points out that the privacy rules Facebook agreed to following the Cambridge Analytica scandal apply to Facebook itself, not to users who voluntarily agree to provide information to researchers.

Ingram quotes Princeton professor Jonathan Mayer, an adviser to Vice President Kamala Harris when she was a senator, who tweeted: “Facebook’s legal argument is bogus. The order restricts how *Facebook* shares user information. It doesn’t preclude *users* from volunteering information about their experiences on the platform, including through a browser extension.”

The way Ingram describes it, and as Edelson and McCoy themselves acknowledge, Facebook’s actions didn’t stop their work altogether, but they have slowed it down and made it more difficult. Needless to say, the company should be doing everything it can to help with such research. Then again, Zuckerberg has never shown much regard for such mundane matters as public health and the future of democracy, especially when there’s money to be made.

By contrast, Facebook’s social media competitor Twitter has actually been much more open about making its data available to researchers. My Northeastern colleague John Wihbey, who co-authored an important study several years ago about how journalists use Twitter, says the difference explains why there have been more studies published about Twitter than Facebook. “This is unfortunate,” he says, “as it is a smaller network and less representative of the general public.”

It’s like the old saw about looking for your car keys under a street light because that’s where the light is. Trouble is, with fewer than 400 million active users, Twitter is little more than a rounding error in Facebook’s universe.

Earlier this year, MIT Technology Review published a remarkable story documenting how Facebook shied away from cracking down on extremist content, focusing instead on placating Donald Trump and other figures on the political right before the 2020 election. Needless to say, the NYU researchers represent an especially potent threat to the Zuckerborg, since they plan to focus on the role that Facebook played in amplifying the disinformation that led to the insurrection, whose aftermath continues to befoul our body politic.

When the history of this ugly era is written, the two media giants that will stand out for their malignity are Fox News, for knowingly poisoning tens of millions of people with toxic falsehoods, and Facebook, for allowing its platform to be used to amplify those falsehoods. Eventually, the truth will be told — no matter what steps Zuckerberg takes to slow it down. There should be hell to pay.

In a Pennsylvania county, fear and rumor-mongering replace reliable local news

The information gap here in Medford is not much different from the situation in hundreds, if not thousands, of communities across the country. Despite having a population of nearly 60,000 and five reasonably healthy business districts, our Gannett weekly has not had a single full-time staff reporter since the fall of 2019.

So we do what people do everywhere — we rely on a few Facebook groups, Nextdoor and Patch. Of course, there is no substitute for a news source that does the unglamorous work of sitting through governmental meetings (which the weekly does on a piecemeal basis), following neighborhood issues, and keeping tabs on the local police. A lot of times we simply ask questions. Why was a helicopter hovering over the Mystic Lakes? When will everyone be allowed back in the school buildings?

Earlier this week, Brandy Zadrozny wrote a lengthy feature for NBC News about what’s happened in Beaver County, Pennsylvania, where Gannett and its predecessor company, GateHouse Media, have decimated The Times of Beaver County since acquiring it from local ownership in 2017.

In particular, residents have turned to a Facebook group called The News Alerts of Beaver County, an occasionally useful forum with 43,000 members that all too often devolves into a cesspool of false rumors about murders, human trafficking and child molesters. Zadrozny writes:

The News Alerts of Beaver County isn’t home base for a gun-wielding militia, and it isn’t a QAnon fever swamp. In fact, the group’s focus on timely and relevant information for a small real-world community is probably the kind that Chief Executive Mark Zuckerberg envisioned when he pivoted his company toward communities in 2017.

And yet, the kind of misinformation that’s traded in The News Alerts of Beaver County and thousands of other groups just like it poses a unique danger. It’s subtler and in some ways more insidious, because it’s more likely to be trusted. The misinformation — shared in good faith by neighbors, sandwiched between legitimate local happenings and overseen by a community member with no training but good intentions — is still capable of tearing a community apart.

Zadrozny also quotes Jennifer Grygiel, a communications professor at Syracuse University, who tells her: “In a system with inadequate legitimate local news, they may only be able to get information by posting gossip and having the police correct it. One could argue this is what society will look like if we keep going down this road with less journalism and more police and government social media.”

The area does have an independent website, BeaverCountian.com, which has won a number of awards for its journalism and which took note of the NBC News story. But it only posts once every couple of days or so, which isn’t enough for a county with nearly 164,000 people. Something more comprehensive is needed.

What’s at stake is our civic life and our ability to function in a democracy. This is why the fight to save local news is so important.