Subsidizing local news: The hopes and fears of a Harvard Law professor

Previously published at GBH News.

The challenge in providing government assistance to ease the local news crisis is to find ways of helping those who really need it while keeping the bad actors out. Which is why Martha Minow said this week that she’s “hopeful” but “fearful” about a federal bill that would create tax credits to subsidize subscribers, advertisers and news organizations.

“What I’m troubled about is: What’s local news, who defines it and how do we prevent the manipulation of this by multinational corporations?” she said. “That’s a problem, and I don’t know anyone who’s come up with an answer for that.”

Minow, a Harvard Law School professor, is the author of the recently published “Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech.” The book lays out a series of ideas for reviving journalism, from requiring social media platforms to pay for content to providing subsidies for nonprofit news. She spoke Monday at a local book group that met virtually.

The legislation Minow was referencing, the Local Journalism Sustainability Act, has attracted an unusual amount of bipartisan support and seems to stand a decent chance of becoming law. Those who wrote the proposal included limits on the size of news organizations that would be eligible, but the large corporate chains that own many of them would not be blocked from applying. That’s problematic given that chains and hedge funds are squeezing the life out of local news.

Minow, though, was referring to a different phenomenon — “sham” local news organizations that “shill for who knows what.” Although Minow did not use the term, such sites are purveyors of what is known as “pink slime” journalism: sites that look like community news outlets but are in reality vehicles for political propaganda. Those who operate such projects have taken advantage of the opening created by the precipitous decline of legitimate local news organizations, launching hundreds of such sites in recent years — most of them on the political right, but some on the left as well. One suggestion Minow offered was to limit government assistance to news organizations whose journalists live in the communities they cover.

Much of “Saving the News” is devoted to the proposition that government has always been involved in subsidizing journalism, from low postal rates to the development of the telegraph, from regulating radio and television to investing in the internet. Given that activist history, she writes, it would be derelict for the government not to step in. She quotes Supreme Court Justice Hugo Black, who in 1945 wrote that “it would be strange indeed … if the grave concern for freedom of the press which prompted adoption of the First Amendment should be read as a command that the government was without power to protect that freedom.”

Her proposals fall under three broad categories:

• Regulating Facebook and other social media platforms “subject to duties and expectations commensurate with their functions and their powers.” That would include not just requiring them to pay news organizations for the content they use but also regulating them as public utilities and subjecting them to antitrust enforcement;

• Fighting misinformation and disinformation through “public and private protections against deception, fraud, and manipulation and bolstering the capacities of individuals and communities to monitor and correct abuses and demand better media and internet practices”;

• Using the power of government to “support, amplify, and sustain a variety of public interest news sources and resources at the local, regional, and national levels.”

“With the entire project of democracy in danger, federal, state, and local governments can and indeed should be obliged to act — while remaining as neutral as possible toward content and viewpoint in private speech,” Minow writes. “If judicial readings of the First Amendment prevent such actions, the courts would be turning the Constitution into a suicide pact.”

In a time of intense polarization, Minow said this week that she hopes reviving local news can help bring communities together. Noting that studies have shown corruption rises and voting rates drop in the absence of reliable local journalism, she said, “There’s less polarization in local communities for obvious reasons. People have to get along, they have to get the snow plowed.”

Minow comes by her interest in reliable news and information naturally: Her father, Newton Minow, is a former chair of the FCC best known for calling television “a vast wasteland.” His daughter’s book is a useful compendium of why we need to take steps to save local news — and what some of those steps might look like.

Facebook is in trouble again. Is this the time that it will finally matter?

Drawing (cc) 2019 by Carnby

Could this be the beginning of the end for Facebook?

Even the Cambridge Analytica scandal didn’t bring the sort of white-hot scrutiny the social media giant has been subjected to over the past few weeks — starting with The Wall Street Journal’s “Facebook Files” series, which proved that company officials were well aware their product had gone septic, and culminating in Sunday’s “60 Minutes” interview with the Journal’s source, Frances Haugen.

As we’ve seen over and over, though, these crises have a tendency to blow over. You could say that “this time it feels different,” but I’m not sure it does. Mark Zuckerberg and company have shown an amazing ability to pick themselves up and keep going, mainly because their 2.8 billion engaged monthly users show an amazing ability not to care.

On Monday, New York Times technology columnist Kevin Roose wondered whether the game really is up and argued that Facebook is now on the decline. He wrote:

What I’m talking about is a kind of slow, steady decline that anyone who has ever seen a dying company up close can recognize. It’s a cloud of existential dread that hangs over an organization whose best days are behind it, influencing every managerial priority and product decision and leading to increasingly desperate attempts to find a way out. This kind of decline is not necessarily visible from the outside, but insiders see a hundred small, disquieting signs of it every day — user-hostile growth hacks, frenetic pivots, executive paranoia, the gradual attrition of talented colleagues.

The trouble is, as Roose concedes, it could take Facebook an awfully long time to die, and it may prove to be even more of a threat to our culture during its waning years than it was on the way up.

I suspect what keeps Facebook from imploding is that, for most people, it works as intended. Very few of us are spurning vaccines or killing innocent people in Myanmar because of what we’ve seen on Facebook. Instead, we’re sharing personal updates, family photos and, yes, some news stories we’ve run across. For the most part, I like Facebook, even as I recognize what a toxic effect it’s having.

The very real damage that Facebook is doing seems far removed from the experience most of its customers have. And that is what’s going to make it incredibly difficult to do anything about it.

The Wall Street Journal exposes Facebook’s lies about content moderation

Comet Ping Pong. Photo (cc) 2016 by DOCLVHUGO.

What could shock us about Facebook at this point? That Mark Zuckerberg and Sheryl Sandberg are getting ready to shut it down and donate all of their wealth because of their anguish over how toxic the platform has become?

No, we all know there is no bottom to Facebook. So Jeff Horwitz’s investigative report in The Wall Street Journal on Monday — revealing the extent to which celebrities and politicians are allowed to break rules the rest of us must follow — was more confirmatory than revelatory.

That’s not to say it lacks value. Seeing it all laid out in internal company documents is pretty stunning, even if the information isn’t especially surprising.

Become a member of Media Nation for just $5 a month!

The story involves a program called XCheck, under which VIP users are given special treatment. Incredibly, there are 5.8 million people who fall into this category, so I guess you could say they’re not all that special. Horwitz explains: “Some users are ‘whitelisted’ — rendered immune from enforcement actions — while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.”

And here’s the killer paragraph, quoting a 2019 internal review:

“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”

Among other things, the story reveals that Facebook has lied to the Oversight Board it set up to review its content-moderation decisions — news that should prompt the entire board to resign.

Perhaps the worst abuse documented by Horwitz involves the Brazilian soccer star Neymar:

After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself — and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.

Facebook’s standard procedure for handling the posting of “nonconsensual intimate imagery” is simple: Delete it. But Neymar was protected by XCheck.

For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as “revenge porn,” exposing the woman to what an employee referred to in the review as abuse from other users.

“This included the video being reposted more than 6,000 times, bullying and harassment about her character,” the review found.

As good a story as this is, there’s a weird instance of both-sides-ism near the top. Horwitz writes: “Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up ‘pedophile rings,’ and that then-President Donald Trump had called all refugees seeking asylum ‘animals,’ according to the documents.”

The pedophile claim, of course, is better known as Pizzagate, the ur-conspiracy theory promulgated by QAnon, which led to an infamous shooting incident at the Comet Ping Pong pizza restaurant in Washington in 2016. Trump, on the other hand, had this to say in 2018, according to USA Today: “We have people coming into the country or trying to come in, we’re stopping a lot of them, but we’re taking people out of the country. You wouldn’t believe how bad these people are. These aren’t people. These are animals.”

Apparently the claim about Trump was rated as false because he appeared to be referring specifically to gang members, not to “all” refugees. But that “all” is doing a lot of work.

The Journal series continues today with a look at how Instagram is damaging the self-esteem of teenage girls — and how Facebook, which owns the service, knows about it and isn’t doing anything about it.

Australian libel ruling shows what happens without Section 230 protections

Photo (cc) 2011 by Scott Calleja

I’m not familiar with the fine points of Australian libel law. But a decision this week by the High Court of Australia that publishers are liable for third-party comments posted on their Facebook pages demonstrates the power of Section 230 in the United States.

Section 230, part of the Communications Decency Act of 1996, does two things. First, it carves out an exception to the traditional principle that publishers are legally responsible for all the content they publish, including advertisements and letters to the editor. Under Section 230, by contrast, publishers are not liable for third-party online comments in any way.

Second, in what is sometimes called the “Good Samaritan” provision, publishers may remove some third-party content without taking on liability for the content they leave up. Without that protection, a lawyer might argue that a news organization that removed one libelous comment had taken on an editing role and could therefore be sued over other libelous comments it failed to remove. Under Section 230, that argument fails.

The Australian court’s ruling strikes me as a straightforward application of libel law in the absence of Section 230. Mike Cherney of The Wall Street Journal puts it this way:

The High Court of Australia determined that media companies, by creating a public Facebook page and posting content on that page, facilitated and encouraged comments from other users on those posts. That means the media companies should be considered publishers of the comments and are therefore responsible for any defamatory content that appears in them, according to a summary of the judgment from the court.

Over at the Nieman Journalism Lab, Joshua Benton has a markedly different take, arguing that the court is holding publishers responsible for content they did not publish. Benton writes:

Pandora’s box isn’t big enough to hold all the potential implications of that idea. That a news publisher should be held accountable for the journalism it publishes is obvious. That it should be held accountable for reader comments left on its own website (which it fully controls) is, at a minimum, debatable.

But that it should be held legally liable for the comments of every rando who visits its Facebook page — in other words, the speech of people it doesn’t control, on a platform it doesn’t control — is a big, big step.

I disagree. As I said, publishers are traditionally liable for every piece of content that appears under their name. Section 230 was a deviation from that tradition — a special carve-out providing publishers with immunity they wouldn’t otherwise have. If Benton is right, then we never needed 230. But of course we did. There’s a reason that the Electronic Frontier Foundation calls 230 “the most important law protecting internet speech.”

I also don’t see much difference between comments posted on a publisher’s website or on its Facebook page. A Facebook page is something you set up, add content to and manage. It’s not yours in the same way as your website, but it is part of your brand and under your control. If you should be liable for third-party content on your website, then it’s hardly a stretch to say that you should also be liable for third-party content on your Facebook page.

As the role of social media in our political discourse has become increasingly fraught, there have been a number of calls to abolish or reform 230. Abolition would mean the end of Facebook — and, for that matter, the comments sections on websites. (There are days when I’m tempted…) Personally, I’d look into abolishing 230 protections for sites that use algorithms to drive engagement and, thus, divisiveness. Such a change would make Facebook less profitable, but I think we could live with that.

Australia, meanwhile, has a dilemma on its hands. Maybe Parliament will pass a law equivalent to Section 230, but (I hope) with less sweeping protections. In any case, Australia should serve as an interesting test case to see what happens when toxic, often libelous third-party comments no longer get a free pass.

Facebook’s tortured relationship with journalism gets a few more tweaks

Facebook has long had a tortured relationship with journalism. When I was reporting for “The Return of the Moguls” in 2015 and ’16, news publishers were embracing Instant Articles, news stories that would load quickly but that would also live on Facebook’s platform rather than the publisher’s.

The Washington Post was so committed to the project that it published every single piece of content as an Instant Article. Shailesh Prakash, the Post’s chief technologist, would talk about the “Facebook barbell,” a strategy that aimed to convert users at the Facebook end of the barbell into paying subscribers at the Post end.

Instant Articles never really went away, but enthusiasm waned — especially when, in 2018, Facebook began downgrading news in its algorithm in favor of posts from family and friends.

Nor was that the first time Facebook pulled a bait-and-switch. Earlier it had something called the Social Reader, inviting news organizations to develop apps that would live within that space. Then, in 2012, it made changes that resulted in a collapse in traffic. Former Post digital editor David Beard told me that’s when he began turning his attention to newsletters, which the Post could control directly rather than having to depend on Mark Zuckerberg’s whims.

Now they’re doing it again. Mathew Ingram of the Columbia Journalism Review reports that Facebook is experimenting with its news feed to see what effect showing users less political news would have, and is also changing the way it measures how users interact with the site. The change, needless to say, comes after years of controversy over Facebook’s role in promoting misinformation and disinformation about politics, the Jan. 6 insurrection and the COVID-19 pandemic.

I’m sure Zuckerberg would be very happy if Facebook could serve solely as a platform for people to share uplifting personal news and cat photos. It would make his life a lot easier. But I’m also sure that he would be unwilling to see Facebook’s revenues drop even a little in order to make that happen. Remember the story about Facebook tweaking its algorithm to favor reliable news just before the 2020 election — and then changing it back afterward because it found that users spent less time on the platform? So he keeps trying this and that, hoping to alight upon the magic formula that will make him and his company less hated, and less likely to be hauled before congressional committees, without hurting his bottom line.

One of the latest efforts is his foray into local news. If Facebook can be a solution to the local news crisis, well, what’s not to like? Earlier this year Facebook and Substack announced initiatives to bring local news projects to their platforms for some very, very short money.

Earlier today, Sarah Scire of the Nieman Journalism Lab profiled some of the 25 local journalists who are setting up shop on Bulletin, Facebook’s new newsletter platform. They seem like an idealistic lot, with about half the newsletters being produced by journalists of color. But there are warning signs. Scire writes:

Facebook says it’s providing “licensing fees” to the local journalists as part of a “multi-year commitment” but spokesperson Erin Miller would not specify how much the company is paying the writers or for how long. The company has said it won’t take a cut of subscription revenue “for the length of these partnerships.” But, again, it’s not saying how long those partnerships will last.

How long will Facebook’s commitment to local news last before it goes the way of the Social Reader and Instant Articles? I don’t like playing the cynic, especially about a program that could help community journalists and the audiences they serve. But cynicism about Facebook is the only stance that seems realistic after years of bad behavior and broken promises.

Researchers dig up embarrassing data about Facebook — and lose access to their accounts

Photo (cc) 2011 by thierry ehrmann

Previously published at GBH News.

For researchers, Facebook is something of a black box. It’s hard to know what its 2.8 billion active users across the globe are seeing at any given time because the social media giant keeps most of its data to itself. If some users are seeing ads aimed at “Jew haters,” or Russian-generated memes comparing Hillary Clinton to Satan, well, so be it. Mark Zuckerberg has his strategy down cold: apologize when exposed, then move on to the next appalling scheme.

Some data scientists, though, have managed to pierce the darkness. Among them are Laura Edelson and Damon McCoy of New York University’s Center for Cybersecurity. With a tool called Ad Observer, which volunteers add to their browsers, they were able to track ads that Facebook users were being exposed to and draw some conclusions. For instance, they learned that users are more likely to engage with extreme falsehoods than with truthful material, and that more than 100,000 political ads are missing from an archive Facebook set up for researchers.

As you would expect, Facebook executives took these findings seriously. So what did they do? Did they change the algorithm to make it more likely that users would see reliable information in their news feed? Did they restore the missing ads and take steps to make sure such omissions wouldn’t happen again?

They did not. Instead, they cut off access to Edelson’s and McCoy’s accounts, making it harder for them to dig up such embarrassing facts in the future.

“There is still a lot of important research we want to do,” they wrote in a recent New York Times op-ed. “When Facebook shut down our accounts, we had just begun studies intended to determine whether the platform is contributing to vaccine hesitancy and sowing distrust in elections. We were also trying to figure out what role the platform may have played leading up to the Capitol assault on Jan. 6.”

In other words, they want to find out how responsible Zuckerberg, Sheryl Sandberg and the rest are for spreading a deadly illness and encouraging an armed insurrection. No wonder Facebook looked at what the researchers were doing and told them, gee, you know, we’d love to help, but you’re violating our privacy rules.

But that’s not even a real concern. Writing at the Columbia Journalism Review, Mathew Ingram points out that the privacy rules Facebook agreed to following the Cambridge Analytica scandal apply to Facebook itself, not to users who voluntarily agree to provide information to researchers.

Ingram quotes Princeton professor Jonathan Mayer, an adviser to Vice President Kamala Harris when she was a senator, who tweeted: “Facebook’s legal argument is bogus. The order ‘restricts how *Facebook* shares user information. It doesn’t preclude *users* from volunteering information about their experiences on the platform, including through a browser extension.’”

As Ingram describes it, and as Edelson and McCoy themselves say, Facebook’s actions didn’t stop their work altogether, but they have slowed it down and made it more difficult. Needless to say, the company should be doing everything it can to help with such research. Then again, Zuckerberg has never shown much regard for such mundane matters as public health and the future of democracy, especially when there’s money to be made.

By contrast, Facebook’s social media competitor Twitter has been much more open about making its data available to researchers. My Northeastern colleague John Wihbey, who co-authored an important study several years ago about how journalists use Twitter, says the difference explains why there have been more studies published about Twitter than about Facebook. “This is unfortunate,” he says, “as it is a smaller network and less representative of the general public.”

It’s like the old saw about looking for your car keys under a street light because that’s where the light is. Trouble is, with fewer than 400 million active users, Twitter is little more than a rounding error in Facebook’s universe.

Earlier this year, MIT’s Technology Review published a remarkable story documenting how Facebook shied away from cracking down on extremist content, focusing instead on placating Donald Trump and other figures on the political right before the 2020 election. Needless to say, the NYU researchers represent an especially potent threat to the Zuckerborg since they plan to focus on the role that Facebook played in amplifying the disinformation that led to the insurrection, whose aftermath continues to befoul our body politic.

When the history of this ugly era is written, the two media giants that will stand out for their malignity are Fox News, for knowingly poisoning tens of millions of people with toxic falsehoods, and Facebook, for allowing its platform to be used to amplify those falsehoods. Eventually, the truth will be told — no matter what steps Zuckerberg takes to slow it down. There should be hell to pay.

Facebook cuts access to data that was being used to embarrass the company

Facebook cuts researchers’ access to data, claiming privacy violations. It seems more likely, though, that the Zuckerborg was tired of being embarrassed by the stories that were developed from that data. Mathew Ingram of the Columbia Journalism Review explains.

Tiny News Collective to provide funding to six local news start-ups

Six local news projects will launch or expand after winning a competition held by the Tiny News Collective — a joint venture of LION (Local Independent Online News) Publishers and News Catalyst, based at Temple University. News Catalyst receives funding from the Knight Foundation and the Lenfest Institute. According to the announcement:

Thanks to a partnership with the Google News Initiative, each organization in the first cohort will receive a $15,000 stipend to help create the capacity for the founders to get started. In addition, the GNI has funded their first year of membership dues in the Collective and LION Publishers.

The projects range from an organization covering education news in part of Orange County, California, to an outlet with the wonderful name Black by God, which seeks “to share perspectives that cultivate, curate, and elevate Black voices from West Virginia.”

Forty organizations applied. Among the judges were Kate Maxwell, co-founder and publisher of The Mendocino Voice, a news co-op that is one of the local news projects I’m following for a book I’m co-authoring with Ellen Clegg.

The Tiny News Collective strikes me as a more interesting approach to dealing with the local news crisis than initiatives unveiled recently by Substack and Facebook. Those require you to set up shop on their platforms. By contrast, the Tiny News Collective is aimed at helping community journalism entrepreneurs to achieve sustainability on their own rather than become cogs in someone else’s machine.


President Biden says social media are killing people. But Fox News may be killing more.

Tucker Carlson. Photo (cc) 2018 by Gage Skidmore.

Previously published at GBH News.

With the delta variant spreading and COVID-19 rates climbing in all 50 states, President Joe Biden last Friday offered some tough words for Facebook and other social media companies that are enabling lies and misinformation.

“They’re killing people,” he said. “I mean, look, the only pandemic we have is among the unvaccinated. And they’re killing people.”

Biden was not wrong. But despite the enormous reach of Facebook, only one media outlet has devoted itself to injecting falsehoods about the pandemic into the nervous systems of its audience on a 24/7 basis. That, of course, would be Fox News, the right-wing cable station that tells its viewers, over and over, that vaccines are dangerous and that wearing a mask to prevent COVID-19 is ineffective — and, in any case, is not worth the price we’d pay in giving up our freedom.

Anne Applebaum, a staff writer for The Atlantic, put it well in a tweet reacting to Biden’s warning to Facebook and its ilk: “Surely Fox poses as big or even bigger problem?”

Consider a recent exchange between Fox’s biggest star, Tucker Carlson, and Alex Berenson, a former New York Times reporter and frequent Fox guest who’s become a notorious purveyor of pandemic falsehoods. “Masks are useless,” Berenson said, although he added that an N-95 medical-grade mask might be of “some minor benefit.” Mainly, he said, mask directives are “symbolic,” explaining, “If I don’t see people wearing masks I forget to be scared, and that’s why they want people wearing masks.”

Berenson wasn’t done. In response to gentle prodding by Carlson, he said, “The vaccines unfortunately appear to be declining in effectiveness very quickly.” He complained that he’d been suspended by Twitter for saying just that, and he urged Carlson’s viewers to subscribe to his Substack “before I get kicked off Twitter.”

Carlson responded by appearing to agree with Berenson. “The big media outlets are committed to lying and censorship,” he said sympathetically. “It’s terrifying.”

Carlson’s show is the top-rated program on cable news, drawing some 3 million viewers every weeknight. That may pale in comparison to the reach of social media. But unlike Facebook, where you’re going to encounter news about your family and friends, cat photos and the like along with the occasional falsehood, Fox is pushing this stuff at all hours of the day and night.

As CNN media reporter Oliver Darcy put it: “Rupert Murdoch, who was among the first in the world to receive a coronavirus vaccine, but who pays people who intentionally fear-monger to millions of people about them, must be smiling about all the attention Facebook is getting. Facebook is allowing for the spread of misinfo, but at least, unlike Fox News, has made some effort to reduce it.”

From “Fox & Friends” in the morning to Carlson, Sean Hannity and Laura Ingraham at night, Fox in recent years has morphed from a somewhat normal conservative news and opinion outlet into pure propaganda.

Last week, Media Matters for America released a study that showed the extent of Fox’s disinformation campaign about COVID and vaccines. Media Matters is liberal and partisan, but it also has a reputation for getting its facts right. The findings were sobering.

“From June 28 through July 11, 57% of segments about coronavirus vaccines on the network included claims that undermined vaccination efforts,” according to the report. The biggest offender was the “Fox & Friends” morning show, followed by Ingraham, though Carlson wasn’t far behind.

During the two-week period, the report said, “Fox personalities and guests made 216 claims undermining or downplaying vaccines or immunization drives. Out of those, 151 claims came from pundits on the network, which represented 70% of the total. Fox pundits described vaccine efforts as coercive or government overreach 103 times and described vaccines as unnecessary or dangerous 75 times.”

This is pure poison, and it goes a long way toward explaining why Trump supporters are lagging on vaccinations, and why we’re all wondering how soon we’ll be under a mask mandate once again.

The Washington Post and Time magazine weighed in earlier this month with in-depth profiles of Carlson, who has become perhaps the most influential force in right-wing politics since the semi-departure of Trump and the death of Rush Limbaugh. Both profiles focused on his racism — a worthy subject, for sure, but no doubt a sign that the stories were assigned before the recent resurgence of the pandemic.

Gillian Laub of Time, though, did manage to work some key COVID-19 material into her piece, eliciting a ludicrously offensive answer from Carlson when she asked if he’d been vaccinated. He called the anodyne question “super-vulgar” and parried with “What’s your favorite sexual position and when did you last engage in it?”

Laub also noted that, early in the pandemic, Carlson took COVID-19 more seriously than his fellow Fox hosts and even urged then-President Trump to change course. As a result, researchers found that Carlson’s viewers modified their behavior in practices such as hand-washing sooner than did Hannity’s fans.

There are some recent signs that Fox is hedging its bets. Steve Doocy of “Fox & Friends” has been praised for pushing back against his anti-vaxxer co-host Brian Kilmeade. (Both sides!) Even Hannity has been edging toward encouraging his viewers to get vaccinated. But it’s Carlson with the most viewers and influence, and there’s little evidence that his bosses are going to intervene.

Is there anything that can be done about the toxic influence of Fox News? It would be exceedingly difficult. Occasionally you hear some talk about reviving the FCC’s fairness doctrine, which required broadcasters to air opposing views and offer equal time to those who had been attacked. But even if that were politically possible, it would be unlikely to pass constitutional muster. The fairness doctrine applied only to over-the-air television and radio, not cable TV, since the airwaves were regarded as a finite, publicly owned resource.

In any case, such a heavy-handed approach might not be necessary. Congress could require cable providers to offer à la carte service so that no one would have to pay for Fox News or any other cable channel unless they wanted to. No more bundling. Personally, I’d probably keep Fox so I could check in on what they were saying from time to time. But I’d happily give up the 57 flavors of ESPN I’m forced to pay for and rarely watch.

For now, though, we’re stuck with Fox and the baleful influence it exercises over our entire culture. People are literally dying because of the false beliefs they harbor about COVID-19, and Fox is one of the principal vectors for spreading those beliefs.

Donald Trump himself has urged people to get vaccinated. But that’s not the message being delivered to the Trump supporters who tune in to Fox News every day. As a result, some 47% of Republicans say they are unlikely to get the shots, according to a Washington Post-ABC News poll, compared to just 6% of Democrats.

Over the course of the next few weeks, more people will get sick and more people will die. We may be told to wear masks in public once again. New restrictions may be put in place. We were so close to beating COVID-19, and now we’re moving backwards. For that you can thank Tucker Carlson, Laura Ingraham and the rest of their ilk at Fox.

Most of all you can thank Rupert Murdoch, for whom misery and disease are just another profitable day at the office.

A small example of how racially biased algorithms distort social media

You may have heard that the algorithms used by Facebook and other social media platforms are racially biased. I ran into a small but interesting example of that earlier today.

My previous post is about a webinar on news co-ops that I attended last week. I used a photo of Kevon Paynter, co-founder of Bloc by Block News, as the lead art and a photo of Jasper Wang, co-founder of The Defector, well down in the piece.

But when I posted links on Facebook, Twitter and LinkedIn, all three of them automatically grabbed the photo of Wang as the image that would go with the link. For example, here’s how it appeared on Twitter.

I don’t know what happened. Paynter was more central to what I was writing, which is why I led with his photo. Paynter is Black; Wang is of Asian descent. There’s more contrast in the image of Wang, which may be why the algorithms identified it as a superior picture. But in so doing they ignored my choice of Paynter as the lead.

File this under “Things that make you go hmmmm.”