By Dan Kennedy • The press, politics, technology, culture and other passions

Tag: Facebook

A tidal wave of documents exposes the depths of Facebook’s depravity

Photo (cc) 2008 by Craig ONeal

Previously published at GBH News.

How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.

Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”

And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.

No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.

If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.

In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:

“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.

“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”

LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.
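To make the distinction concrete, here is a minimal, purely illustrative sketch of the difference between ranking a feed for predicted engagement and applying a safety-style demotion before anything goes viral. This is not Facebook’s actual code; the scoring weights, field names and “predicted harm” signal are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model-estimated likes, comments and reshares
    predicted_harm: float        # classifier score for misinfo/abuse, 0.0 to 1.0
    reshare_depth: int           # how many hops of resharing brought it here

def engagement_rank(posts):
    """Growth-style ranking: surface whatever is predicted to keep people engaged."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def safety_rank(posts, harm_penalty=5.0, max_reshare_depth=2):
    """Safety-style ranking: the same posts, nothing removed, but likely-harmful
    content is demoted and long reshare chains stop compounding virality."""
    def score(p):
        s = p.predicted_engagement - harm_penalty * p.predicted_harm
        if p.reshare_depth > max_reshare_depth:
            s *= 0.1  # sharply demote posts arriving via long reshare chains
        return s
    return sorted(posts, key=score, reverse=True)
```

The point is not the particular weights. It is that nothing in the second function deletes a post; it only changes what gets amplified, which is exactly the distinction LaFrance and the internal researchers are drawing.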

In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.

But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.

That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more, and more extreme, versions of it.

And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.

Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?

Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.

In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.

Zuckerberg has shown no inclination to change. It’s long past time to force his hand.

Why Section 230 should be curbed for algorithmically driven platforms

Facebook whistleblower Frances Haugen testifies on Capitol Hill Tuesday.

Facebook is in the midst of what we can only hope will prove to be an existential crisis. So I was struck this morning when Boston Globe technology columnist Hiawatha Bray suggested a step that I proposed more than a year ago — eliminating Section 230 protections from social media platforms that use algorithms. Bray writes:

Maybe we should eliminate Section 230 protections for algorithmically powered social networks. For Internet sites that let readers find their own way around, the law would remain the same. But a Facebook or Twitter or YouTube or TikTok could be sued by private citizens — not the government — for postings that defame somebody or which threaten violence.

Here’s what I wrote for GBH News in June 2020:

One possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers.

I hope it’s an idea whose time has come.

Subsidizing local news: The hopes and fears of a Harvard Law professor

Previously published at GBH News.

The challenge in providing government assistance to ease the local news crisis is to find ways of helping those who really need it while keeping the bad actors out. Which is why Martha Minow said this week that she’s “hopeful” but “fearful” about a federal bill that would create tax credits to subsidize subscribers, advertisers and news organizations.

“What I’m troubled about is: What’s local news, who defines it and how do we prevent the manipulation of this by multinational corporations?” she said. “That’s a problem, and I don’t know anyone who’s come up with an answer for that.”

Minow, a Harvard Law School professor, is the author of the recently published “Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech.” The book lays out a series of ideas for reviving journalism, from requiring social media platforms to pay for content to providing subsidies for nonprofit news. She spoke Monday at a local book group that met virtually.

The legislation Minow was referencing, the Local Journalism Sustainability Act, has attracted an unusual amount of bipartisan support and seems to stand a decent chance of becoming law. Those who wrote the proposal included limits on the size of news organizations that would be eligible, but the large corporate chains that own many of them would not be blocked from applying. That’s problematic given that chains and hedge funds are squeezing the life out of local news.

Minow, though, was referring to a different phenomenon — “sham” local news organizations that “shill for who knows what.” Although Minow did not use the term, such sites are purveyors of what is known as “pink slime” journalism: they look like community news outlets but are in reality vehicles for political propaganda. Their operators have taken advantage of the opening created by the precipitous decline of legitimate local news organizations in recent years, launching hundreds of such sites — most of them on the political right, but some on the left as well. One suggestion Minow offered was to limit government assistance to news organizations whose journalists live in the communities they cover.

Much of “Saving the News” is devoted to the proposition that government has always been involved in subsidizing journalism, from low postal rates to the development of the telegraph, from regulating radio and television to investing in the internet. Given that activist history, she writes, it would be derelict for the government not to step in. She quotes Supreme Court Justice Hugo Black, who in 1945 wrote that “it would be strange indeed … if the grave concern for freedom of the press which prompted adoption of the First Amendment should be read as a command that the government was without power to protect that freedom.”

Her proposals fall under three broad categories:

• Regulating Facebook and other social media platforms “subject to duties and expectations commensurate with their functions and their powers.” That would include not just requiring them to pay news organizations for the content they use but also regulating them as public utilities and subjecting them to antitrust enforcement;

• Fighting misinformation and disinformation through “public and private protections against deception, fraud, and manipulation and bolstering the capacities of individuals and communities to monitor and correct abuses and demand better media and internet practices”;

• Using the power of government to “support, amplify, and sustain a variety of public interest news sources and resources at the local, regional, and national levels.”

“With the entire project of democracy in danger, federal, state, and local governments can and indeed should be obliged to act — while remaining as neutral as possible toward content and viewpoint in private speech,” Minow writes. “If judicial readings of the First Amendment prevent such actions, the courts would be turning the Constitution into a suicide pact.”

In a time of intense polarization, Minow said this week that she hopes reviving local news can help bring communities together. Noting that studies have shown corruption rises and voting rates drop in the absence of reliable local journalism, she said, “There’s less polarization in local communities for obvious reasons. People have to get along, they have to get the snow plowed.”

Minow comes by her interest in reliable news and information naturally: Her father, Newton Minow, is a former chair of the FCC best known for calling television “a vast wasteland.” His daughter’s book is a useful compendium of why we need to take steps to save local news — and what some of those steps might look like.

Facebook is in trouble again. Is this the time that it will finally matter?

Drawing (cc) 2019 by Carnby

Could this be the beginning of the end for Facebook?

Even the Cambridge Analytica scandal didn’t bring the sort of white-hot scrutiny the social media giant has been subjected to over the past few weeks — starting with The Wall Street Journal’s “Facebook Files” series, which proved that company officials were well aware their product had gone septic, and culminating in Sunday’s “60 Minutes” interview with the Journal’s source, Frances Haugen.

As we’ve seen over and over, though, these crises have a tendency to blow over. You could say that “this time it feels different,” but I’m not sure it does. Mark Zuckerberg and company have shown an amazing ability to pick themselves up and keep going, mainly because their 2.8 billion monthly active users show an amazing ability not to care.

On Monday, New York Times technology columnist Kevin Roose wondered whether the game really is up and argued that Facebook is now on the decline. He wrote:

What I’m talking about is a kind of slow, steady decline that anyone who has ever seen a dying company up close can recognize. It’s a cloud of existential dread that hangs over an organization whose best days are behind it, influencing every managerial priority and product decision and leading to increasingly desperate attempts to find a way out. This kind of decline is not necessarily visible from the outside, but insiders see a hundred small, disquieting signs of it every day — user-hostile growth hacks, frenetic pivots, executive paranoia, the gradual attrition of talented colleagues.

The trouble is, as Roose concedes, it could take Facebook an awfully long time to die, and it may prove to be even more of a threat to our culture during its waning years than it was on the way up.

I suspect what keeps Facebook from imploding is that, for most people, it works as intended. Very few of us are spurning vaccines or killing innocent people in Myanmar because of what we’ve seen on Facebook. Instead, we’re sharing personal updates, family photos and, yes, some news stories we’ve run across. For the most part, I like Facebook, even as I recognize what a toxic effect it’s having.

The very real damage that Facebook is doing seems far removed from the experience most of its customers have. And that is what’s going to make it incredibly difficult to do anything about it.

The Wall Street Journal exposes Facebook’s lies about content moderation

Comet Ping Pong. Photo (cc) 2016 by DOCLVHUGO.

What could shock us about Facebook at this point? That Mark Zuckerberg and Sheryl Sandberg are getting ready to shut it down and donate all of their wealth because of their anguish over how toxic the platform has become?

No, we all know there is no bottom to Facebook. So Jeff Horwitz’s investigative report in The Wall Street Journal on Monday — revealing the extent to which celebrities and politicians are allowed to break rules the rest of us must follow — was more confirmatory than revelatory.

That’s not to say it lacks value. Seeing it all laid out in internal company documents is pretty stunning, even if the information isn’t especially surprising.


The story involves a program called XCheck, under which VIP users are given special treatment. Incredibly, there are 5.8 million people who fall into this category, so I guess you could say they’re not all that special. Horwitz explains: “Some users are ‘whitelisted’ — rendered immune from enforcement actions — while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.”

And here’s the killer paragraph, quoting a 2019 internal review:

“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”

Among other things, the story reveals that Facebook has lied to the Oversight Board it set up to review its content-moderation decisions — news that should prompt the entire board to resign.

Perhaps the worst abuse documented by Horwitz involves the Brazilian soccer star Neymar:

After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself — and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.

Facebook’s standard procedure for handling the posting of “nonconsensual intimate imagery” is simple: Delete it. But Neymar was protected by XCheck.

For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as “revenge porn,” exposing the woman to what an employee referred to in the review as abuse from other users.

“This included the video being reposted more than 6,000 times, bullying and harassment about her character,” the review found.

As good a story as this is, there’s a weird instance of both-sides-ism near the top. Horwitz writes: “Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up ‘pedophile rings,’ and that then-President Donald Trump had called all refugees seeking asylum ‘animals,’ according to the documents.”

The pedophile claim, of course, is better known as Pizzagate, the ur-conspiracy theory promulgated by QAnon, which led to an infamous shooting incident at the Comet Ping Pong pizza restaurant in Washington in 2016. Trump, on the other hand, had this to say in 2018, according to USA Today: “We have people coming into the country or trying to come in, we’re stopping a lot of them, but we’re taking people out of the country. You wouldn’t believe how bad these people are. These aren’t people. These are animals.”

Apparently the claim about Trump was rated as false because he appeared to be referring specifically to gang members, not to “all” refugees. But that “all” is doing a lot of work.

The Journal series continues today with a look at how Instagram is damaging the self-esteem of teenage girls — and how Facebook, which owns the service, knows about it and isn’t doing anything.

Australian libel ruling shows what happens without Section 230 protections

Photo (cc) 2011 by Scott Calleja

I’m not familiar with the fine points of Australian libel law. But a decision this week by the High Court of Australia that publishers are liable for third-party comments posted on their Facebook pages demonstrates the power of Section 230 in the United States.

Section 230, part of the Communications Decency Act of 1996, does two things. First, it carves out an exception to the traditional principle that publishers are legally responsible for everything they publish, including advertisements and letters to the editor. Under Section 230, by contrast, publishers are not liable in any way for comments posted online by third parties.

Second, in what is sometimes called the “Good Samaritan” provision, publishers may remove some third-party content without taking on liability for what they leave up. Without that protection, a lawyer might argue that a news organization that removed one libelous comment had taken on an editing role and could therefore be sued over other libelous comments it failed to remove. Under Section 230, that argument fails.

The Australian court’s ruling strikes me as a straightforward application of libel law in the absence of Section 230. Mike Cherney of The Wall Street Journal puts it this way:

The High Court of Australia determined that media companies, by creating a public Facebook page and posting content on that page, facilitated and encouraged comments from other users on those posts. That means the media companies should be considered publishers of the comments and are therefore responsible for any defamatory content that appears in them, according to a summary of the judgment from the court.

Over at the Nieman Journalism Lab, Joshua Benton has a markedly different take, arguing that the court is holding publishers responsible for content they did not publish. Benton writes:

Pandora’s box isn’t big enough to hold all the potential implications of that idea. That a news publisher should be held accountable for the journalism it publishes is obvious. That it should be held accountable for reader comments left on its own website (which it fully controls) is, at a minimum, debatable.

But that it should be held legally liable for the comments of every rando who visits its Facebook page — in other words, the speech of people it doesn’t control, on a platform it doesn’t control — is a big, big step.

I disagree. As I said, publishers are traditionally liable for every piece of content that appears under their name. Section 230 was a deviation from that tradition — a special carve-out providing publishers with immunity they wouldn’t otherwise have. If Benton is right, then we never needed 230. But of course we did. There’s a reason that the Electronic Frontier Foundation calls 230 “the most important law protecting internet speech.”

I also don’t see much difference between comments posted on a publisher’s website and comments posted on its Facebook page. A Facebook page is something you set up, add content to and manage. It’s not yours in the same way as your website, but it is part of your brand and under your control. If you should be liable for third-party content on your website, then it’s hardly a stretch to say that you should also be liable for third-party content on your Facebook page.

As the role of social media in our political discourse has become increasingly fraught, there have been a number of calls to abolish or reform 230. Abolition would mean the end of Facebook — and, for that matter, the comments sections on websites. (There are days when I’m tempted…) Personally, I’d look into abolishing 230 protections for sites that use algorithms to drive engagement and, thus, divisiveness. Such a change would make Facebook less profitable, but I think we could live with that.

Australia, meanwhile, has a dilemma on its hands. Maybe Parliament will pass a law equivalent to Section 230, but (I hope) with less sweeping protections. In any case, Australia should serve as an interesting test case to see what happens when toxic, often libelous third-party comments no longer get a free pass.

Facebook’s tortured relationship with journalism gets a few more tweaks

Facebook has long had a tortured relationship with journalism. When I was reporting for “The Return of the Moguls” in 2015 and ’16, news publishers were embracing Instant Articles, news stories that would load quickly but that would also live on Facebook’s platform rather than the publisher’s.

The Washington Post was so committed to the project that it published every single piece of content as an Instant Article. Shailesh Prakash, the Post’s chief technologist, would talk about the “Facebook barbell,” a strategy that aimed to convert users at the Facebook end of the barbell into paying subscribers at the Post end.

Instant Articles never really went away, but enthusiasm waned — especially when, in 2018, Facebook began downgrading news in its algorithm in favor of posts from family and friends.

Nor was that the first time Facebook pulled a bait-and-switch. Earlier it had something called the Social Reader, inviting news organizations to develop apps that would live within that space. Then, in 2012, it made changes that resulted in a collapse in traffic. Former Post digital editor David Beard told me that’s when he began turning his attention to newsletters, which the Post could control directly rather than having to depend on Mark Zuckerberg’s whims.

Now they’re doing it again. Mathew Ingram of the Columbia Journalism Review reports that Facebook is experimenting with its news feed to see what effect showing users less political news would have, and is also changing the way it measures how users interact with the site. The change, needless to say, comes after years of controversy over Facebook’s role in promoting misinformation and disinformation about politics, the Jan. 6 insurrection and the COVID-19 pandemic.

I’m sure Zuckerberg would be very happy if Facebook could serve solely as a platform for people to share uplifting personal news and cat photos. It would make his life a lot easier. But I’m also sure that he would be unwilling to see Facebook’s revenues drop even a little in order to make that happen. Remember that story about Facebook tweaking its algorithm to favor reliable news just before the 2020 election — and then changing it back afterwards because the company found that users spent less time on the platform? So he keeps trying this and that, hoping to alight upon the magic formula that will make him and his company less hated, and less likely to be hauled before congressional committees, without hurting his bottom line.

One of the latest efforts is his foray into local news. If Facebook can be a solution to the local news crisis, well, what’s not to like? Earlier this year Facebook and Substack announced initiatives to bring local news projects to their platforms for some very, very short money.

Earlier today, Sarah Scire of the Nieman Journalism Lab profiled some of the 25 local journalists who are setting up shop on Bulletin, Facebook’s new newsletter platform. They seem like an idealistic lot, with about half the newsletters being produced by journalists of color. But there are warning signs. Scire writes:

Facebook says it’s providing “licensing fees” to the local journalists as part of a “multi-year commitment” but spokesperson Erin Miller would not specify how much the company is paying the writers or for how long. The company has said it won’t take a cut of subscription revenue “for the length of these partnerships.” But, again, it’s not saying how long those partnerships will last.

How long will Facebook’s commitment to local news last before it goes the way of the Social Reader and Instant Articles? I don’t like playing the cynic, especially about a program that could help community journalists and the audiences they serve. But cynicism about Facebook is the only stance that seems realistic after years of bad behavior and broken promises.

Researchers dig up embarrassing data about Facebook — and lose access to their accounts

Photo (cc) 2011 by thierry ehrmann

Previously published at GBH News.

For researchers, Facebook is something of a black box. It’s hard to know what its 2.8 billion active users across the globe are seeing at any given time because the social media giant keeps most of its data to itself. If some users are seeing ads aimed at “Jew haters,” or Russian-generated memes comparing Hillary Clinton to Satan, well, so be it. Mark Zuckerberg has his strategy down cold: apologize when exposed, then move on to the next appalling scheme.

Some data scientists, though, have managed to pierce the darkness. Among them are Laura Edelson and Damon McCoy of New York University’s Center for Cybersecurity. With a tool called Ad Observer, which volunteers add to their browsers, they were able to track ads that Facebook users were being exposed to and draw some conclusions. For instance, they learned that users are more likely to engage with extreme falsehoods than with truthful material, and that more than 100,000 political ads are missing from an archive Facebook set up for researchers.
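To make concrete what that kind of crowdsourced auditing can look like, here is a minimal sketch in Python. It is not NYU’s actual pipeline; the file names, field names (“ad_id”, “is_political”) and matching logic are all assumptions for illustration. It simply compares ads that volunteers reported seeing against a public ad archive and flags political ads that never made it into the archive.

```python
import json

def load_records(path):
    """Load a list of ad records from a JSON file. Each record is assumed to
    look like: {"ad_id": "...", "advertiser": "...", "is_political": true}."""
    with open(path) as f:
        return json.load(f)

def missing_political_ads(observed_ads, archive_ads):
    """Return the political ads volunteers saw that are absent from the archive."""
    archived_ids = {ad["ad_id"] for ad in archive_ads}
    return [
        ad for ad in observed_ads
        if ad.get("is_political") and ad["ad_id"] not in archived_ids
    ]

if __name__ == "__main__":
    observed = load_records("ad_observer_submissions.json")  # hypothetical crowdsourced export
    archive = load_records("public_ad_archive.json")         # hypothetical archive export
    missing = missing_political_ads(observed, archive)
    print(f"{len(missing)} observed political ads are missing from the archive")
```

The value of the approach is that it relies only on what volunteers themselves saw and chose to share, which is why the researchers argue it raises no genuine privacy problem.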

As you would expect, Facebook executives took these findings seriously. So what did they do? Did they change the algorithm to make it more likely that users would see reliable information in their news feed? Did they restore the missing ads and take steps to make sure such omissions wouldn’t happen again?

They did not. Instead, they cut off access to Edelson’s and McCoy’s accounts, making it harder for them to dig up such embarrassing facts in the future.

“There is still a lot of important research we want to do,” they wrote in a recent New York Times op-ed. “When Facebook shut down our accounts, we had just begun studies intended to determine whether the platform is contributing to vaccine hesitancy and sowing distrust in elections. We were also trying to figure out what role the platform may have played leading up to the Capitol assault on Jan. 6.”

In other words, they want to find out how responsible Zuckerberg, Sheryl Sandberg and the rest are for spreading a deadly illness and encouraging an armed insurrection. No wonder Facebook looked at what the researchers were doing and told them, gee, you know, we’d love to help, but you’re violating our privacy rules.

But that’s not even a real concern. Writing at the Columbia Journalism Review, Mathew Ingram points out that the privacy rules Facebook agreed to following the Cambridge Analytica scandal apply to Facebook itself, not to users who voluntarily agree to provide information to researchers.

Ingram quotes Princeton professor Jonathan Mayer, an adviser to Vice President Kamala Harris when she was a senator, who tweeted: “Facebook’s legal argument is bogus. The order restricts how *Facebook* shares user information. It doesn’t preclude *users* from volunteering information about their experiences on the platform, including through a browser extension.”

As Ingram describes it, and as Edelson and McCoy themselves have said, Facebook’s actions haven’t stopped their work altogether, but they have slowed it down and made it more difficult. Needless to say, the company should be doing everything it can to help with such research. Then again, Zuckerberg has never shown much regard for such mundane matters as public health and the future of democracy, especially when there’s money to be made.

By contrast, Facebook’s social media competitor Twitter has actually been much more open about making its data available to researchers. My Northeastern colleague John Wihbey, who co-authored an important study several years ago about how journalists use Twitter, says the difference explains why there have been more studies published about Twitter than Facebook. “This is unfortunate,” he says, “as it is a smaller network and less representative of the general public.”

It’s like the old saw about looking for your car keys under a street light because that’s where the light is. Trouble is, with fewer than 400 million active users, Twitter is little more than a rounding error in Facebook’s universe.

Earlier this year, MIT’s Technology Review published a remarkable story documenting how Facebook shied away from cracking down on extremist content, focusing instead on placating Donald Trump and other figures on the political right before the 2020 election. Needless to say, the NYU researchers represent an especially potent threat to the Zuckerborg since they plan to focus on the role that Facebook played in amplifying the disinformation that led to the insurrection, whose aftermath continues to befoul our body politic.

When the history of this ugly era is written, the two media giants that will stand out for their malignity are Fox News, for knowingly poisoning tens of millions of people with toxic falsehoods, and Facebook, for allowing its platform to be used to amplify those falsehoods. Eventually, the truth will be told — no matter what steps Zuckerberg takes to slow it down. There should be hell to pay.

Facebook cuts access to data that was being used to embarrass the company

Facebook cuts researchers’ access to data, claiming privacy violations. It seems more likely, though, that the Zuckerborg was tired of being embarrassed by the stories that were developed from that data. Mathew Ingram of the Columbia Journalism Review explains.

Tiny News Collective to provide funding to six local news start-ups

Six local news projects will launch or expand after winning a competition held by the Tiny News Collective — a joint venture of LION (Local Independent Online News) Publishers and News Catalyst, based at Temple University. News Catalyst receives funding from the Knight Foundation and the Lenfest Institute. According to the announcement:

Thanks to a partnership with the Google News Initiative, each organization in the first cohort will receive a $15,000 stipend to help create the capacity for the founders to get started. In addition, the GNI has funded their first year of membership dues in the Collective and LION Publishers.

The projects range from an organization covering education news in part of Orange County, California, to an outlet with the wonderful name Black by God, which seeks “to share perspectives that cultivate, curate, and elevate Black voices from West Virginia.”

Forty organizations applied. Among the judges were Kate Maxwell, co-founder and publisher of The Mendocino Voice, a news co-op that is one of the local news projects I’m following for a book I’m co-authoring with Ellen Clegg.

The Tiny News Collective strikes me as a more interesting approach to dealing with the local news crisis than initiatives unveiled recently by Substack and Facebook. Those require you to set up shop on their platforms. By contrast, the Tiny News Collective is aimed at helping community journalism entrepreneurs to achieve sustainability on their own rather than become cogs in someone else’s machine.


