Why Section 230 should be curbed for algorithmically driven platforms

Facebook whistleblower Frances Haugen testifies on Capitol Hill Tuesday.

Facebook is in the midst of what we can only hope will prove to be an existential crisis. So I was struck this morning when Boston Globe technology columnist Hiawatha Bray suggested a step that I proposed more than a year ago — eliminating Section 230 protections from social media platforms that use algorithms. Bray writes:

Maybe we should eliminate Section 230 protections for algorithmically powered social networks. For Internet sites that let readers find their own way around, the law would remain the same. But a Facebook or Twitter or YouTube or TikTok could be sued by private citizens — not the government — for postings that defame somebody or which threaten violence.

Here’s what I wrote for GBH News in June 2020:

One possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers.

I hope it’s an idea whose time has come.

Facebook is in trouble again. Is this the time that it will finally matter?

Drawing (cc) 2019 by Carnby

Could this be the beginning of the end for Facebook?

Even the Cambridge Analytica scandal didn’t bring the sort of white-hot scrutiny the social media giant has been subjected to over the past few weeks — starting with The Wall Street Journal’s “Facebook Files” series, which proved that company officials were well aware their product had gone septic, and culminating in Sunday’s “60 Minutes” interview with the Journal’s source, Frances Haugen.

As we’ve seen over and over, though, these crises have a tendency to blow over. You could say that “this time it feels different,” but I’m not sure it does. Mark Zuckerberg and company have shown an amazing ability to pick themselves up and keep going, mainly because their 2.8 billion monthly active users show an amazing ability not to care.

On Monday, New York Times technology columnist Kevin Roose wondered whether the game really is up and argued that Facebook is now on the decline. He wrote:

What I’m talking about is a kind of slow, steady decline that anyone who has ever seen a dying company up close can recognize. It’s a cloud of existential dread that hangs over an organization whose best days are behind it, influencing every managerial priority and product decision and leading to increasingly desperate attempts to find a way out. This kind of decline is not necessarily visible from the outside, but insiders see a hundred small, disquieting signs of it every day — user-hostile growth hacks, frenetic pivots, executive paranoia, the gradual attrition of talented colleagues.

The trouble is, as Roose concedes, it could take Facebook an awfully long time to die, and it may prove to be even more of a threat to our culture during its waning years than it was on the way up.

I suspect what keeps Facebook from imploding is that, for most people, it works as intended. Very few of us are spurning vaccines or killing innocent people in Myanmar because of what we’ve seen on Facebook. Instead, we’re sharing personal updates, family photos and, yes, some news stories we’ve run across. For the most part, I like Facebook, even as I recognize what a toxic effect it’s having.

The very real damage that Facebook is doing seems far removed from the experience most of its customers have. And that is what’s going to make it incredibly difficult to do anything about it.

The Wall Street Journal exposes Facebook’s lies about content moderation

Comet Ping Pong. Photo (cc) 2016 by DOCLVHUGO.

What could shock us about Facebook at this point? That Mark Zuckerberg and Sheryl Sandberg are getting ready to shut it down and donate all of their wealth because of their anguish over how toxic the platform has become?

No, we all know there is no bottom to Facebook. So Jeff Horwitz’s investigative report in The Wall Street Journal on Monday — revealing the extent to which celebrities and politicians are allowed to break rules the rest of us must follow — was more confirmatory than revelatory.

That’s not to say it lacks value. Seeing it all laid out in internal company documents is pretty stunning, even if the information isn’t especially surprising.


The story involves a program called XCheck, under which VIP users are given special treatment. Incredibly, there are 5.8 million people who fall into this category, so I guess you could say they’re not all that special. Horwitz explains: “Some users are ‘whitelisted’ — rendered immune from enforcement actions — while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.”
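The mechanics are simple enough to sketch. Here is a minimal, entirely hypothetical rendering in Python of the two-tier logic the documents describe; every name and rule in it is invented, since the real XCheck system is not public.

```python
# Hypothetical sketch of the two-tier enforcement the Journal describes.
# All names are invented; Facebook's actual XCheck system is not public.
WHITELIST = {"whitelisted_vip"}           # "rendered immune from enforcement"
XCHECK = {"shielded_celebrity"}           # enforcement deferred to human review
REVIEW_QUEUE: list[tuple[str, str]] = []  # the reviews "that often never come"

def moderate(user: str, post: str, violates_rules: bool) -> str:
    if not violates_rules:
        return "published"
    if user in WHITELIST:
        return "published"                 # no enforcement at all
    if user in XCHECK:
        REVIEW_QUEUE.append((user, post))  # the post stays up in the meantime
        return "published, pending review"
    return "removed"                       # the rule for everyone else

print(moderate("ordinary_user", "rule-breaking post", True))       # removed
print(moderate("shielded_celebrity", "rule-breaking post", True))  # stays up
```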

And here’s the killer paragraph, quoting a 2019 internal review:

“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”

Among other things, the story reveals that Facebook has lied to the Oversight Board it set up to review its content-moderation decisions — news that should prompt the entire board to resign.

Perhaps the worst abuse documented by Horwitz involves the Brazilian soccer star Neymar:

After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself — and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.

Facebook’s standard procedure for handling the posting of “nonconsensual intimate imagery” is simple: Delete it. But Neymar was protected by XCheck.

For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as “revenge porn,” exposing the woman to what an employee referred to in the review as abuse from other users.

“This included the video being reposted more than 6,000 times, bullying and harassment about her character,” the review found.

As good a story as this is, there’s a weird instance of both-sides-ism near the top. Horwitz writes: “Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up ‘pedophile rings,’ and that then-President Donald Trump had called all refugees seeking asylum ‘animals,’ according to the documents.”

The pedophile claim, of course, is better known as Pizzagate, the ur-conspiracy theory that was later folded into QAnon and that led to an infamous shooting incident at the Comet Ping Pong pizza restaurant in Washington in 2016. Trump, on the other hand, had this to say in 2018, according to USA Today: “We have people coming into the country or trying to come in, we’re stopping a lot of them, but we’re taking people out of the country. You wouldn’t believe how bad these people are. These aren’t people. These are animals.”

Apparently the claim about Trump was rated as false because he appeared to be referring specifically to gang members, not to “all” refugees. But that “all” is doing a lot of work.

The Journal series continues today with a look at how Instagram is damaging the self-esteem of teenage girls — and at how Facebook, which owns the service, knows about it and isn’t doing anything.

The demise of Adobe Flash broke the 9/11 web. But it’s just the tip of a bigger issue.

This is an important story — not just because some crucial 9/11 coverage has been lost or even because the demise of Adobe Flash means that parts of the internet are now broken. Rather, it illustrates that the internet is, in many ways, an ephemeral medium, meaning that we simply can’t preserve and archive our history the way we could during the print era.

Clare Duffy and Kerry Flynn report for CNN.com that The Washington Post, ABC News and CNN itself are among the news organizations whose interactive presentations in the aftermath of 9/11 no longer work properly.


As they recount, Flash was a real advance in the early days of the web, an important step forward for video and interactive graphics. But the late Steve Jobs, citing Flash’s security flaws, decreed that Apple’s iPhone and iPad would not run it. At that point the platform began to crumble, and Adobe pulled support for it at the end of 2020.

Duffy and Flynn write that some efforts are under way to use Flash emulators in order to bring some old content back to life. Adobe, which is worth $314 billion, ought to spend a few nickels to help with that effort.

More broadly, though, the problem with Flash illustrates how the internet decays over time. Link rot is an ongoing frustration — you link to something, go back a year or five later, and find that the content has moved or been taken down. Publications go out of business, taking their websites with them. Or they change content-management systems, resulting in new URLs for everything.
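Link rot is also easy to measure. Here is a small checker, a sketch that uses only Python’s standard library, which flags links that have moved or died. (Some servers refuse HEAD requests, so treat its verdicts as indicative rather than definitive.)

```python
# A small link-rot checker using only the Python standard library.
import urllib.request
import urllib.error

def check(url: str) -> str:
    req = urllib.request.Request(url, method="HEAD")  # headers only, no body
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            # urllib follows redirects; a different final URL means the page moved
            if resp.geturl() != url:
                return f"moved -> {resp.geturl()}"
            return "ok"
    except urllib.error.HTTPError as e:
        return f"broken (HTTP {e.code})"    # 404 Not Found, 410 Gone, etc.
    except urllib.error.URLError as e:
        return f"unreachable ({e.reason})"  # the whole domain may be gone

for url in ["https://example.com/", "https://example.com/no-such-page"]:
    print(url, "->", check(url))
```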

We’re all grateful for the work that the Internet Archive does in preserving as much as it can. Here, for instance, is the home page of The New York Times on the evening of Sept. 11, 2001.

But what’s available online isn’t nearly as complete as what’s in print. For the moment, at least, we can still go to the library and look at microfilm of print editions for publications that pay little attention to preserving their digital past. It won’t be too many years, though, before digital is all we’ve got.

Apple’s attempted crackdown on child sexual abuse leads to a battle over privacy

Apple CEO Tim Cook. Photo (cc) 2017 by Austin Community College.

Previously published at GBH News.

There is no privacy on the internet.

You would think such a commonplace observation hardly needs to be said out loud. In recent years, though, Apple has tried to market itself as the great exception.

“Privacy is built in from the beginning,” reads Apple’s privacy policy. “Our products and features include innovative privacy technologies and techniques designed to minimize how much of your data we — or anyone else — can access. And powerful security features help prevent anyone except you from being able to access your information. We are constantly working on new ways to keep your personal information safe.”

All that has now blown up in Apple’s face. Last Friday, the company backed off from a controversial initiative that would have allowed its iOS devices — that is, iPhones and iPads — to be scanned for the presence of child sexual abuse material, or CSAM. The policy, announced in early August, proved wildly unpopular with privacy advocates, who warned that it could open a backdoor to repressive governments seeking to spy on dissidents. Apple cooperates with China, for instance, arguing that it is bound by the laws of the countries in which it operates.

What made Apple’s effort especially vulnerable to criticism was that it involved placing spyware directly on users’ devices. Although surveillance wouldn’t actually kick in unless users backed up their devices to Apple’s iCloud service, the plan raised alarms that the company was preparing to engage in phone-level snooping.
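The underlying technique is worth sketching, if only to see what “scanning” means here. What follows is a drastic simplification in Python: Apple’s actual design used a perceptual hash called NeuralHash, a blinded database and cryptographic threshold schemes, none of which appear in this toy version.

```python
# Drastically simplified sketch of on-device image matching. Apple's real
# design (NeuralHash, blinded database, threshold cryptography) is far more
# elaborate; everything below is illustrative only.
import hashlib

# Hypothetical stand-in for the database of fingerprints of known images.
KNOWN_HASHES = {hashlib.sha256(b"bytes of a known abusive image").hexdigest()}
MATCH_THRESHOLD = 30  # Apple said accounts would be flagged only past a threshold

def fingerprint(photo_bytes: bytes) -> str:
    # Real systems use perceptual hashes that survive resizing and
    # recompression; SHA-256 is used here only to keep the sketch short.
    return hashlib.sha256(photo_bytes).hexdigest()

def scan_library(photos: list[bytes]) -> bool:
    matches = sum(fingerprint(p) in KNOWN_HASHES for p in photos)
    return matches >= MATCH_THRESHOLD  # True means escalate to human review
```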

“Apple has put in place elaborate measures to stop abuse from happening,” wrote Tatum Hunter and Reed Albergotti in The Washington Post. “But part of the problem is the unknown. iPhone users don’t know exactly where this is all headed, and while they might trust Apple, there is a nagging suspicion among privacy advocates and security researchers that something could go wrong.”

The initiative has proved to be a public-relations disaster for Apple. Albergotti, who apparently had enough of the company’s attempts at spin, wrote a remarkable sentence in his Friday story reporting the abrupt reversal: “Apple spokesman Fred Sainz said he would not provide a statement on Friday’s announcement because The Washington Post would not agree to use it without naming the spokesperson.”

That, in turn, brought an attaboy tweet from Albergotti’s Post colleague Cristiano Lima, complete with flames and applauding hands, which promptly went viral.

“We in the press ought to do this far, far more often,” tweeted Troy Wolverton, managing editor of the Silicon Valley Business Journal, in a characteristically supportive response.

Even though the media rely on unnamed sources far too often, my own view is that there would have been nothing wrong with Albergotti’s going along with Sainz’s request. Sainz was essentially offering an on-the-record quote from Apple.

(Still, it’s hard not to experience a zing of delight at Albergotti’s insouciance. Now let’s see the Post do the same with politicians and government officials.)

Apple has gotten a lot of mileage out of its embrace of privacy. Tim Cook, the company’s chief executive, delivered a speech earlier this year in which he attempted to position Apple as the ethical alternative to Google, Facebook and Amazon, whose business models depend on hoovering up vast amounts of data from their customers in order to sell them more stuff.

“If we accept as normal and unavoidable that everything in our lives can be aggregated and sold, we lose so much more than data, we lose the freedom to be human,” Cook said. “And yet, this is a hopeful new season, a time of thoughtfulness and reform.”

The current controversy comes just months after Apple unveiled new features in its iOS operating software that made it more difficult for users to be tracked, offering greater security for their email and more protection from advertisers.

Yet it always seemed that there was something performative about Apple’s embrace of privacy. For instance, although Apple allows users to maintain tight control over their iPhones and iMessages, the company continues to hold the encryption keys to iCloud — which, in turn, leaves it subject to court orders to turn over user data.

“The dirty little secret with nearly all of Apple’s privacy promises is that there’s been a backdoor all along,” wrote privacy advocates Albert Fox Cahn and Evan Selinger in a recent commentary for Wired. “Whether it’s iPhone data from Apple’s latest devices or the iMessage data that the company constantly championed as being ‘end-to-end encrypted,’ all of this data is vulnerable when using iCloud.”
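The distinction comes down to who holds the key, and it takes only a few lines to demonstrate. This sketch uses the third-party Python cryptography package; the cipher is equally strong in both cases, and what differs is custody.

```python
# Why key custody matters more than cipher strength.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# Provider-held key: the "backdoor" is simply possession of the key.
provider_key = Fernet.generate_key()         # stored on the provider's servers
backup = Fernet(provider_key).encrypt(b"my private messages")
print(Fernet(provider_key).decrypt(backup))  # provider, or a court, can read it

# End-to-end: the key never leaves the device, so the provider holds
# only ciphertext it cannot open, court order or not.
device_key = Fernet.generate_key()           # known only to the user's device
e2e_backup = Fernet(device_key).encrypt(b"my private messages")
```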

Of course, you might argue that there ought to be reasonable limits to privacy. Just as the First Amendment does not protect obscenity, libel or serious breaches of national security, privacy laws — or, in this case, a powerful company’s policies — shouldn’t protect child pornography or certain other activities such as terrorist threats. Fair enough.

But as the aforementioned Selinger, a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University, argued over the weekend in a Boston Globe Ideas piece, slippery-slope arguments, though often bogus, are sometimes valid.

“Governments worldwide have a strong incentive to ask, if not demand, that Apple extend its monitoring to search for evidence of interest in politically controversial material and participation in politically contentious activities,” Selinger wrote, adding: “The strong incentives to push for intensified surveillance combined with the low costs for repurposing Apple’s technology make this situation a real slippery slope.”

Five years ago, the FBI sought a court order that would have forced Apple to provide the encryption keys so that investigators could access the data on an iPhone used by one of the shooters in a deadly terrorist attack in San Bernardino, California. Apple refused, which set off a public controversy, including a debate between former CIA director John Deutch and Harvard Law School professor Jonathan Zittrain that I covered for GBH News.

The controversy proved to be for naught. In the end, the FBI was able to break into the phone without Apple’s help. Which suggests a solution, however imperfect, to the current controversy.

Apple should withdraw its plan to install spyware directly on users’ iPhones and iPads. And it should remind users that anything stored in iCloud might be revealed in response to a legitimate court order. More than anything, Apple needs to stop making unrealistic promises and remind its users:

There is no privacy on the internet.

Facebook’s tortured relationship with journalism gets a few more tweaks

Facebook has long had a tortured relationship with journalism. When I was reporting for “The Return of the Moguls” in 2015 and ’16, news publishers were embracing Instant Articles, news stories that would load quickly but that would also live on Facebook’s platform rather than the publisher’s.

The Washington Post was so committed to the project that it published every single piece of content as an Instant Article. Shailesh Prakash, the Post’s chief technologist, would talk about the “Facebook barbell,” a strategy that aimed to convert users at the Facebook end of the barbell into paying subscribers at the Post end.

Instant Articles never really went away, but enthusiasm waned — especially when, in 2018, Facebook began downgrading news in its algorithm in favor of posts from family and friends.

Nor was that the first time Facebook pulled a bait-and-switch. Earlier it had something called the Social Reader, inviting news organizations to develop apps that would live within that space. Then, in 2012, it made changes that resulted in a collapse in traffic. Former Post digital editor David Beard told me that’s when he began turning his attention to newsletters, which the Post could control directly rather than having to depend on Mark Zuckerberg’s whims.

Now they’re doing it again. Mathew Ingram of the Columbia Journalism Review reports that Facebook is experimenting with showing users less political news in its feed, and with changing the way it measures how users interact with the site. The change, needless to say, comes after years of controversy over Facebook’s role in promoting misinformation and disinformation about politics, the Jan. 6 insurrection and the COVID-19 pandemic.

I’m sure Zuckerberg would be very happy if Facebook could serve solely as a platform for people to share uplifting personal news and cat photos. It would make his life a lot easier. But I’m also sure that he would be unwilling to see Facebook’s revenues drop even a little in order to make that happen. Remember that story about Facebook tweaking its algorithm to favor reliable news just before the 2020 election — and then changing it back afterward because the company found that users spent less time on the platform? So he keeps trying this and that, hoping to hit upon the magic formula that will make him and his company less hated, and less likely to be hauled before congressional committees, without hurting his bottom line.
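You don’t need Facebook’s source code to see how such a dial works. Here is a toy ranking sketch in Python, with every field and weight invented, in which a single number decides whether reliable news or outrage floats to the top.

```python
# Toy feed ranking. The point is the single dial: raise NEWS_WEIGHT and
# reliable news rises; lower it and raw engagement wins. All numbers and
# field names are hypothetical; the real ranking model is vastly bigger.
NEWS_WEIGHT = 0.3

posts = [
    {"text": "outrage bait",  "likes": 900, "comments": 300, "shares": 120,
     "publisher_quality": 0},
    {"text": "reliable news", "likes": 200, "comments": 40,  "shares": 30,
     "publisher_quality": 1000},
]

def score(post: dict) -> float:
    engagement = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return engagement + NEWS_WEIGHT * post["publisher_quality"]

for post in sorted(posts, key=score, reverse=True):
    print(post["text"], round(score(post)))  # try NEWS_WEIGHT = 2.0 and rerun
```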

One of the latest efforts is his foray into local news. If Facebook can be a solution to the local news crisis, well, what’s not to like? Earlier this year Facebook and Substack announced initiatives to bring local news projects to their platforms for some very, very short money.

Earlier today, Sarah Scire of the Nieman Journalism Lab profiled some of the 25 local journalists who are setting up shop on Bulletin, Facebook’s new newsletter platform. They seem like an idealistic lot, with about half the newsletters being produced by journalists of color. But there are warning signs. Scire writes:

Facebook says it’s providing “licensing fees” to the local journalists as part of a “multi-year commitment” but spokesperson Erin Miller would not specify how much the company is paying the writers or for how long. The company has said it won’t take a cut of subscription revenue “for the length of these partnerships.” But, again, it’s not saying how long those partnerships will last.

How long will Facebook’s commitment to local news last before it goes the way of the Social Reader and Instant Articles? I don’t like playing the cynic, especially about a program that could help community journalists and the audiences they serve. But cynicism about Facebook is the only stance that seems realistic after years of bad behavior and broken promises.

Facebook cuts access to data that was being used to embarrass the company

Facebook cuts researchers’ access to data, claiming privacy violations. It seems more likely, though, that the Zuckerborg was tired of being embarrassed by the stories that were developed from that data. Mathew Ingram of the Columbia Journalism Review explains.

Coming to terms with the false promise of Twitter

Photo (cc) 2014 by =Nahemoth=

Roxane Gay brilliantly captures my own love/hate relationship with Twitter. In a New York Times essay published on Sunday, she writes:

After a while, the lines blur, and it’s not at all clear what friend or foe look like, or how we as humans should interact in this place. After being on the receiving end of enough aggression, everything starts to feel like an attack. Your skin thins until you have no defenses left. It becomes harder and harder to distinguish good-faith criticism from pettiness or cruelty. It becomes harder to disinvest from pointless arguments that have nothing at all to do with you. An experience that was once charming and fun becomes stressful and largely unpleasant. I don’t think I’m alone in feeling this way. We have all become hammers in search of nails.

This is perfect. It’s not that people are terrible on Twitter, although they are. It’s that it’s nearly impossible to avoid becoming the worst versions of ourselves.

Twitter may not be as harmful to the culture as Facebook, but for some reason I’ve found interactions on Facebook — as well as my own behavior — to be more congenial than on Twitter. Of course, on Facebook you have more control over whom you choose to interact with, and there’s a lot more sharing of family photos and other cheerful content. Twitter, by contrast, can feel like a never-ending exercise in hyper-aggression and performative defensiveness.

From time to time I’ve tried to cut back and use Twitter only for professional reasons — promoting my work and that of others, tweeting less and reading more of what others have to say. It works to an extent, but I always slide back. Twitter seems to reward snark, but what, really, is the reward? More likes and retweets? Who cares?

I can’t leave — Twitter is too important to my work. But Gay’s fine piece is a reminder that social media have fallen far short of what we were hoping for 12 to 15 years ago, and that we ourselves are largely to blame.

A small example of how racially biased algorithms distort social media

You may have heard that the algorithms used by Facebook and other social media platforms are racially biased. I ran into a small but interesting example of that earlier today.

My previous post is about a webinar on news co-ops that I attended last week. I used a photo of Kevon Paynter, co-founder of Bloc by Block News, as the lead art and a photo of Jasper Wang, co-founder of Defector, well down in the piece.

But when I posted links on Facebook, Twitter and LinkedIn, all three of them automatically grabbed the photo of Wang as the image that would go with the link. For example, here’s how it appeared on Twitter.

I don’t know what happened. Paynter was more central to what I was writing, which is why I led with his photo. Paynter is Black; Wang is of Asian descent. There’s more contrast in the image of Wang, which may be why the algorithms identified it as a superior picture. But in so doing they ignored my choice of Paynter as the lead.
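For what it’s worth, publishers aren’t entirely at the algorithms’ mercy here. Facebook, Twitter and LinkedIn all build link previews from a page’s Open Graph tags, and a declared og:image generally overrides their guess. This standard-library Python sketch shows the tag the platforms read; had my post declared an og:image pointing at Paynter’s photo, all three would ordinarily have used it.

```python
# Extract the og:image tag that social platforms use to pick a link preview.
from html.parser import HTMLParser
import urllib.request

class OGImageParser(HTMLParser):
    """Finds the page's declared preview image, if any."""
    def __init__(self):
        super().__init__()
        self.image = None

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and attr.get("property") == "og:image":
            self.image = attr.get("content")

def preview_image(url: str):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    parser = OGImageParser()
    parser.feed(html)
    return parser.image  # None means the platform falls back to its own pick

print(preview_image("https://example.com/"))
```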

File this under “Things that make you go hmmmm.”

Can artificial intelligence help local news? Sure. And it can cause great harm as well.

Image via Pixabay


I’ll admit that I was more than a little skeptical when the Knight Foundation announced last week that it would award $3 million in grants to help local news organizations use artificial intelligence. My first reaction was that dousing the cash with gasoline and tossing a match would be just as effective.

But then I started thinking about how AI has enhanced my own work as a journalist. For instance, just a few years ago I had two unappetizing choices after I recorded an interview: transcribing it myself or sending it out to an actual human being to do the work at considerable expense. Now I use an automated system, based on AI, that does a decent job at a fraction of the cost.

Or consider Google, whose search engine makes use of AI. At one time, I’d have to travel to Beacon Hill if I wanted to look up state and local campaign finance records — and then pore through them by hand, taking notes or making photocopies as long as the quarters held out. These days I can search for “Massachusetts campaign finance reports” and have what I need in a few seconds.

Given that local journalism is in crisis, what’s not to like about the idea of helping community news organizations develop the tools they need to automate more of what they do?

Well, a few things, in fact.

Foremost among the downsides is the use of AI to produce robot-written news stories. Such a system has been in use at The Washington Post for several years to produce reports about high school football. Input a box score and out comes a story that looks more or less like an actual person wrote it. Some news organizations are doing the same with financial data. It sounds innocuous enough given that much of this work would probably go undone if it couldn’t be automated. But let’s curb our enthusiasm.
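If “robot-written” sounds exotic, it isn’t. Here is a bare-bones Python sketch of the template approach, structured data in and boilerplate prose out. Real systems, such as the Post’s Heliograf, use far more templates and data points, but the principle is this simple.

```python
# Template-driven game story: a box score in, a plausible sentence out.
def game_story(box: dict) -> str:
    home_won = box["home_score"] > box["away_score"]
    winner, loser = (box["home"], box["away"]) if home_won else (box["away"], box["home"])
    hi = max(box["home_score"], box["away_score"])
    lo = min(box["home_score"], box["away_score"])
    margin = hi - lo
    verb = "edged" if margin <= 3 else "beat" if margin <= 14 else "routed"
    return f"{winner} {verb} {loser} {hi}-{lo} on {box['day']} night."

print(game_story({"home": "Medford", "away": "Malden",
                  "home_score": 28, "away_score": 7, "day": "Friday"}))
# -> Medford routed Malden 28-7 on Friday night.
```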

Patrick White, a journalism professor at the University of Quebec in Montreal, sounded this unrealistically hopeful note in a piece for The Conversation about a year ago: “Artificial intelligence is not there to replace journalists or eliminate jobs.” According to one estimate cited by White, AI would have only a minimal effect on newsroom employment and would “reorient editors and journalists towards value-added content: long-form journalism, feature interviews, analysis, data-driven journalism and investigative journalism.”

Uh, Professor White, let me introduce you to the two most bottom line-obsessed newspaper publishers in the United States — Alden Global Capital and Gannett. If they could, they’d unleash the algorithms to cover everything up to and including city council meetings, mayoral speeches and development proposals. And if they could figure out how to program the robots to write human-interest stories and investigative reports, well, they’d do that too.

Another danger AI poses is that it can track scrolling and clicking patterns to personalize a news report. Over time, for instance, your Boston Globe would look different from mine. Remember the “Daily Me,” an early experiment in individualized news popularized by MIT Media Lab founder Nicholas Negroponte? That didn’t quite come to pass. But it’s becoming increasingly feasible, and it represents one more step away from a common culture and a common set of facts, potentially adding another layer to the polarization that’s tearing us apart.
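The mechanics are mundane. A few lines of toy Python, with every name hypothetical, show how clicks compound into a front page no two readers share.

```python
# Toy personalization loop: each click nudges a per-reader topic weight,
# and the weights reorder the next front page.
from collections import defaultdict

topic_weight = defaultdict(lambda: 1.0)   # one dict per reader

def record_click(topic: str) -> None:
    topic_weight[topic] += 0.1            # interest compounds, quietly

def front_page(stories: list) -> list:
    return sorted(stories, key=lambda s: topic_weight[s["topic"]], reverse=True)

stories = [{"headline": "City council passes budget", "topic": "local"},
           {"headline": "Red Sox win again", "topic": "sports"}]
for _ in range(5):
    record_click("sports")                 # a week of baseball clicks...
print(front_page(stories)[0]["headline"])  # ...and sports now leads the page
```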

“Personalization of news … puts the public record at risk,” according to a report published in 2017 by Columbia’s Tow Center for Digital Journalism. “When everyone sees a different version of a story, there is no authoritative version to cite. The internet has also made it possible to remove content from the web, which may not be archived anywhere. There is no guarantee that what you see will be what everyone sees — or that it will be there in the future.”

Of course, AI has also made journalism better — and not just for transcribing interviews or Googling public records. As the Tow Center report also points out, AI makes it possible for investigative reporters to sift through thousands of records to find patterns, instances of wrongdoing or trends.

The Knight Foundation, in its press release announcing the grant, held out the promise that AI could reduce costs on the business side of news organizations — a crucial goal given how financially strapped most of them are. The $3 million will go to The Associated Press, Columbia University, the NYC Media Lab and the Partnership on AI. Under the terms of the grant, the four organizations will work together on projects such as training local journalists, developing revenue strategies and studying the ethical use of AI. It all sounds eminently worthy.

But there are always unintended consequences. The highly skilled people whom I used to pay to transcribe my interviews no longer have those jobs. High school students who might have gotten an opportunity to write up the exploits of their sports teams for a few bucks have been deprived of a chance at an early connection with news — an experience that might have turned them into paying customers or even journalists when they got older.

And local news, much of which is already produced at distant outposts, some of them overseas, is about to become that much more impersonal and removed from the communities it serves.