Apple’s attempted crackdown on child sexual abuse leads to a battle over privacy

Apple CEO Tim Cook. Photo (cc) 2017 by Austin Community College.

Previously published at GBH News.

There is no privacy on the internet.

You would think such a commonplace observation hardly needs to be said out loud. In recent years, though, Apple has tried to market itself as the great exception.

“Privacy is built in from the beginning,” reads Apple’s privacy policy. “Our products and features include innovative privacy technologies and techniques designed to minimize how much of your data we — or anyone else — can access. And powerful security features help prevent anyone except you from being able to access your information. We are constantly working on new ways to keep your personal information safe.”

All that has now blown up in Apple’s face. Last Friday, the company backed off from a controversial initiative that would have allowed its iOS devices — that is, iPhones and iPads — to be scanned for the presence of child sexual abuse material, or CSAM. The policy, announced in early August, proved wildly unpopular with privacy advocates, who warned that it could open a backdoor to repressive governments seeking to spy on dissidents. Apple cooperates with China, for instance, arguing that it is bound by the laws of the countries in which it operates.

What made Apple’s initiative especially vulnerable to criticism was that it involved placing spyware directly on users’ devices. Although the surveillance wouldn’t actually kick in unless users backed up their devices to Apple’s iCloud service, the plan raised alarms that the company was planning to engage in phone-level snooping.
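
In rough terms, the proposal called for matching fingerprints of photos on the device against a database of fingerprints of known abuse imagery before upload, so that only flagged matches, not the photos themselves, would leave the phone. The minimal sketch below illustrates only that general client-side matching idea; Apple’s actual design relied on a perceptual “NeuralHash” plus cryptographic threshold and private-set-intersection techniques that this toy example makes no attempt to reproduce.

```python
# Minimal sketch of client-side matching against a database of known-image
# fingerprints. Purely illustrative: Apple's proposal used a perceptual
# "NeuralHash" plus cryptographic threshold and private-set-intersection
# schemes, none of which this toy example reproduces.
import hashlib

KNOWN_FINGERPRINTS = {
    # In a real system these would be supplied by child-safety organizations;
    # this value is a placeholder.
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    # A cryptographic hash stands in for a perceptual hash here; a real
    # perceptual hash would tolerate resizing and re-encoding of the image.
    return hashlib.sha256(image_bytes).hexdigest()

def flag_before_upload(image_bytes: bytes) -> bool:
    """Return True if the photo matches a known fingerprint."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS

print(flag_before_upload(b"example photo bytes"))  # False for this placeholder
```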

“Apple has put in place elaborate measures to stop abuse from happening,” wrote Tatum Hunter and Reed Albergotti in The Washington Post. “But part of the problem is the unknown. iPhone users don’t know exactly where this is all headed, and while they might trust Apple, there is a nagging suspicion among privacy advocates and security researchers that something could go wrong.”

The initiative has proved to be a public-relations disaster for Apple. Albergotti, who apparently had had enough of the company’s attempts at spin, wrote a remarkable sentence in his Friday story reporting the abrupt reversal: “Apple spokesman Fred Sainz said he would not provide a statement on Friday’s announcement because The Washington Post would not agree to use it without naming the spokesperson.”

That, in turn, brought an attaboy tweet from Albergotti’s Post colleague Cristiano Lima, complete with flames and applauding hands, which promptly went viral.

“We in the press ought to do this far, far more often,” tweeted Troy Wolverton, managing editor of the Silicon Valley Business Journal, in a characteristically supportive response.

Even though the media rely on unnamed sources far too often, my own view is that there would have been nothing wrong with Albergotti’s going along with Sainz’s request. Sainz was essentially offering an on-the-record quote from Apple.

(Still, it’s hard not to experience a zing of delight at Albergotti’s insouciance. Now let’s see the Post do the same with politicians and government officials.)

Apple has gotten a lot of mileage out of its embrace of privacy. Tim Cook, the company’s chief executive, delivered a speech earlier this year in which he attempted to position Apple as the ethical alternative to Google, Facebook and Amazon, whose business models depend on hoovering up vast amounts of data from their customers in order to sell them more stuff.

“If we accept as normal and unavoidable that everything in our lives can be aggregated and sold, we lose so much more than data, we lose the freedom to be human,” Cook said. “And yet, this is a hopeful new season, a time of thoughtfulness and reform.”

The current controversy comes just months after Apple unveiled new features in its iOS operating software that made it more difficult for users to be tracked in a variety of ways, offering greater security for their email and more protection from advertisers.

Yet it always seemed that there was something performative about Apple’s embrace of privacy. For instance, although Apple allows users to maintain tight control over their iPhones and iMessages, the company continues to hold the encryption keys to iCloud — which, in turn, makes the company subject to court orders to turn over user data.

“The dirty little secret with nearly all of Apple’s privacy promises is that there’s been a backdoor all along,” wrote privacy advocates Albert Fox Cahn and Evan Selinger in a recent commentary for Wired. “Whether it’s iPhone data from Apple’s latest devices or the iMessage data that the company constantly championed as being ‘end-to-end encrypted,’ all of this data is vulnerable when using iCloud.”

Of course, you might argue that there ought to be reasonable limits to privacy. Just as the First Amendment does not protect obscenity, libel or serious breaches of national security, privacy laws — or, in this case, a powerful company’s policies — shouldn’t protect child pornography or certain other activities such as terrorist threats. Fair enough.

But as the aforementioned Selinger, a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University, argued over the weekend in a Boston Globe Ideas piece, slippery-slope arguments, though often bogus, are sometimes valid.

“Governments worldwide have a strong incentive to ask, if not demand, that Apple extend its monitoring to search for evidence of interest in politically controversial material and participation in politically contentious activities,” Selinger wrote, adding: “The strong incentives to push for intensified surveillance combined with the low costs for repurposing Apple’s technology make this situation a real slippery slope.”

Five years ago, the FBI sought a court order that would have forced Apple to help it unlock an iPhone used by one of the shooters in a deadly terrorist attack in San Bernardino, California. Apple refused, which set off a public controversy, including a debate between former CIA director John Deutch and Harvard Law School professor Jonathan Zittrain that I covered for GBH News.

The controversy proved to be for naught. In the end, the FBI was able to break into the phone without Apple’s help. Which suggests a solution, however imperfect, to the current controversy.

Apple should withdraw its plan to install spyware directly on users’ iPhones and iPads. And it should remind users that anything stored in iCloud might be revealed in response to a legitimate court order. More than anything, Apple needs to stop making unrealistic promises and remind its users:

There is no privacy on the internet.

Facebook’s tortured relationship with journalism gets a few more tweaks

Facebook has long had a tortured relationship with journalism. When I was reporting for “The Return of the Moguls” in 2015 and ’16, news publishers were embracing Instant Articles, news stories that would load quickly but that would also live on Facebook’s platform rather than the publisher’s.

The Washington Post was so committed to the project that it published every single piece of content as an Instant Article. Shailesh Prakash, the Post’s chief technologist, would talk about the “Facebook barbell,” a strategy that aimed to convert users at the Facebook end of the barbell into paying subscribers at the Post end.

Instant Articles never really went away, but enthusiasm waned — especially when, in 2018, Facebook began downgrading news in its algorithm in favor of posts from family and friends.

Nor was that the first time Facebook pulled a bait-and-switch. Earlier it had something called the Social Reader, inviting news organizations to develop apps that would live within that space. Then, in 2012, it made changes that resulted in a collapse in traffic. Former Post digital editor David Beard told me that’s when he began turning his attention to newsletters, which the Post could control directly rather than having to depend on Mark Zuckerberg’s whims.

Now Facebook is doing it again. Mathew Ingram of the Columbia Journalism Review reports that Facebook is experimenting with showing users less political news in their feeds and with changing the way it measures how users interact with the site. The change, needless to say, comes after years of controversy over Facebook’s role in promoting misinformation and disinformation about politics, the Jan. 6 insurrection and the COVID-19 pandemic.

I’m sure Zuckerberg would be very happy if Facebook could serve solely as a platform for people to share uplifting personal news and cat photos. It would make his life a lot easier. But I’m also sure that he would be unwilling to see Facebook’s revenues drop even a little in order to make that happen. Remember that story about Facebook tweaking its algorithm to favor reliable news just before the 2020 election — and then changing it back afterwards because the company found that users spent less time on the platform? So he keeps trying this and that, hoping to alight upon the magic formula that will make him and his company less hated, and less likely to be hauled before congressional committees, without hurting his bottom line.

One of the latest efforts is his foray into local news. If Facebook can be a solution to the local news crisis, well, what’s not to like? Earlier this year Facebook and Substack announced initiatives to bring local news projects to their platforms for some very, very short money.

Earlier today, Sarah Scire of the Nieman Journalism Lab profiled some of the 25 local journalists who are setting up shop on Bulletin, Facebook’s new newsletter platform. They seem like an idealistic lot, with about half the newsletters being produced by journalists of color. But there are warning signs. Scire writes:

Facebook says it’s providing “licensing fees” to the local journalists as part of a “multi-year commitment” but spokesperson Erin Miller would not specify how much the company is paying the writers or for how long. The company has said it won’t take a cut of subscription revenue “for the length of these partnerships.” But, again, it’s not saying how long those partnerships will last.

How long will Facebook’s commitment to local news last before it goes the way of the Social Reader and Instant Articles? I don’t like playing the cynic, especially about a program that could help community journalists and the audiences they serve. But cynicism about Facebook is the only stance that seems realistic after years of bad behavior and broken promises.

Facebook cuts access to data that was being used to embarrass the company

Facebook cuts researchers’ access to data, claiming privacy violations. It seems more likely, though, that the Zuckerborg was tired of being embarrassed by the stories that were developed from that data. Mathew Ingram of the Columbia Journalism Review explains.

Coming to terms with the false promise of Twitter

Photo (cc) 2014 by =Nahemoth=

Roxane Gay brilliantly captures my own love/hate relationship with Twitter. In a New York Times essay published on Sunday, she writes:

After a while, the lines blur, and it’s not at all clear what friend or foe look like, or how we as humans should interact in this place. After being on the receiving end of enough aggression, everything starts to feel like an attack. Your skin thins until you have no defenses left. It becomes harder and harder to distinguish good-faith criticism from pettiness or cruelty. It becomes harder to disinvest from pointless arguments that have nothing at all to do with you. An experience that was once charming and fun becomes stressful and largely unpleasant. I don’t think I’m alone in feeling this way. We have all become hammers in search of nails.

This is perfect. It’s not that people are terrible on Twitter, although they are. It’s that it’s nearly impossible to avoid becoming the worst versions of ourselves.

Twitter may not be as harmful to the culture as Facebook, but for some reason I’ve found interactions on Facebook — as well as my own behavior — to be more congenial than on Twitter. Of course, on Facebook you have more control over whom you choose to interact with, and there’s a lot more sharing of family photos and other cheerful content. Twitter, by contrast, can feel like a never-ending exercise in hyper-aggression and performative defensiveness.

From time to time I’ve tried to cut back and use Twitter only for professional reasons — promoting my work and that of others, tweeting less and reading more of what others have to say. It works to an extent, but I always slide back. Twitter seems to reward snark, but what, really, is the reward? More likes and retweets? Who cares?

I can’t leave — Twitter is too important to my work. But Gay’s fine piece is a reminder that social media have fallen far short of what we were hoping for 12 to 15 years ago, and that we ourselves are largely to blame.

A small example of how racially biased algorithms distort social media

You may have heard that the algorithms used by Facebook and other social media platforms are racially biased. I ran into a small but interesting example of that earlier today.

My previous post is about a webinar on news co-ops that I attended last week. I used a photo of Kevon Paynter, co-founder of Bloc by Block News, as the lead art and a photo of Jasper Wang, co-founder of Defector, well down in the piece.

But when I posted links on Facebook, Twitter and LinkedIn, all three of them automatically grabbed the photo of Wang as the image that would go with the link. For example, here’s how it appeared on Twitter.

I don’t know what happened. Paynter was more central to what I was writing, which is why I led with his photo. Paynter is Black; Wang is of Asian descent. There’s more contrast in the image of Wang, which may be why the algorithms identified it as a superior picture. But in so doing they ignored my choice of Paynter as the lead.
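
For what it’s worth, the mechanics are fairly standard: when you paste a link, a platform’s crawler first looks for the image the page itself declares in its Open Graph metadata (the og:image tag), and only falls back on its own undocumented image-selection heuristics, where choices like this one get made, if that tag is missing. Here is a minimal Python sketch of that first step; the URL is hypothetical, and nothing below reflects any platform’s actual fallback logic.

```python
# Minimal sketch: see which image a page explicitly offers to link-preview
# crawlers via its Open Graph metadata. The URL below is hypothetical, and
# platforms' fallback heuristics (used when og:image is absent) are not public.
import re
import urllib.request

def declared_preview_image(url: str) -> str | None:
    """Return the og:image URL declared in a page's HTML, if any."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    match = re.search(
        r'<meta[^>]+property=["\']og:image["\'][^>]+content=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return match.group(1) if match else None

print(declared_preview_image("https://example.com/news-co-ops-post"))  # hypothetical URL
```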

File this under “Things that make you go hmmmm.”

Can artificial intelligence help local news? Sure. And it can cause great harm as well.

Image via Pixabay

Previously published at GBH News.

I’ll admit that I was more than a little skeptical when the Knight Foundation announced last week that it would award $3 million in grants to help local news organizations use artificial intelligence. My first reaction was that dousing the cash with gasoline and tossing a match would be just as effective.

But then I started thinking about how AI has enhanced my own work as a journalist. For instance, just a few years ago I had two unappetizing choices after I recorded an interview: transcribing it myself or sending it out to an actual human being to do the work at considerable expense. Now I use an automated system, based on AI, that does a decent job at a fraction of the cost.

Or consider Google, whose search engine makes use of AI. At one time, I’d have to travel to Beacon Hill if I wanted to look up state and local campaign finance records — and then pore through them by hand, taking notes or making photocopies as long as the quarters held out. These days I can search for “Massachusetts campaign finance reports” and have what I need in a few seconds.

Given that local journalism is in crisis, what’s not to like about the idea of helping community news organizations develop the tools they need to automate more of what they do?

Well, a few things, in fact.

Foremost among the downsides is the use of AI to produce robot-written news stories. Such a system has been in use at The Washington Post for several years to produce reports about high school football. Input a box score and out comes a story that looks more or less like an actual person wrote it. Some news organizations are doing the same with financial data. It sounds innocuous enough given that much of this work would probably go undone if it couldn’t be automated. But let’s curb our enthusiasm.
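
To see why this kind of automation is so cheap, consider how little is involved: structured data goes in, a template fills in the blanks, and a passable game story comes out. Here is a minimal, purely illustrative sketch of the technique; it is not the Post’s actual system, whose internals are not public, and the teams, players and numbers are invented.

```python
# Minimal sketch of template-based story generation from a box score.
# Illustrative only: not any newsroom's actual system, and the teams,
# players and numbers are invented.
from dataclasses import dataclass

@dataclass
class BoxScore:
    home: str
    away: str
    home_pts: int
    away_pts: int
    top_player: str
    top_yards: int

def write_recap(b: BoxScore) -> str:
    winner, loser = (b.home, b.away) if b.home_pts > b.away_pts else (b.away, b.home)
    w_pts, l_pts = max(b.home_pts, b.away_pts), min(b.home_pts, b.away_pts)
    verb = "edged" if w_pts - l_pts <= 7 else "defeated"
    return (
        f"{winner} {verb} {loser} {w_pts}-{l_pts} on Friday night. "
        f"{b.top_player} led the way with {b.top_yards} rushing yards."
    )

print(write_recap(BoxScore("Central", "Eastern", 28, 21, "J. Smith", 142)))
```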

Patrick White, a journalism professor at the University of Quebec in Montreal, sounded this unrealistically hopeful note in a piece for The Conversation about a year ago: “Artificial intelligence is not there to replace journalists or eliminate jobs.” According to one estimate cited by White, AI would have only a minimal effect on newsroom employment and would “reorient editors and journalists towards value-added content: long-form journalism, feature interviews, analysis, data-driven journalism and investigative journalism.”

Uh, Professor White, let me introduce you to the two most bottom-line-obsessed newspaper publishers in the United States — Alden Global Capital and Gannett. If they could, they’d unleash the algorithms to cover everything up to and including city council meetings, mayoral speeches and development proposals. And if they could figure out how to program the robots to write human-interest stories and investigative reports, well, they’d do that too.

Another danger AI poses is that it can track scrolling and clicking patterns to personalize a news report. Over time, for instance, your Boston Globe would look different from mine. Remember the “Daily Me,” an early experiment in individualized news popularized by MIT Media Lab founder Nicholas Negroponte? That didn’t quite come to pass. But it’s becoming increasingly feasible, and it represents one more step away from a common culture and a common set of facts, potentially adding another layer to the polarization that’s tearing us apart.
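
The basic mechanism is simple enough to sketch: score each story by how closely its topic matches what a reader has clicked on before, then reorder the front page accordingly. The toy example below is only that, an assumption-laden illustration; real news recommenders use far richer signals, and nothing here reflects any particular outlet’s implementation.

```python
# Minimal sketch of click-history-based personalization. Illustrative only:
# real news recommenders use far richer signals than this topic-affinity count.
from collections import Counter

def personalize(stories: list[dict], click_history: list[str]) -> list[dict]:
    """Reorder stories so topics the reader has clicked on most come first."""
    affinity = Counter(click_history)  # e.g., {"sports": 2, "politics": 1}
    return sorted(stories, key=lambda s: affinity[s["topic"]], reverse=True)

stories = [
    {"headline": "City council weighs zoning change", "topic": "politics"},
    {"headline": "High school team wins league title", "topic": "sports"},
]
print(personalize(stories, click_history=["sports", "sports", "politics"]))
```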

“Personalization of news … puts the public record at risk,” according to a report published in 2017 by Columbia’s Tow Center for Digital Journalism. “When everyone sees a different version of a story, there is no authoritative version to cite. The internet has also made it possible to remove content from the web, which may not be archived anywhere. There is no guarantee that what you see will be what everyone sees — or that it will be there in the future.”

Of course, AI has also made journalism better — and not just for transcribing interviews or Googling public records. As the Tow Center report also points out, AI makes it possible for investigative reporters to sift through thousands of records to find patterns, instances of wrongdoing or trends.

The Knight Foundation, in its press release announcing the grant, held out the promise that AI could reduce costs on the business side of news organizations — a crucial goal given how financially strapped most of them are. The $3 million will go to The Associated Press, Columbia University, the NYC Media Lab and the Partnership on AI. Under the terms of the grant, the four organizations will work together on projects such as training local journalists, developing revenue strategies and studying the ethical use of AI. It all sounds eminently worthy.

But there are always unintended consequences. The highly skilled people whom I used to pay to transcribe my interviews no longer have those jobs. High school students who might have gotten an opportunity to write up the exploits of their sports teams for a few bucks have been deprived of a chance at an early connection with news — an experience that might have turned them into paying customers or even journalists when they got older.

And local news, much of which is already produced at distant outposts, some of them overseas, is about to become that much more impersonal and removed from the communities it serves.

It’s no surprise that Google Podcasts includes hateful content

I think there’s something of a category error in today’s front-page New York Times story on the hateful and false content you can find on Google Podcasts. Reporter Reggie Ugwu repeats on several occasions that Google Podcasts includes some pretty terrible stuff from neo-Nazis, white supremacists and conspiracy theorists that you won’t find at Google’s competitors. He writes:

Google Podcasts — whose app has been downloaded more than 19 million times, according to Apptopia — stands alone among major platforms in its tolerance of hate speech and other extremist content. A recent nonexhaustive search turned up more than two dozen podcasts from white supremacists and pro-Nazi groups, offering a buffet of slurs and conspiracy theories. None of the podcasts appeared on Apple Podcasts, Spotify or Stitcher.

The problem here is that Apple, Spotify and Stitcher are all trying to offer a curated experience. Google’s DNA is in search. If you Google “InfoWars,” you expect to be taken to Alex Jones’ hallucinatory home of hate and disinformation. And you are. So if you search Google Podcasts, why should that be any different? Indeed, that’s exactly the reasoning Google invoked when Ugwu contacted the company for comment:

Told of the white supremacist and pro-Nazi content on its platform and asked about its policy, a Google spokeswoman, Charity Mhende, compared Google Podcasts to Google Search. She said that the company did not want to “limit what people are able to find,” and that it only blocks content “in rare circumstances, largely guided by local law.”

Let me be clear. It doesn’t have to be this way. Google could choose to keep its searches wide open while providing users of Google Podcasts with the same safe experience that its competitors offer. And maybe it should. It’s just that I find it unremarkable that a search company would run its business differently from those whose business model is based on creating a safe, walled-in environment.

I’m hardly a Google fanboy. I’d like to see it broken up so that it can no longer use search to leverage its advertising business to the disadvantage of publishers. But unless you think it ought to stop showing hate-filled websites when you search for them, then I don’t think you should be surprised that it also shows you hate-filled podcasts.

Facebook could have made itself less toxic. It chose profit and Trump instead.

Locked down following the Jan. 6 insurrection. Photo (cc) 2021 by Geoff Livingston.

Previously published at GBH News.

Working for Facebook can be pretty lucrative. According to PayScale, the average salary of a Facebook employee is $123,000, with senior software engineers earning more than $200,000. Even better, the job is pandemic-proof. Traffic soared during the early months of COVID (though advertising was down), and the service attracted nearly 2.8 billion active monthly users worldwide during the fourth quarter of 2020.

So employees are understandably reluctant to demand change from their maximum leader, the now-36-year-old Mark Zuckerberg, the man-child who has led them to their promised land.

For instance, last fall Facebook tweaked its algorithm so that users were more likely to see reliable news rather than hyperpartisan propaganda in advance of the election — a very small step in the right direction. Afterwards, some employees thought Facebook ought to do the civic-minded thing and make the change permanent. Management’s answer: Well, no, the change cost us money, so it’s time to resume business as usual. And thus it was.

Joaquin Quiñonero Candela is what you might call an extreme example of this go-along mentality. Quiñonero is the principal subject of a remarkable 6,700-word story in the current issue of Technology Review, published by MIT. As depicted by reporter Karen Hao, Quiñonero is extreme not in the sense that he’s a true believer or a bad actor or anything like that. Quite the contrary; he seems like a pretty nice guy, and the story is festooned with pictures of him outside his home in the San Francisco area, where he lives with his wife and three children, engaged in homey activities like feeding his chickens and, well, checking his phone. (It’s Zuck!)

What’s extreme, rather, is the amount of damage Quiñonero can do. He is the director of artificial intelligence for Facebook, a leading AI scientist who is universally respected for his brilliance, and the keeper of Facebook’s algorithm. He is also the head of an internal initiative called Responsible AI.

Now, you might think that the job of Responsible AI would be to find ways to make Facebook’s algorithm less harmful without chipping away too much at Zuckerberg’s net worth, estimated recently at $97 billion. But no. The way Hao tells it, Quiñonero’s shop was diverted almost from the beginning from its mission of tamping down extremist and false information so that it could take on a more politically important task: making sure that right-wing content kept popping up in users’ news feeds in order to placate Donald Trump, who falsely claimed that Facebook was biased against conservatives.

How pernicious was this? According to Hao, Facebook developed a model called the “Fairness Flow,” among whose principles was that liberal and conservative content should not be treated equally if liberal content was more factual and conservative content promoted falsehoods — which is in fact the case much of the time. But Facebook executives were having none of it, deciding for purely political reasons that the algorithm should result in equal outcomes for liberal and conservative content regardless of truthfulness. Hao writes:

“They took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. ‘There’s no point, then,’ the researcher says. A model modified in that way ‘would have literally no impact on the actual problem’ of misinformation.”

Hao ranges across the hellscape of Facebook’s wreckage, from the Cambridge Analytica scandal to the amplification of a genocidal campaign against Muslims in Myanmar to the boosting of content that could worsen depression and thus lead to suicide. What she shows over and over again is not that Facebook is oblivious to these problems; in fact, it recently banned a number of QAnon, anti-vaccine and Holocaust-denial groups. Rather, in every case, the company is slow to act, placing growth, engagement and, thus, revenue ahead of social responsibility.

It is fair to ask what Facebook’s role is in our current civic crisis, with a sizable minority of the public in thrall to Trump, disdaining vaccines and obsessing over trivia like Dr. Seuss and so-called cancel culture. Isn’t Fox News more to blame than Facebook? Aren’t the falsehoods spouted every night by Tucker Carlson, Sean Hannity and Laura Ingraham ultimately more dangerous than a social network that merely reflects what we’re already interested in?

The obvious answer, I think, is that there’s a synergistic effect between the two. The propaganda comes from Fox and its ilk and moves to Facebook, where it gets distributed and amplified. That, in turn, creates more demand for outrageous content from Fox and, occasionally, fuels the growth of even more extreme outlets like Newsmax and OAN. Dangerous as the Fox effect may be, Facebook makes it worse.

Hao’s final interview with Quiñonero came after the deadly insurrection of Jan. 6. I’m not going to spoil it for you, because it’s a really fine piece of writing, and quoting a few bits wouldn’t do it justice. But Quiñonero comes across as someone who knows, deep in his heart, that he could have played a role in preventing what happened but chose not to act.

It’s devastating — and something for him to think about as he ponders life in his nice home, with his family and his chickens, which are now coming home to roost.

Amazon outrage of the week

From The Washington Post’s Geoffrey Fowler:

The tech giant … won’t sell downloadable versions of its more than 10,000 e-books or tens of thousands of audiobooks to libraries. That’s right, for a decade, the company that killed bookstores has been starving the reading institution that cares for kids, the needy and the curious. And that’s turned into a mission-critical problem during a pandemic that cut off physical access to libraries and left a lot of people unable to afford books on their own.

And good for the Post, which, as we all know, is owned by Amazon founder Jeff Bezos.

Thinking through a social-contract framework for reforming Section 230

Mary Anne Franks. Photo (cc) 2014 by the Internet Education Foundation.

The Lawfare podcasts are doing an excellent job of making sense of complicated issues involving media and technology. Last week I recommended a discussion of Australia’s new law mandating that Facebook and Google pay for news. Today I want to tell you about an interview with Mary Anne Franks, a law professor at the University of Miami, who is calling for the reform of Section 230 of the Communications Decency Act.

The host, Alan Rozenshtein, guides Franks through a paper she’s written titled “Section 230 and the Anti-Social Contract,” which, as he points out, is short and highly readable. Franks’ overriding argument is that Section 230 — which protects internet services, including platform companies such as Facebook and Twitter, from being sued for what their users post — is a way of entrenching the traditional white male power structure.

That might strike you as a bit much, and, as you’ll hear, Rozenshtein challenges her on it, pointing out that some members of disenfranchised communities have been adamant about retaining Section 230 in order to protect their free-speech rights. Nevertheless, her thesis is elegant, encompassing everyone from Thomas Jefferson to John Perry Barlow, the author of the 1996 document “A Declaration of the Independence of Cyberspace,” of which she takes a dim view. Franks writes:

Section 230 serves as an anti-social contract, replicating and perpetuating long-standing inequalities of gender, race, and class. The power that tech platforms have over individuals can be legitimized only by rejecting the fraudulent contract of Section 230 and instituting principles of consent, reciprocity, and collective responsibility.

So what is to be done? Franks pushes back on Rozenshtein’s suggestion that Section 230 reform has attracted bipartisan support. Republicans such as Donald Trump and Sen. Josh Hawley, she notes, are talking about changes that would force the platforms to publish content whether they want to or not — a nonstarter, since that would be a violation of the First Amendment.

Democrats, on the other hand, are seeking to find ways of limiting the Section 230 protections that the platform companies now enjoy without tearing down the entire law. Again, she writes:

Specifically, a true social contract would require tech platforms to offer transparent and comprehensive information about their products so that individuals can make informed choices about whether to use them. It would also require tech companies to be held accountable for foreseeable harms arising from the use of their platforms and services, instead of being granted preemptive immunity for ignoring or profiting from those harms. Online intermediaries must be held to similar standards as other private businesses, including duty of care and other collective responsibility principles.

Putting a little more meat on the bones, Franks adds that Section 230 should be reformed so as to “deny immunity to any online intermediary that exhibits deliberate indifference to harmful conduct.”

Today’s New York Times offers some details as to what that might look like:

One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.

Since its passage in 1996, Section 230 has been an incredible boon to internet publishers that open their gates to third-party content. They’re under no obligation to take down material that is libelous or threatening. Quite the contrary — they can make money from it.

This is hardly what the First Amendment envisioned, since publishers in other spheres are legally responsible for every bit of content they put before their audiences, up to and including advertisements and letters to the editor. The internet as we know it would be an impossibility if Section 230 didn’t exist in some form. But it may be time to rein it in, and Franks has put forth a valuable framework for how we might think about that.

Become a member of Media Nation today.