I think there’s something of a category error in today’s front-page New York Times story on the hateful and false content you can find on Google Podcasts. Reporter Reggie Ugwu repeats on several occasions that Google Podcasts includes some pretty terrible stuff from neo-Nazis, white supremacists and conspiracy theorists that you won’t find at Google’s competitors. He writes:
Google Podcasts — whose app has been downloaded more than 19 million times, according to Apptopia — stands alone among major platforms in its tolerance of hate speech and other extremist content. A recent nonexhaustive search turned up more than two dozen podcasts from white supremacists and pro-Nazi groups, offering a buffet of slurs and conspiracy theories. None of the podcasts appeared on Apple Podcasts, Spotify or Stitcher.
The problem here is that Apple, Spotify and Stitcher are all trying to offer a curated experience. Google’s DNA is in search. If you Google “InfoWars,” you expect to be taken to Alex Jones’ hallucinatory home of hate and disinformation. And you are. So if you search Google Podcasts, why should that be any different? Indeed, that’s exactly the reasoning Google invoked when Ugwu contacted them for comment:
Told of the white supremacist and pro-Nazi content on its platform and asked about its policy, a Google spokeswoman, Charity Mhende, compared Google Podcasts to Google Search. She said that the company did not want to “limit what people are able to find,” and that it only blocks content “in rare circumstances, largely guided by local law.”
Let me be clear. It doesn’t have to be this way. Google could choose to keep its searches wide open while providing users of Google Podcasts with the same safe experience that its competitors offer. And maybe it should. It’s just that I find it unremarkable that a search company would run its business differently from those whose business model is based on creating a safe, walled-in environment.
I’m hardly a Google fanboy. I’d like to see it broken up so that it can no longer use search to leverage its advertising business to the disadvantage of publishers. But unless you think it ought to stop showing hate-filled websites when you search for them, then I don’t think you should be surprised that it also shows you hate-filled podcasts.
Working for Facebook can be pretty lucrative. According to PayScale, the average salary of a Facebook employee is $123,000, with senior software engineers earning more than $200,000. Even better, the job is pandemic-proof. Traffic soared during the early months of COVID (though advertising was down), and the service attracted nearly 2.8 billion monthly active users worldwide during the fourth quarter of 2020.
So employees are understandably reluctant to demand change from their maximum leader, the now-36-year-old Mark Zuckerberg, the man-child who has led them to their promised land.
For instance, last fall Facebook tweaked its algorithm so that users were more likely to see reliable news rather than hyperpartisan propaganda in advance of the election — a very small step in the right direction. Afterwards, some employees thought Facebook ought to do the civic-minded thing and make the change permanent. Management’s answer: Well, no, the change cost us money, so it’s time to resume business as usual. And thus it was.
Joaquin Quiñonero Candela is what you might call an extreme example of this go-along mentality. Quiñonero is the principal subject of a remarkable 6,700-word story in the current issue of Technology Review, published by MIT. As depicted by reporter Karen Hao, Quiñonero is extreme not in the sense that he’s a true believer or a bad actor or anything like that. Quite the contrary; he seems like a pretty nice guy, and the story is festooned with pictures of him outside his home in the San Francisco area, where he lives with his wife and three children, engaged in homey activities like feeding his chickens and, well, checking his phone. (It’s Zuck!)
What’s extreme, rather, is the amount of damage Quiñonero can do. He is the director of artificial intelligence for Facebook, a leading AI scientist who is universally respected for his brilliance, and the keeper of Facebook’s algorithm. He is also the head of an internal initiative called Responsible AI.
Now, you might think that the job of Responsible AI would be to find ways to make Facebook's algorithm less harmful without chipping away too much at Zuckerberg's net worth, estimated recently at $97 billion. But no. The way Hao tells it, almost from the beginning Quiñonero's shop was diverted from its mission of tamping down extremist and false information so that it could take on a more politically important task: making sure that right-wing content kept popping up in users' news feeds in order to placate Donald Trump, who falsely claimed that Facebook was biased against conservatives.
How pernicious was this? According to Hao, Facebook developed a model called the “Fairness Flow,” among whose principles was that liberal and conservative content should not be treated equally if liberal content was more factual and conservative content promoted falsehoods — which is in fact the case much of the time. But Facebook executives were having none of it, deciding for purely political reasons that the algorithm should result in equal outcomes for liberal and conservative content regardless of truthfulness. Hao writes:
“They took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. ‘There’s no point, then,’ the researcher says. A model modified in that way ‘would have literally no impact on the actual problem’ of misinformation.”
Hao ranges across the hellscape of Facebook’s wreckage, from the Cambridge Analytica scandal to amplifying a genocidal campaign against Muslims in Myanmar to boosting content that could worsen depression and thus lead to suicide. What she shows over and over again is not that Facebook is oblivious to these problems; in fact, it recently banned a number of QAnon, anti-vaccine and Holocaust-denial groups. But, in every case, it is slow to act, placing growth, engagement and, thus, revenue ahead of social responsibility.
It is fair to ask what Facebook’s role is in our current civic crisis, with a sizable minority of the public in thrall to Trump, disdaining vaccines and obsessing over trivia like Dr. Seuss and so-called cancel culture. Isn’t Fox News more to blame than Facebook? Aren’t the falsehoods spouted every night by Tucker Carlson, Sean Hannity and Laura Ingraham ultimately more dangerous than a social network that merely reflects what we’re already interested in?
The obvious answer, I think, is that there’s a synergistic effect between the two. The propaganda comes from Fox and its ilk and moves to Facebook, where it gets distributed and amplified. That, in turn, creates more demand for outrageous content from Fox and, occasionally, fuels the growth of even more extreme outlets like Newsmax and OAN. Dangerous as the Fox effect may be, Facebook makes it worse.
Hao’s final interview with Quiñonero came after the deadly insurrection of Jan. 6. I’m not going to spoil it for you, because it’s a really fine piece of writing, and quoting a few bits wouldn’t do it justice. But Quiñonero comes across as someone who knows, deep in his heart, that he could have played a role in preventing what happened but chose not to act.
It’s devastating — and something for him to think about as he ponders life in his nice home, with his family and his chickens, which are now coming home to roost.
The tech giant … won’t sell downloadable versions of its more than 10,000 e-books or tens of thousands of audiobooks to libraries. That’s right, for a decade, the company that killed bookstores has been starving the reading institution that cares for kids, the needy and the curious. And that’s turned into a mission-critical problem during a pandemic that cut off physical access to libraries and left a lot of people unable to afford books on their own.
And good for the Post, which, as we all know, is owned by Amazon founder Jeff Bezos.
The Lawfare podcasts are doing an excellent job of making sense of complicated media and technology issues. Last week I recommended a discussion of Australia's new law mandating that Facebook and Google pay for news. Today I want to tell you about an interview with Mary Anne Franks, a law professor at the University of Miami, who is calling for the reform of Section 230 of the Communications Decency Act.
The host, Alan Rozenshtein, guides Franks through a paper she’s written titled “Section 230 and the Anti-Social Contract,” which, as he points out, is short and highly readable. Franks’ overriding argument is that Section 230 — which protects internet services, including platform companies such as Facebook and Twitter, from being sued for what their users post — is a way of entrenching the traditional white male power structure.
That might strike you as a bit much, and, as you’ll hear, Rozenshtein challenges her on it, pointing out that some members of disenfranchised communities have been adamant about retaining Section 230 in order to protect their free-speech rights. Nevertheless, her thesis is elegant, encompassing everyone from Thomas Jefferson to John Perry Barlow, the author of the 1996 document “A Declaration of the Independence of Cyberspace,” of which she takes a dim view. Franks writes:
Section 230 serves as an anti-social contract, replicating and perpetuating long-standing inequalities of gender, race, and class. The power that tech platforms have over individuals can be legitimized only by rejecting the fraudulent contract of Section 230 and instituting principles of consent, reciprocity, and collective responsibility.
So what is to be done? Franks pushes back on Rozenshtein’s suggestion that Section 230 reform has attracted bipartisan support. Republicans such as Donald Trump and Sen. Josh Hawley, she notes, are talking about changes that would force the platforms to publish content whether they want to or not — a nonstarter, since that would be a violation of the First Amendment.
Democrats, on the other hand, are seeking to find ways of limiting the Section 230 protections that the platform companies now enjoy without tearing down the entire law. Again, she writes:
Specifically, a true social contract would require tech platforms to offer transparent and comprehensive information about their products so that individuals can make informed choices about whether to use them. It would also require tech companies to be held accountable for foreseeable harms arising from the use of their platforms and services, instead of being granted preemptive immunity for ignoring or profiting from those harms. Online intermediaries must be held to similar standards as other private businesses, including duty of care and other collective responsibility principles.
Putting a little more meat on the bones, Franks adds that Section 230 should be reformed so as to “deny immunity to any online intermediary that exhibits deliberate indifference to harmful conduct.”
One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.
Since its passage in 1996, Section 230 has been an incredible boon to internet publishers who open their gates to third-party content. They're under no obligation to take down material that is libelous or threatening. Quite the contrary: they can make money from it.
This is hardly what the First Amendment envisioned, since publishers in other spheres are legally responsible for every bit of content they put before their audiences, up to and including advertisements and letters to the editor. The internet as we know it would be an impossibility if Section 230 didn’t exist in some form. But it may be time to rein it in, and Franks has put forth a valuable framework for how we might think about that.
If you get a chance, you should listen to this Lawfare podcast featuring Rasmus Kleis Nielsen, director of the Reuters Institute and professor of political communication at the University of Oxford.
Nielsen covers a lot of ground, but the most interesting part comes toward the end, when he discusses Australia’s new law that (to way oversimplify) requires Facebook and Google to pay for news.
What makes this worthwhile is Nielsen’s calm rationality. For instance, he pronounces the Australian law a success if success is defined as extracting revenue from Big Tech and giving it to large incumbent news organizations. That’s not necessarily a bad thing, since those news orgs are where the social media giants have been getting a lot of their content.
But Nielsen says we should look at other definitions of success, too — such as finding ways for Google and Facebook to support local and nonprofit news organizations as well as those that serve undercovered communities.
And thanks to Heidi Legg for calling this to my attention.
Could Australian-style rules to force Google and Facebook to pay for news be coming to the United States?
U.S. Rep. David Cicilline, D-R.I., told the CNN program “Reliable Sources” over the weekend that the House will soon take up legislation that would give news publishers an antitrust exemption allowing them to bargain collectively with the Big Tech platforms. The purpose would be negotiating a compensation system.
“Local news is on life support in this country,” said Cicilline, who chairs the House Judiciary Antitrust Subcommittee. “The monopoly power of these two platforms is resulting in a significant decline in local journalism.”
More broadly, he said his committee will also take up parts of a 450-page report, compiled over 16 months, to rein in the power of the giant platforms. He told host Brian Stelter that many of the recommendations in the report have bipartisan support and are aimed at breaking up the tech companies’ monopoly power.
The most intriguing of those ideas, according to a recent story by Cat Zakrzewski in The Washington Post, involves “interoperability and data portability, which would make it easier for consumers to move their data to new or competing tech services.”
Facebook has massive market dominance, and it would be difficult for a competitor to get a toehold in any case. But it would be at least somewhat more feasible if users could easily transfer all their data over to a new service and delete it from Facebook, something that is almost impossible to do at the moment.
Regardless of what happens, it seems that Google and Facebook may soon no longer be able to operate with impunity. I’m far from certain that the Australian system is the best way to go given that it privileges entrenched publishers like Rupert Murdoch. But the idea that the platforms should pay something for what they use is long overdue.
Regardless of what really happened, this had the appearance of pure extortion.
In response to Australia's new law requiring Google and Facebook to hold negotiations aimed at compensating news publishers for their content, Facebook took down not just news — which would be a proportionate response, I suppose — but all kinds of information.
The newly banned Facebook content comprises, as The Washington Post reports, “dozens of government and charity websites as well, including public health sites containing critical information about the pandemic during the first week of its coronavirus vaccine rollout.”
The information was restored about 12 hours later, and Facebook claimed it was all a mistake. Still, it was a powerful demonstration of what Mark Zuckerberg can do if you refuse to kiss the ring.
I’m hardly the first person to make this observation, but there’s a reason that Google is trying to accommodate Australian news publishers while Facebook is fighting them tooth and nail: Google needs news much more than Facebook does. The New York Times puts it this way:
Facebook and Google ultimately value news differently. Google’s mission statement has long been to organize the world’s information, an ambition that is not achievable without up-to-the-minute news. For Facebook, news is not as central. Instead, the company positions itself as a network of users coming together to share photos, political views, internet memes, videos — and, on occasion, news articles.
While I have no problem with publishers trying to extract some revenues from the two tech giants, I’m disheartened to see that Google is trying to buy its way out of trouble in Australia by cutting deals with the likes of Rupert Murdoch. This shouldn’t be a matter of buying off critics and then resuming business as usual.
That’s why I prefer an idea put forth by the tech analyst Benedict Evans in a conversation with Ingram: help fund news by taxing Google and Facebook. At least theoretically, that could lead to a more equitable distribution of revenues to large and small publishers alike.
Regardless of what the road ahead looks like, though, it’s clear that Facebook is going to be harder to deal with than Google. The Zuckerborg just doesn’t need journalism as much.
The Overton Window has opened a bit wider for the idea of requiring Google and Facebook to pay for news content. At Axios, Sara Fischer reports that Microsoft president Brad Smith has endorsed the Australian government’s move to do just that — and thinks such a system ought to be considered in the U.S. as well.
What’s taking place in Australia is complicated, but essentially it requires Google and Facebook to bargain with the news business and come up with a compensation system. Both companies have said they would stop offering some of their services if Australian authorities don’t back off.
In the U.S., the News Media Alliance, a lobbying group for news publishers, has been pushing for several years for an antitrust exemption that would give publishers the right to bargain collectively with the tech giants — which is exactly what is going to happen in Australia. With the sheen wearing off Big Tech's once-sterling image, the likelihood of Congress passing such an exemption has increased. A lawsuit brought by a group of West Virginia newspapers that I wrote about for GBH News last week may serve as a further goad.
In a blog post, Microsoft’s Smith cites a News Media Alliance study showing that Google makes an estimated $4.7 billion a year “from crawling and scraping news publishers’ content.” That study came under fire at the time of its release a couple of years ago. But regardless of the actual figure, Google — and Facebook — are surely making a lot of money from other people’s content without paying for any of it.
Smith makes no bones about his own business imperatives, saying that Microsoft is prepared to play by Australia’s rules through its Bing search engine, writing:
Microsoft’s Bing search service has less than 5% market share in Australia, substantially smaller than the 15-20% market share that we have across PC and mobile searches in the United States and the 10-15% share we have in Canada and the United Kingdom. But, with a realistic prospect of gaining usage share, we are confident we can build the service Australians want and need. And, unlike Google, if we can grow, we are prepared to sign up for the new law’s obligations, including sharing revenue as proposed with news organizations. The key would be to create a more competitive market, something the government can facilitate. But, as we made clear, we are comfortable running a high-quality search service at lower economic margins than Google and with more economic returns for the press.
A final thought. If Congress isn’t prepared to act, might it be possible to require Google and Facebook to compensate news publishers at the state level? Jack Nicas reports in today’s New York Times that a proposal has been made in North Dakota to forbid Apple and Google from collecting app-store fees from North Dakota-based businesses.
The legislation strikes me as more than a little half-baked. Yet the principle — that states can impose their own regulations on Big Tech — is one worth pondering.
Massachusetts Republican gadfly Shiva Ayyadurai has been banned from Twitter, most likely for claiming that he’d lost his most recent race for the U.S. Senate only because Secretary of State Bill Galvin’s office destroyed a million electronic ballots. Adam Gaffin of Universal Hub has the details.
In 2018, I gave the City of Cambridge a GBH News New England Muzzle Award for ordering Ayyadurai to dismantle a wildly offensive sign on his company's Cambridge property that criticized Democratic Sen. Elizabeth Warren. City officials told him that the sign, which read "Only a REAL INDIAN Can Defeat the Fake Indian," violated the city's building code.
Ayyadurai threatened to sue, which led the city to back off.