You may have heard that the algorithms used by Facebook and other social media platforms are racially biased. I ran into a small but interesting example of that earlier today.
My previous post is about a webinar on news co-ops that I attended last week. I used a photo of Kevon Paynter, co-founder of Bloc by Block News, as the lead art and a photo of Jasper Wang, co-founder of The Defector, well down in the piece.
But when I posted links on Facebook, Twitter and LinkedIn, all three of them automatically grabbed the photo of Wang as the image that would go with the link. For example, here’s how it appeared on Twitter.
New at Media Nation: Are cooperatively owned news projects an idea whose time has finally come? https://t.co/ZqfzNr2dzA
I don’t know what happened. Paynter was more central to what I was writing, which is why I led with his photo. Paynter is Black; Wang is of Asian descent. There’s more contrast in the image of Wang, which may be why the algorithms identified it as a superior picture. But in so doing they ignored my choice of Paynter as the lead.
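For what it's worth, platforms typically build these link previews from a page's Open Graph metadata: if the page declares an `og:image` tag, crawlers generally use that picture, and absent one they guess from the images on the page. Here is a minimal sketch, with invented markup and an invented URL, of how a preview crawler might pull that tag out:

```python
from html.parser import HTMLParser

class OGImageParser(HTMLParser):
    """Collect og:image URLs the way a link-preview crawler might."""

    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        # Open Graph image tags look like:
        # <meta property="og:image" content="...">
        if tag == "meta":
            d = dict(attrs)
            if d.get("property") == "og:image" and d.get("content"):
                self.images.append(d["content"])

# Hypothetical page source for illustration.
html = """
<html><head>
<meta property="og:image" content="https://example.com/paynter.jpg">
</head><body>...</body></html>
"""

parser = OGImageParser()
parser.feed(html)
print(parser.images[0])  # prints "https://example.com/paynter.jpg"
```

If a post declares no `og:image` at all, the platform's own heuristics take over, which is presumably where the "more contrast" guesswork comes in.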
I’ll admit that I was more than a little skeptical when the Knight Foundation announced last week that it would award $3 million in grants to help local news organizations use artificial intelligence. My first reaction was that dousing the cash with gasoline and tossing a match would be just as effective.
But then I started thinking about how AI has enhanced my own work as a journalist. For instance, just a few years ago I had two unappetizing choices after I recorded an interview: transcribing it myself or sending it out to an actual human being to do the work at considerable expense. Now I use an automated system, based on AI, that does a decent job at a fraction of the cost.
Or consider Google, whose search engine makes use of AI. At one time, I’d have to travel to Beacon Hill if I wanted to look up state and local campaign finance records — and then pore through them by hand, taking notes or making photocopies as long as the quarters held out. These days I can search for “Massachusetts campaign finance reports” and have what I need in a few seconds.
Given that local journalism is in crisis, what’s not to like about the idea of helping community news organizations develop the tools they need to automate more of what they do?
Well, a few things, in fact.
Foremost among the downsides is the use of AI to produce robot-written news stories. Such a system has been in use at The Washington Post for several years to produce reports about high school football. Input a box score and out comes a story that looks more or less like an actual person wrote it. Some news organizations are doing the same with financial data. It sounds innocuous enough given that much of this work would probably go undone if it couldn’t be automated. But let’s curb our enthusiasm.
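The box-score-to-story pipeline is, at bottom, template filling. Here's a toy sketch in that spirit (the teams, scores and phrasing are invented; real systems such as the Post's are far more elaborate):

```python
# A minimal sketch of template-based "robot journalism": turn a box score
# into a recap sentence. All names and numbers here are invented.

def game_recap(home, away, home_score, away_score):
    if home_score == away_score:
        return f"{home} and {away} played to a {home_score}-{home_score} tie."
    winner, loser = (home, away) if home_score > away_score else (away, home)
    hi, lo = max(home_score, away_score), min(home_score, away_score)
    margin = hi - lo
    # Vary the verb by margin so the output reads less mechanical.
    verb = "edged" if margin <= 3 else "beat" if margin <= 14 else "routed"
    return f"{winner} {verb} {loser}, {hi}-{lo}."

print(game_recap("Central", "Northern", 28, 7))  # Central routed Northern, 28-7.
```

Feed it a season's worth of box scores and out comes a season's worth of passable copy, which is exactly why publishers find it so tempting.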
Patrick White, a journalism professor at the University of Quebec in Montreal, sounded this unrealistically hopeful note in a piece for The Conversation about a year ago: “Artificial intelligence is not there to replace journalists or eliminate jobs.” According to one estimate cited by White, AI would have only a minimal effect on newsroom employment and would “reorient editors and journalists towards value-added content: long-form journalism, feature interviews, analysis, data-driven journalism and investigative journalism.”
Uh, Professor White, let me introduce you to the two most bottom-line-obsessed newspaper publishers in the United States — Alden Global Capital and Gannett. If they could, they’d unleash the algorithms to cover everything up to and including city council meetings, mayoral speeches and development proposals. And if they could figure out how to program the robots to write human-interest stories and investigative reports, well, they’d do that too.
Another danger AI poses is that it can track scrolling and clicking patterns to personalize a news report. Over time, for instance, your Boston Globe would look different from mine. Remember the “Daily Me,” an early experiment in individualized news popularized by MIT Media Lab founder Nicholas Negroponte? That didn’t quite come to pass. But it’s becoming increasingly feasible, and it represents one more step away from a common culture and a common set of facts, potentially adding another layer to the polarization that’s tearing us apart.
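The mechanics of that personalization are simple enough to sketch. A toy version, with invented headlines and topics, might just rank stories by how often a reader has clicked each topic before:

```python
from collections import Counter

# A toy sketch of click-driven personalization: rank stories by how often
# the reader has clicked each topic in the past. Headlines are invented.

def personalized_front_page(stories, click_history):
    clicks = Counter(click_history)  # topic -> past click count
    return sorted(stories, key=lambda s: clicks[s["topic"]], reverse=True)

stories = [
    {"headline": "City council weighs zoning change", "topic": "local"},
    {"headline": "Red Sox win opener", "topic": "sports"},
    {"headline": "Statehouse budget fight", "topic": "politics"},
]

# Two readers see the same stories in a different order.
sports_fan = personalized_front_page(stories, ["sports", "sports", "local"])
print([s["topic"] for s in sports_fan])  # ['sports', 'local', 'politics']
```

Even this crude version shows the problem: the more you click, the more the front page narrows toward what you already read, and two subscribers end up with two different papers.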
“Personalization of news … puts the public record at risk,” according to a report published in 2017 by Columbia’s Tow Center for Digital Journalism. “When everyone sees a different version of a story, there is no authoritative version to cite. The internet has also made it possible to remove content from the web, which may not be archived anywhere. There is no guarantee that what you see will be what everyone sees — or that it will be there in the future.”
Of course, AI has also made journalism better — and not just for transcribing interviews or Googling public records. As the Tow Center report also points out, AI makes it possible for investigative reporters to sift through thousands of records to find patterns, instances of wrongdoing or trends.
The Knight Foundation, in its press release announcing the grant, held out the promise that AI could reduce costs on the business side of news organizations — a crucial goal given how financially strapped most of them are. The $3 million will go to The Associated Press, Columbia University, the NYC Media Lab and the Partnership on AI. Under the terms of the grant, the four organizations will work together on projects such as training local journalists, developing revenue strategies and studying the ethical use of AI. It all sounds eminently worthy.
But there are always unintended consequences. The highly skilled people whom I used to pay to transcribe my interviews no longer have those jobs. High school students who might have gotten an opportunity to write up the exploits of their sports teams for a few bucks have been deprived of a chance at an early connection with news — an experience that might have turned them into paying customers or even journalists when they got older.
And local news, much of which is already produced at distant outposts, some of them overseas, is about to become that much more impersonal and removed from the communities it serves.
I think there’s something of a category error in today’s front-page New York Times story on the hateful and false content you can find on Google Podcasts. Reporter Reggie Ugwu repeats on several occasions that Google Podcasts includes some pretty terrible stuff from neo-Nazis, white supremacists and conspiracy theorists that you won’t find at Google’s competitors. He writes:
Google Podcasts — whose app has been downloaded more than 19 million times, according to Apptopia — stands alone among major platforms in its tolerance of hate speech and other extremist content. A recent nonexhaustive search turned up more than two dozen podcasts from white supremacists and pro-Nazi groups, offering a buffet of slurs and conspiracy theories. None of the podcasts appeared on Apple Podcasts, Spotify or Stitcher.
The problem here is that Apple, Spotify and Stitcher are all trying to offer a curated experience. Google’s DNA is in search. If you Google “InfoWars,” you expect to be taken to Alex Jones’ hallucinatory home of hate and disinformation. And you are. So if you search Google Podcasts, why should that be any different? Indeed, that’s exactly the reasoning Google invoked when Ugwu contacted them for comment:
Told of the white supremacist and pro-Nazi content on its platform and asked about its policy, a Google spokeswoman, Charity Mhende, compared Google Podcasts to Google Search. She said that the company did not want to “limit what people are able to find,” and that it only blocks content “in rare circumstances, largely guided by local law.”
Let me be clear. It doesn’t have to be this way. Google could choose to keep its searches wide open while providing users of Google Podcasts with the same safe experience that its competitors offer. And maybe it should. It’s just that I find it unremarkable that a search company would run its business differently from those whose business model is based on creating a safe, walled-in environment.
I’m hardly a Google fanboy. I’d like to see it broken up so that it can no longer use search to leverage its advertising business to the disadvantage of publishers. But unless you think it ought to stop showing hate-filled websites when you search for them, then I don’t think you should be surprised that it also shows you hate-filled podcasts.
Working for Facebook can be pretty lucrative. According to PayScale, the average salary of a Facebook employee is $123,000, with senior software engineers earning more than $200,000. Even better, the job is pandemic-proof. Traffic soared during the early months of COVID (though advertising was down), and the service attracted nearly 2.8 billion active monthly users worldwide during the fourth quarter of 2020.
So employees are understandably reluctant to demand change from their maximum leader, the now-36-year-old Mark Zuckerberg, the man-child who has led them to their promised land.
For instance, last fall Facebook tweaked its algorithm so that users were more likely to see reliable news rather than hyperpartisan propaganda in advance of the election — a very small step in the right direction. Afterwards, some employees thought Facebook ought to do the civic-minded thing and make the change permanent. Management’s answer: Well, no, the change cost us money, so it’s time to resume business as usual. And thus it was.
Joaquin Quiñonero Candela is what you might call an extreme example of this go-along mentality. Quiñonero is the principal subject of a remarkable 6,700-word story in the current issue of Technology Review, published by MIT. As depicted by reporter Karen Hao, Quiñonero is extreme not in the sense that he’s a true believer or a bad actor or anything like that. Quite the contrary; he seems like a pretty nice guy, and the story is festooned with pictures of him outside his home in the San Francisco area, where he lives with his wife and three children, engaged in homey activities like feeding his chickens and, well, checking his phone. (It’s Zuck!)
What’s extreme, rather, is the amount of damage Quiñonero can do. He is the director of artificial intelligence for Facebook, a leading AI scientist who is universally respected for his brilliance, and the keeper of Facebook’s algorithm. He is also the head of an internal initiative called Responsible AI.
Now, you might think that the job of Responsible AI would be to find ways to make Facebook’s algorithm less harmful without chipping away too much at Zuckerberg’s net worth, estimated recently at $97 billion. But no. The way Hao tells it, Quiñonero’s shop was diverted almost from the beginning from its mission of tamping down extremist and false information so that it could take on a more politically important task: making sure that right-wing content kept popping up in users’ news feeds in order to placate Donald Trump, who falsely claimed that Facebook was biased against conservatives.
How pernicious was this? According to Hao, Facebook developed a model called the “Fairness Flow,” among whose principles was that liberal and conservative content should not be treated equally if liberal content was more factual and conservative content promoted falsehoods — which is in fact the case much of the time. But Facebook executives were having none of it, deciding for purely political reasons that the algorithm should result in equal outcomes for liberal and conservative content regardless of truthfulness. Hao writes:
“They took ‘fairness’ to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. ‘There’s no point, then,’ the researcher says. A model modified in that way ‘would have literally no impact on the actual problem’ of misinformation.”
Hao ranges across the hellscape of Facebook’s wreckage, from the Cambridge Analytica scandal to amplifying a genocidal campaign against Muslims in Myanmar to boosting content that could worsen depression and thus lead to suicide. What she shows over and over again is not that Facebook is oblivious to these problems; in fact, it recently banned a number of QAnon, anti-vaccine and Holocaust-denial groups. But, in every case, it is slow to act, placing growth, engagement and, thus, revenue ahead of social responsibility.
It is fair to ask what Facebook’s role is in our current civic crisis, with a sizable minority of the public in thrall to Trump, disdaining vaccines and obsessing over trivia like Dr. Seuss and so-called cancel culture. Isn’t Fox News more to blame than Facebook? Aren’t the falsehoods spouted every night by Tucker Carlson, Sean Hannity and Laura Ingraham ultimately more dangerous than a social network that merely reflects what we’re already interested in?
The obvious answer, I think, is that there’s a synergistic effect between the two. The propaganda comes from Fox and its ilk and moves to Facebook, where it gets distributed and amplified. That, in turn, creates more demand for outrageous content from Fox and, occasionally, fuels the growth of even more extreme outlets like Newsmax and OAN. Dangerous as the Fox effect may be, Facebook makes it worse.
Hao’s final interview with Quiñonero came after the deadly insurrection of Jan. 6. I’m not going to spoil it for you, because it’s a really fine piece of writing, and quoting a few bits wouldn’t do it justice. But Quiñonero comes across as someone who knows, deep in his heart, that he could have played a role in preventing what happened but chose not to act.
It’s devastating — and something for him to think about as he ponders life in his nice home, with his family and his chickens, which are now coming home to roost.
The tech giant … won’t sell downloadable versions of its more than 10,000 e-books or tens of thousands of audiobooks to libraries. That’s right, for a decade, the company that killed bookstores has been starving the reading institution that cares for kids, the needy and the curious. And that’s turned into a mission-critical problem during a pandemic that cut off physical access to libraries and left a lot of people unable to afford books on their own.
And good for the Post, which, as we all know, is owned by Amazon founder Jeff Bezos.
The Lawfare podcasts are doing an excellent job of making sense of complicated media-technical issues. Last week I recommended a discussion of Australia’s new law mandating that Facebook and Google pay for news. Today I want to tell you about an interview with Mary Anne Franks, a law professor at the University of Miami, who is calling for the reform of Section 230 of the Communications Decency Act.
The host, Alan Rozenshtein, guides Franks through a paper she’s written titled “Section 230 and the Anti-Social Contract,” which, as he points out, is short and highly readable. Franks’ overriding argument is that Section 230 — which protects internet services, including platform companies such as Facebook and Twitter, from being sued for what their users post — is a way of entrenching the traditional white male power structure.
That might strike you as a bit much, and, as you’ll hear, Rozenshtein challenges her on it, pointing out that some members of disenfranchised communities have been adamant about retaining Section 230 in order to protect their free-speech rights. Nevertheless, her thesis is elegant, encompassing everyone from Thomas Jefferson to John Perry Barlow, the author of the 1996 document “A Declaration of the Independence of Cyberspace,” of which she takes a dim view. Franks writes:
Section 230 serves as an anti-social contract, replicating and perpetuating long-standing inequalities of gender, race, and class. The power that tech platforms have over individuals can be legitimized only by rejecting the fraudulent contract of Section 230 and instituting principles of consent, reciprocity, and collective responsibility.
So what is to be done? Franks pushes back on Rozenshtein’s suggestion that Section 230 reform has attracted bipartisan support. Republicans such as Donald Trump and Sen. Josh Hawley, she notes, are talking about changes that would force the platforms to publish content whether they want to or not — a nonstarter, since that would be a violation of the First Amendment.
Democrats, on the other hand, are seeking to find ways of limiting the Section 230 protections that the platform companies now enjoy without tearing down the entire law. Again, she writes:
Specifically, a true social contract would require tech platforms to offer transparent and comprehensive information about their products so that individuals can make informed choices about whether to use them. It would also require tech companies to be held accountable for foreseeable harms arising from the use of their platforms and services, instead of being granted preemptive immunity for ignoring or profiting from those harms. Online intermediaries must be held to similar standards as other private businesses, including duty of care and other collective responsibility principles.
Putting a little more meat on the bones, Franks adds that Section 230 should be reformed so as to “deny immunity to any online intermediary that exhibits deliberate indifference to harmful conduct.”
One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.
Since its passage in 1996, Section 230 has been an incredible boon to internet publishers that open their gates to third-party content. They’re under no obligation to take down material that is libelous or threatening. Quite the contrary — they can make money from it.
This is hardly what the First Amendment envisioned, since publishers in other spheres are legally responsible for every bit of content they put before their audiences, up to and including advertisements and letters to the editor. The internet as we know it would be an impossibility if Section 230 didn’t exist in some form. But it may be time to rein it in, and Franks has put forth a valuable framework for how we might think about that.
If you get a chance, you should listen to this Lawfare podcast featuring Rasmus Kleis Nielsen, director of the Reuters Institute and professor of political communication at the University of Oxford.
Nielsen covers a lot of ground, but the most interesting part comes toward the end, when he discusses Australia’s new law that (to way oversimplify) requires Facebook and Google to pay for news.
What makes this worthwhile is Nielsen’s calm rationality. For instance, he pronounces the Australian law a success if success is defined as extracting revenue from Big Tech and giving it to large incumbent news organizations. That’s not necessarily a bad thing, since those news orgs are where the social media giants have been getting a lot of their content.
But Nielsen says we should look at other definitions of success, too — such as finding ways for Google and Facebook to support local and nonprofit news organizations as well as those that serve undercovered communities.
And thanks to Heidi Legg for calling this to my attention.
Could Australian-style rules to force Google and Facebook to pay for news be coming to the United States?
U.S. Rep. David Cicilline, D-R.I., told the CNN program “Reliable Sources” over the weekend that the House will soon take up legislation that would give news publishers an antitrust exemption allowing them to bargain collectively with the Big Tech platforms. The purpose would be to negotiate a compensation system.
“Local news is on life support in this country,” said Cicilline, who chairs the House Judiciary Antitrust Subcommittee. “The monopoly power of these two platforms is resulting in a significant decline in local journalism.”
More broadly, he said his committee will also take up parts of a 450-page report, compiled over 16 months, to rein in the power of the giant platforms. He told host Brian Stelter that many of the recommendations in the report have bipartisan support and are aimed at breaking up the tech companies’ monopoly power.
The most intriguing of those ideas, according to a recent story by Cat Zakrzewski in The Washington Post, involves “interoperability and data portability, which would make it easier for consumers to move their data to new or competing tech services.”
Facebook is so dominant that it would be difficult for a competitor to get a toehold in the market in any case. But it would be at least somewhat more feasible if users could easily transfer all their data over to a new service and delete it from Facebook, something that is almost impossible to do at the moment.
Regardless of what happens, it seems that Google and Facebook may soon no longer be able to operate with impunity. I’m far from certain that the Australian system is the best way to go given that it privileges entrenched publishers like Rupert Murdoch. But the idea that the platforms should pay something for what they use is long overdue.
Regardless of what really happened, this had the appearance of pure extortion.
In response to Australia’s new law requiring Google and Facebook to hold negotiations with news publishers aimed at compensating them for their content, Facebook took down not just news — which would be a proportionate response, I suppose — but all kinds of information.
The newly banned Facebook content comprises, as The Washington Post reports, “dozens of government and charity websites as well, including public health sites containing critical information about the pandemic during the first week of its coronavirus vaccine rollout.”
The information was restored about 12 hours later, and Facebook claimed it was all a mistake. Still, it was a powerful demonstration of what Mark Zuckerberg can do if you refuse to kiss the ring.
I’m hardly the first person to make this observation, but there’s a reason that Google is trying to accommodate Australian news publishers while Facebook is fighting them tooth and nail: Google needs news much more than Facebook does. The New York Times puts it this way:
Facebook and Google ultimately value news differently. Google’s mission statement has long been to organize the world’s information, an ambition that is not achievable without up-to-the-minute news. For Facebook, news is not as central. Instead, the company positions itself as a network of users coming together to share photos, political views, internet memes, videos — and, on occasion, news articles.
While I have no problem with publishers trying to extract some revenues from the two tech giants, I’m disheartened to see that Google is trying to buy its way out of trouble in Australia by cutting deals with the likes of Rupert Murdoch. This shouldn’t be a matter of buying off critics and then resuming business as usual.
That’s why I prefer an idea put forth by the tech analyst Benedict Evans in a conversation with Ingram: help fund news by taxing Google and Facebook. At least theoretically, that could lead to a more equitable distribution of revenues to large and small publishers alike.
Regardless of what the road ahead looks like, though, it’s clear that Facebook is going to be harder to deal with than Google. The Zuckerborg just doesn’t need journalism as much.