In a lawsuit against Meta, the state’s highest court will rule on the limits of Section 230

Attorney General Andrea Campbell. Photo (cc) 2022 by Dan Kennedy.

Section 230 of the Communications Decency Act of 1996 protects website owners from liability over third-party content. The classic example would be an anonymous commenter who libels someone. The offended party would be able to sue the commenter but not the publishing platform, although the platform might be required to turn over information that would help identify the commenter.

Sign up for free email delivery of Media Nation. You can also become a supporter for just $6 a month and receive a weekly newsletter with exclusive content.

But where is the line between passively hosting third-party content and actively promoting certain types of material in order to boost engagement and, thus, profitability? That question will go before the Massachusetts Supreme Judicial Court on Friday, reports Jennifer Smith of CommonWealth Beacon.

At issue is a lawsuit brought against Meta by 42 state attorneys general, including Andrea Campbell of Massachusetts. Meta operates Facebook, Instagram, Threads and other social media platforms, and it has long been criticized for using algorithms and other tactics that keep users hooked on content that, in some cases, provokes anger and depression, even suicide. Smith writes:

The Massachusetts complaint alleges that Meta violated state consumer protection law and created a public nuisance by deliberately designing Instagram with features like infinite scroll, autoplay, push notifications, and “like” buttons to addict young users, then falsely represented the platform’s safety to the public. The company has also been reckless with age verification, the AG argues, and allowed children under 13 years old to access its content.

Meta and its allies counter that Section 230 protects not just the third-party content they host but also how Facebook et al. display that content to their users.

In an accompanying opinion piece, attorney Megan Iorio of the Electronic Privacy Information Center, computer scientist Laura Edelson of Northeastern University and policy analyst Yaël Eisenstat of Cybersecurity for Democracy argue that Section 230 was not designed to protect website operators from putting their thumbs on the scales to favor one type of third-party content over another. As they put it in describing the amicus brief they have filed:

Our brief explains how the platform features at the heart of the Commonwealth’s case — things like infinite scroll, autoplay, the timing and batching of push notifications, and other tactics borrowed from the gambling industry — have nothing to do with content moderation; they are designed to elicit a behavior on the part of the user that furthers the company’s own business goals.

As Smith makes clear, this is a long and complex legal action, and the SJC is being asked to rule only on the narrow question of whether Campbell can move ahead with the lawsuit to which she has lent the state’s support. (Double disclosure: I am a member of CommonWealth Beacon’s editorial advisory board and, like Edelson, a Northeastern professor.)

I’ve long argued (as I did in this GBH News commentary from 2020) that, just as a matter of logic, favoring some types of content over others is a publishing activity that goes beyond the mere passive hosting of third-party content, and thus website operators should be liable for whatever harm those decisions create. That argument has not found much support in the courts, however. It will be interesting to see how this plays out.

How Margaret Sullivan’s slip of the tongue became (briefly) an AI-generated ‘fact’

Paul Krugman and Margaret Sullivan. Photo via Paul Krugman’s newsletter.

Media critic Margaret Sullivan made an error recently. No big deal — we all do it. But her account of what happened next is worth thinking about.

First, the error. Sullivan writes in her newsletter, American Crisis, that she recently appeared on economist Paul Krugman’s podcast and said that Los Angeles Times owner Patrick Soon-Shiong was among the billionaires who joined Donald Trump at his second inauguration earlier this year, along with the likes of Mark Zuckerberg, Jeff Bezos and Elon Musk. “I was wrong about that,” she notes, although she adds that Soon-Shiong “has been friendly to Trump in other ways.” Then she writes:

But — how’s this for a cautionary tale about the dubious accuracy of artificial intelligence? — a Google “AI overview,” in response to a search, almost immediately took my error and spread it around: “Yes, Dr. Patrick Soon-Shiong attended Donald Trump’s inauguration in 2025. He was seen there alongside other prominent figures like Mark Zuckerberg and Jeff Bezos.” It cited Krugman’s and my conversation. Again, I was wrong and I regret the error.

It does appear that the error was corrected fairly quickly. I asked Google this morning and got this from AI: “Patrick Soon-Shiong did not attend Donald Trump’s second inauguration. Earlier reports and AI overviews that claimed he did were based on an error by a journalist who later issued a correction.” It links to Sullivan’s newsletter.

Unlike Google, Claude makes no mention of Sullivan’s original mistake, concluding, accurately: “While the search results don’t show Patrick Soon-Shiong listed among the most prominent billionaires seated in the Capitol Rotunda (such as Musk, Bezos, Zuckerberg, and others who received extensive coverage), the evidence suggests he was engaged with the inauguration events and has maintained a relationship with Trump’s administration.”

And here’s the verdict from ChatGPT: “I found no credible public evidence that Patrick Soon-Shiong attended Donald Trump’s second inauguration.”

You might cite my findings as evidence that AI corrects mistakes quickly, and in this case it did. (By the way, the error has not yet been corrected at Krugman’s site.) But a less careful journalist than Sullivan might have let the original error hang out there, and it would soon have become part of the established record of who did and didn’t pay homage to Trump on that particular occasion.

In other words: always follow your queries back to the source.

Surveillance cameras in Brookline, Mass., raise serious questions about civil liberties

Photo (cc) 2014 by Jay Phagan.

The surveillance state has come to Brookline, Massachusetts. Sam Mintz reports for Brookline.News that Chestnut Hill Realty will set up license-plate readers on Independence Drive near Hancock Village, located in South Brookline, on the Boston border. The readers are made by Flock Safety, which is signing an agreement with the Brookline Police Department to use the data. The data will also be made available to Boston Police.

Two months ago I wrote about a campaign to keep Flock out of the affluent community of Scarsdale Village, New York. The story was covered by a local startup website, Scarsdale 10583, and after several months the contract was canceled in the face of rising opposition. Unfortunately, Scarsdale Village is the exception, as Flock Safety, a $7.5 billion company, has a presence in 5,000 communities in 49 states as well as a reputation for secretive dealings with local officials.

Adam Gaffin of Universal Hub reports that the state’s Supreme Judicial Court ruled in 2020 that automated license-plate readers are legal in Massachusetts. Gaffin also notes that, early this year, police in Johnson County, Texas, used data from 83,000 Flock cameras across the U.S. in a demented quest to track down a woman they wanted to arrest for a self-induced abortion. Presumably Texas authorities could plug into the Brookline network with Flock’s permission.

Mintz notes in his Brookline.News story that Flock recently opened an office in Boston and that its data has been used by police in dozens of Massachusetts communities. He also quotes Kade Crockford of the ACLU of Massachusetts as saying that though such uses of Flock data as identifying stolen cars or assisting with Amber Alerts aren’t a problem, “Unregulated, this technology facilitates the mass tracking of every single person’s movements on the road.”

The cameras could also be used by ICE in its out-of-control crackdown on undocumented (and, in some cases, documented) immigrants. This is just bad news all around; it’s hard to imagine that members of the public would support it if they knew about it.

Google appears to be throttling AI searches about Trump’s obviously addled mental state

Be careful what you search for.

Google appears to be throttling AI searches related to Donald Trump’s obviously addled mental state. Jay Peters reports (sub. req.) in The Verge:

There’s been a lot of coverage of the mental acuity of both President Trump and President Biden, who are the two oldest presidents ever, so it’s reasonable to expect that people might query Google about it. The company may be worried about accurately presenting information on a sensitive subject, as AI overviews remain susceptible to delivering incorrect information. But in this case, it may also be worried about the president’s response to such information. Google agreed this week to pay $24.5 million to settle a highly questionable lawsuit about Trump’s account being banned from YouTube.

I wanted to see if I could reproduce Peters’ results, and sure enough, Google is still giving Trump special treatment, even though Peters’ embarrassing story was published two days ago. I searched “is trump showing signs of dementia” in Google’s “All” tab, which these days will generally give you an AI-generated summary before getting to the links. Instead, I got nothing but links. The same thing happened when I switched to “AI Mode.”

Next I searched for “is biden showing signs of dementia” at the “All” tab. As with Trump, I got nothing but links — no AI summary at the top. But when I switched to “AI Mode,” I got a detailed AI summary that begins:

In response to concerns and observations about President Joe Biden’s cognitive abilities, a range of opinions and reports have emerged. It’s important to note that diagnosing dementia or cognitive decline requires a formal medical assessment by qualified professionals.

I have mixed feelings about AI searches, though, like many people, I make use of them — always checking the citations to make sure I’m getting accurate information. But as Peters observes, it looks like Google is flinching.

Seems like old times: Facebook is once again inflicting harm on the rest of us, this time using AI

This AI image of “Big sis Billie” was generated by Meta AI at the prompting of a Reuters journalist.

There was a time when it seemed like every other week I was writing about some terrible thing we had learned about Facebook or one of Meta’s other platforms.

There was Facebook’s complicity in the genocide of the Rohingya people in Myanmar. Or the Cambridge Analytica scandal, in which the personal data of millions of people on Facebook was hoovered up so that Steve Bannon could target political ads to them. Or Instagram’s ties to depression among teenage girls.

Now Jeff Horwitz, who uncovered much of Facebook’s nefarious behavior when he was at The Wall Street Journal, is back with an in-depth report for Reuters on how Meta’s use of artificial intelligence led to the accidental death of a mentally disabled man and how it’s being used to seduce children as well.

The man, a 76-year-old stroke survivor named Thongbue Wongbandue, suffered fatal injuries when he fell while running for a train so that he could meet his AI-generated paramour, “Big sis Billie,” who had repeatedly assured Wongbandue in their online encounters that she was real.

As for interactions with children, Horwitz writes:

An internal Meta policy document seen by Reuters as well as interviews with people familiar with its chatbot training show that the company’s policies have treated romantic overtures as a feature of its generative AI products, which are available to users aged 13 and older.

“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.

Yes, the Zuckerborg’s strategy going back many years now is to back off when caught — and then move on to some other antisocial business practice.

Ever since Elon Musk bought Twitter, and especially during his brief, chaotic stint in the Trump White House, Mark Zuckerberg has gotten something of a free pass. Just this week it was announced that Threads, a Meta product launched for users who were fleeing Twitter, now has 400 million active monthly users, making it about two-thirds as large as Twitter/X. (An independent alternative, Bluesky, trails far behind.)

Well, Zuckerberg is still out there wreaking havoc, and AI has given him (and Musk and all the rest) a new toy with which to make money while harming the rest of us.

Remember that ‘drunk Pelosi’ video? AI-powered deepfakes are making disinformation much more toxic

Should we be worried about deepfake videos? Well, sure. But I’ve tended to think that some skepticism is warranted.

My leading example is a 6-year-old video of then-House Speaker Nancy Pelosi in which we are told that she appears to be drunk. I say “we are told” because the video was simply slowed down to 75% of its normal speed, and the right-wing audience for whom it was intended thought this crude alteration was proof that she was loaded. Who needs deepfakes when gullible viewers will be fooled by such crap? People believe what they want to believe.

But the deepfakes are getting better. This morning I want to call your attention to a crucially important story in The New York Times (gift link) showing that deepfakes powered by artificial intelligence are causing toxic damage to the political and cultural environment around the world.

“The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal,” write reporters Steven Lee Myers and Stuart A. Thompson. A few examples:

  • Romania had to redo last year’s presidential election after a court ruled that AI manipulation of one of the candidates may have changed the result.
  • An AI-generated TikTok video falsely showed Donald Trump endorsing a far-right candidate in Poland.
  • Another fake video from last year’s U.S. election tied to Russia falsely showed Kamala Harris saying that Trump refused to “die with dignity.”

As with the Pelosi video, fakes have been polluting the media environment for a long time. So I was struck by something that Isabelle Frances-Wright of the Institute for Strategic Dialogue told the Times: Before AI, “you had to pick between scale or quality — quality coming from human troll farms, essentially, and scale coming from bots that could give you that but were low quality. Now, you can have both, and that’s really scary territory to be in.”

In other words, disinformation is expanding exponentially both in terms of quality and quantity. Given that, it’s unlikely we’ll see any more Russian-generated memes of a satanic Hillary Clinton boxing with Jesus, a particularly inept example of Russian propaganda from 2016. Next time, you’ll see a realistic video of a politician pledging their eternal soul to the Dark Lord.

And since I still have a few gift links to give out before the end of the month, here’s a Times quiz with 10 videos, some of which are AI fakes and some real. Can you tell the difference? I didn’t do very well.

So what can we do to protect our political discourse? I’m sure we can all agree that it’s already in shockingly bad shape, dominated by lies from Trump and his allies that are amplified on Fox News and social media. As I said, people are going to believe what they want to believe. But AI-generated deepfake videos are only going to make things that much worse.

Why the rise of social media has given us a less happy, more polarized and dangerous world

In his 2010 book “The Shallows: What the Internet Is Doing to Our Brains,” Nicholas Carr argued that our immersion in digital media is rewiring the way we think, turning us into distracted skimmers who are losing the capacity for deep concentration.

Yet social media was in its infancy back then. His lament in those days was aimed at a panoply of online distractions such as email that needed to be written, blogs that cried out to be read, streaming videos, downloadable music — in other words, anything but the task at hand. He mentions Facebook, but only in passing. Over the years, I’ve sometimes wondered what he would make of the explosion not just of Facebook but of Instagram, TikTok and their ilk now that they’ve taken over so much of our lives.

Well, my question has been answered. Earlier this year Carr published what is essentially a follow-up to “The Shallows.” Titled “Superbloom: How Technologies of Connection Tear Us Apart,” the book surveys the mediascape of algorithmically driven tech platforms and finds that it is not just driving us to distraction but is creating a less happy, more polarized and more dangerous world.

Read the rest at Poynter Online.

How news outlets may benefit from a ruling that Google’s ad tech violates antitrust law

Photo (cc) 2014 by Anthony Quintano.

To the extent that news organizations have been able to overcome the collapse of advertising caused by the rise of giant tech platforms, it’s through two imperfect methods.

  • For-profits, especially larger newspapers, charge for digital subscriptions and try to maintain a baseline level of print advertising, which has maintained at least some of its value.
  • Nonprofits, many of them digital-only, pursue large gifts and grants while attempting to induce their audience to pay for voluntary memberships, often for goodies like premium newsletters.

At the same time, though, news publishers have continued to look longingly at what might have been. When journalism started moving online 30 years ago, the assumption was that news outlets would continue to control much of that advertising.

Those hopes were cut short. And in large measure, that’s because Google — according to publishers — established a monopoly over digital advertising that news organizations couldn’t crack. Now we’re getting a glimpse of a possible alternative universe, because last week a federal district-court judge agreed, at least in part.

I’ve read several accounts of Judge Leonie Brinkema’s 115-page ruling on an antitrust suit brought by the U.S. Justice Department and eight states (but not Massachusetts). It’s confusing, but I thought this account by David McCabe in The New York Times (gift link) was clearer than some, so I’m relying on it here. I’ll begin with this:

The government argued in its case that Google had a monopoly over three parts of the online advertising market: the tools used by online publishers, like news sites, to host open ad space; the tools advertisers use to buy that ad space; and the software that facilitates those transactions.

In other words, the suit claimed that Google controlled both ends of the market as well as the middleman software that makes it happen. Judge Brinkema agreed with the first two propositions but disagreed with the third, saying, in McCabe’s words, that “the government had failed to prove that it constituted a real and defined market.”

Brinkema put it this way: “In addition to depriving rivals of the ability to compete, this exclusionary conduct substantially harmed Google’s publisher customers, the competitive process, and, ultimately, consumers of information on the open web.”

Lee-Anne Mulholland, a Google vice president, said in response, “We won half of this case and we will appeal the other half.” I’m pretty sure that losing two out of three is two-thirds, but whatever.

Brinkema will now consider the government’s demand that Google’s ad business be broken up. But given that the company has already said it will appeal, it could be a long time — like, on the order of years — before anything comes of this. Same with an earlier ruling in a different courtroom that Google’s search constitutes an illegal monopoly, which is also the subject of hearings this week.

The News/Media Alliance, a lobbying group for the news business, praised Brinkema’s ruling, saying:

The News/Media Alliance has spent years advocating on behalf of news media publishers against Google’s unlawfully anticompetitive actions. We are strongly supportive of a similar lawsuit in Texas that will follow, as well as the Gannett lawsuit currently being litigated on the same issues. Much of this was prompted in the House Report that documented Google’s abuse in the ad tech ecosystem, the scope of which is wide-reaching.

As the organization observes, Google’s ad tech has been the subject of several suits by the newspaper business. One of them names Facebook as a co-defendant, claiming that the Zuckerborg chose to collude with Google rather than compete directly. Gannett’s suit, on the other hand, only names Google.

The News/Media Alliance also continues to push for passage of the Journalism Competition and Preservation Act, a pet project of Democratic Sen. Amy Klobuchar of Minnesota and Republican Sen. John Kennedy of Louisiana.

The proposal, which never gained much traction and is surely all but dead with Donald Trump back in the White House, would force Google and Facebook to pay for the journalism they repurpose. The legislation is problematic for many reasons, not least that Facebook has made it clear it would rather remove news from its various platforms, as it has done in Canada, than pay for it.

Punishing Google for clearly defined legal violations is a much cleaner solution. Let’s hope Judge Brinkema’s ruling survives the appeals process — not to mention whatever idea starts rattling around Trump’s head to reward Google as a favor for CEO Sundar Pichai’s $1 million kiss. Perhaps this can be the start of making advertising great again.

From pariah to sage: Bill Gates puts some distance between himself and Trump’s supine tech bros

Bill Gates. Photo (cc) 2020 by Greg Rubenstein.

I’m posting this because tomorrow is the last day of January and I still have a bunch of gift links to The New York Times that I haven’t used. The clock resets at midnight on Friday. (Let me know if there are more that you’d like.) Both links below should work even if you’re not a Times subscriber.

David Streitfeld has an interesting interview with Bill Gates, the one-time bad boy of tech who now looks pretty good compared to Elon Musk, Mark Zuckerberg et al. Gates has just published a memoir, “Source Code,” which is the subject of this Times review by Jennifer Szalai.

Unlike his tech brethren, Gates, who co-founded Microsoft, has remained left-of-center and devoted to his philanthropic endeavors. He is far from perfect, of course, and Streitfeld observes that his reputation took a hit when he divorced his much-admired then-wife, Melinda French Gates, and when it was revealed that he’d spent time with the pedophile Jeffrey Epstein (Gates has never been tied to Epstein’s monstrous sex crimes).

But Gates seems to have a mature, bemused attitude about what other people think of him. He also doesn’t shy away from admitting when he’s been wrong. He says he’s paid $14 billion in taxes over the years and adds that it would have been $40 billion if we had a fairer system. We also learn that he donated $50 million to a group supporting Kamala Harris’ presidential campaign.

When I listened to Walter Isaacson’s biography of the late Apple co-founder Steve Jobs some years ago, I was struck by Gates’ thoughtful take. He was by far the most insightful of the many people whom Isaacson interviewed. Jobs is someone I admire, but I wonder if he would have found himself up on the platform with Donald Trump last week. Gates, to his credit, was not.

Playing with AI: Can Otter and ChatGPT produce a good-enough account of a podcast interview?

This post will no doubt have limited appeal, but a few readers might find it interesting. I’ve been thinking about how to produce summaries and news stories based on the podcast that Ellen Clegg and I host, “What Works: The Future of Local News.” The best way would be to pay a student to write it up. But is it also a task that could be turned over to AI?

Purely as an experiment, I took our most recent podcast — an interview with Scott Brodbeck, founder and CEO of Local News Now, in the Virginia suburbs of Washington, D.C. — and turned it over to the robots.

I started by downloading the audio and feeding it into Otter, a web-based transcription service that uses AI to guess at what the speaker might actually be saying. Once I had a transcript, I took a part of it — our conversation with Brodbeck, eliminating the introduction and other features — and fed it into ChatGPT twice, once asking it to produce a 600-word summary and then again to produce a 600-word news story. Important caveat: I did very little to clean up the transcript and did not edit what ChatGPT spit out.
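For anyone who’d like to tinker with the same two-step pipeline — transcribe, then summarize — here’s a minimal sketch in Python. To be clear, this is not what I actually ran: I used Otter through its website, so the sketch substitutes OpenAI’s Whisper API for the transcription step, assumes the official openai package with an OPENAI_API_KEY environment variable, and uses a placeholder file name and model choice.

```python
# Minimal sketch of a transcribe-then-summarize workflow.
# Assumptions (not from the post): OpenAI's Whisper API stands in for Otter,
# the official "openai" package is installed, and OPENAI_API_KEY is set.

from openai import OpenAI

client = OpenAI()


def transcribe(audio_path: str) -> str:
    """Send a podcast audio file to the Whisper API and return the raw transcript."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return result.text


def summarize(transcript: str, style: str = "summary", words: int = 600) -> str:
    """Ask a chat model for a roughly fixed-length summary or news story."""
    prompt = (
        f"Below is a lightly cleaned podcast transcript. "
        f"Write a roughly {words}-word {style} of the conversation.\n\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    text = transcribe("podcast_episode.mp3")  # hypothetical file name
    print(summarize(text, style="summary"))
    print(summarize(text, style="news story"))
```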

The results were pretty good. I’m guessing it would have been better if I had been using a paid version of ChatGPT, but that would require, you know, money. I’d say that what AI produced would be publishable if some human-powered editing were employed to fix it up. Anyway, here are the results.

The transcript

Q: Scott, so many of the projects that we have looked at are nonprofit, and that trend seems to be accelerating. In fact, we love nonprofit news, but we also worry that there are limits to how much community journalism can be supported by philanthropy. So your project is for profit. How have you made that work? Dan, do you think for profit? Digital only, local news can thrive in other parts of the country as well.