A copyright expert’s big idea: Force Google and other AI companies to pay news publishers

Photo (cc) 2014 by Anthony Quintano.

Journalism faces yet another tech-driven crisis: AI-powered Google search deprives news publishers of as much as 30% to 40% of their web traffic, as users stay on Google rather than clicking through to the underlying links. What’s more, users of other AI chatbots, such as ChatGPT and Claude, can get their news without clicking through as well. Now an expert on copyright and licensing has come up with a possible solution.

Follow my Bluesky newsfeed for additional news and commentary. And please join my Patreon for just $6 a month. You’ll receive a supporters-only newsletter every Thursday.

Paul Gerbino, president of Creative Licensing International, writes that publishers need to move away from negotiating one-time deals with AI companies to scrape their content for training purposes. Instead, Gerbino says, they should push for a system by which they will be compensated for the use of their content on a recurring basis, whether through per-use fees or subscriptions. As Gerbino puts it:

Training is a singular, non-recurring event that offers only a front-loaded burst of revenue. It possesses no capacity to scale or recur at the level required to effectively sustain the complex and costly operation of the publishing industry….

The singular, non-negotiable strategic imperative for every publisher is to execute a complete and fundamental pivot from the outdated mindset of “sell content once” to the forward-looking, sustainable model of “monetize access forever.”

It’s a fascinating idea, although we should be cautious given that forcing Google and other platforms to pay for the news they repurpose hasn’t gone much of anywhere over the years. When such schemes have been implemented, they’ve been hampered by unintended consequences, such as platforms threatening to remove all links to news sources. It’s not clear why Google would suddenly reverse course just because it’s now using AI.

Gerbino acknowledges this, arguing that publishers should negotiate with the AI companies collectively, observing: “Individual publishers operating alone possess negligible leverage against the behemoths of the AI industry. Collective frameworks represent the only viable path to successful negotiation.” But that may require legislation so that the publishers don’t run afoul of antitrust law.

Gerbino also says that publishers need to develop paywalls that are impervious to AI. Not all of them are.

The possibility that a substantial part of the news audience will never move beyond AI-generated results — no matter how wrong they may be — represents a significant threat to publishers, who are already dealing with the challenge of finding a path to sustainability in a post-advertising world.

Gerbino has laid out some interesting proposals on how to extract revenues from AI companies, which may represent the biggest threat to news since the internet flickered into view more than 30 years ago. It remains to be seen, though, whether his ideas will form the basis for action — or if, instead, they will simply fade into the ether.

How Claude AI helped improve the look and legibility of Media Nation

Public domain illustration via Pixabay.

For quite a few years I used WordPress’ indent feature for blockquotes rather than the actual blockquote command. The reason was that blockquotes in the theme that I use (Twenty Sixteen) were ugly, with type larger than the regular text (the opposite of what you would see in a book or a printed article) and in italics.

But then I noticed that indents didn’t show up at all in posts that went out by email, leading to confusion among my subscribers — that is, my most engaged readers. I decided to find out if I could modify the blockquote feature. WordPress allows you to add custom CSS to your theme, but I know very little about how to use CSS. I could have asked in a WordPress forum, but I wanted to see if AI could give me an answer instead.

Sign up for free email delivery of Media Nation. You can also become a supporter for just $6 a month and receive a weekly newsletter with exclusive content.

Northeastern has given us all access to the enterprise version of Claude, Anthropic’s AI platform. It’s a mixed blessing, although I’ve found that it’s very good as a search engine — often better than Google, which is now also glopped up by AI. I simply make sure I ask Claude to add the underlying links to its answer so I don’t get taken in by hallucinations. But Claude is also known for being quite good at coding. What I needed was low-level, so I thought maybe it could help.

Indeed it could. I began by asking, “In the Twenty Sixteen WordPress theme, how can I change the CSS so that blockquotes do not appear in italics?” Claude provided me with several options; I chose the simplest one, which was a short bit of custom CSS that I could add to my theme:

/* Override Twenty Sixteen's default italic blockquote styling */
blockquote {
     font-style: normal;
}
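
Custom CSS like this goes into the Additional CSS panel under Appearance → Customize in the WordPress dashboard; because it loads after the theme’s own stylesheet, it takes precedence over the default styling.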

It worked. A subsequent query enabled me to make the blockquote type smaller. Then, just last week, I noticed that any formatting in the blockquote was stripped out. For instance, a recent memo from Boston Globe Media CEO Linda Henry contained boldface and italicized text, which did not appear when I reproduced her message. The formatting code was there; it just wasn’t visible. Claude produced CSS rules that overrode the theme’s styling. You can see the results here, with bold and italic type just as Henry had it in her message.
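
For anyone who wants to attempt something similar, here is a sketch of what such custom CSS can look like. The size value and selectors are illustrative rather than the exact rules Claude generated, and other themes may require different ones:

blockquote {
    font-size: 0.9375em; /* illustrative: slightly smaller than body text */
}

/* Keep bold and italic markup visible inside blockquotes */
blockquote strong,
blockquote b {
    font-weight: bold;
}

blockquote em,
blockquote i {
    font-style: italic;
}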

I make some light use of AI in my other work. When I need to transcribe an audio interview, I use Otter, which is powered by AI. I’ve experimented with using AI to compile summaries from transcripts and even (just for my own use) to draft an actual news story. Very occasionally I’ve used AI to produce illustrations for this blog, a practice that seems to draw more objections than other uses of AI, probably because it’s right in people’s faces.

Just the other day, someone complained to me on social media that she was not going to visit a local news outlet I had mentioned because she had encountered an AI-produced illustration there. When I asked why, she replied that it was because AI relies on plagiarism. Oh, I get it. Sometime this year I’m hoping to receive $3,000 as my share of a class-action lawsuit against Anthropic because one of my books, “The Return of the Moguls,” was used to train Claude.

And let’s not overlook the massive amounts of energy that are required to power AI. On a recent New York Times podcast, Ezra Klein and his guests observed that AI is deeply unpopular with the public (sub. req.), even though they’re using it, because all they really know is that it’s going to take away jobs and is driving up electricity costs.

But AI isn’t going anywhere, and if we’re going to use it (and we are, even if we try to avoid it), we need to find ways to do so ethically and responsibly.

How Margaret Sullivan’s erroneous slip of the tongue became (briefly) an AI-generated ‘fact’

Paul Krugman and Margaret Sullivan. Photo via Paul Krugman’s newsletter.

Media critic Margaret Sullivan made an error recently. No big deal — we all do it. But her account of what happened next is worth thinking about.

First, the error. Sullivan writes in her newsletter, American Crisis, that she recently appeared on economist Paul Krugman’s podcast and said that Los Angeles Times owner Patrick Soon-Shiong was among the billionaires who joined Donald Trump at his second inauguration earlier this year, along with the likes of Mark Zuckerberg, Jeff Bezos and Elon Musk. “I was wrong about that,” she notes, although she adds that Soon-Shiong “has been friendly to Trump in other ways.” Then she writes:

But — how’s this for a cautionary tale about the dubious accuracy of artificial intelligence? — a Google “AI overview,” in response to a search, almost immediately took my error and spread it around: “Yes, Dr. Patrick Soon-Shiong attended Donald Trump’s inauguration in 2025. He was seen there alongside other prominent figures like Mark Zuckerberg and Jeff Bezos.” It cited Krugman’s and my conversation. Again, I was wrong and I regret the error.

It does appear that the error was corrected fairly quickly. I asked Google this morning and got this from AI: “Patrick Soon-Shiong did not attend Donald Trump’s second inauguration. Earlier reports and AI overviews that claimed he did were based on an error by a journalist who later issued a correction.” It links to Sullivan’s newsletter.

Unlike Google, Claude makes no mention of Sullivan’s original mistake, concluding, accurately: “While the search results don’t show Patrick Soon-Shiong listed among the most prominent billionaires seated in the Capitol Rotunda (such as Musk, Bezos, Zuckerberg, and others who received extensive coverage), the evidence suggests he was engaged with the inauguration events and has maintained a relationship with Trump’s administration.”

And here’s the verdict from ChatGPT: “I found no credible public evidence that Patrick Soon-Shiong attended Donald Trump’s second inauguration.”

You might cite my findings as evidence that AI corrects mistakes quickly, and in this case it did. (By the way, the error has not yet been corrected at Krugman’s site.) But a less careful journalist than Sullivan might have let the original error hang out there, and it would soon have become part of the established record of who did and didn’t pay homage to Trump on that particular occasion.

In other words: always follow your queries back to the source.