A copyright expert’s big idea: Force Google and other AI companies to pay news publishers

Photo (cc) 2014 by Anthony Quintano.

Journalism faces yet another tech-driven crisis: AI-powered Google search deprives news publishers of as much as 30% to 40% of their web traffic as users stay on Google rather than following the links. What’s more, users of other AI chatbots, such as ChatGPT and Claude, can get their news without ever clicking through as well. Now an expert on copyright and licensing has come up with a possible solution.

Follow my Bluesky newsfeed for additional news and commentary. And please join my Patreon for just $6 a month. You’ll receive a supporters-only newsletter every Thursday.

Paul Gerbino, president of Creative Licensing International, writes that publishers need to move away from negotiating one-time deals with AI companies to scrape their content for training purposes. Instead, Gerbino says, they should push for a system by which they will be compensated for the use of their content on a recurring basis, whether through per-use fees or subscriptions. As Gerbino puts it:

Training is a singular, non-recurring event that offers only a front-loaded burst of revenue. It possesses no capacity to scale or recur at the level required to effectively sustain the complex and costly operation of the publishing industry….

The singular, non-negotiable strategic imperative for every publisher is to execute a complete and fundamental pivot from the outdated mindset of “sell content once” to the forward-looking, sustainable model of “monetize access forever.”

It’s a fascinating idea, although we should be cautious given that forcing Google and other platforms to pay for the news they repurpose hasn’t gone much of anywhere over the years. When such schemes have been implemented, they’ve been hampered by unintended consequences, such as platform threats to remove all links to news sources. It’s not clear why Google would suddenly reverse course now that it’s using AI.

Gerbino acknowledges this, arguing that publishers should negotiate with the AI companies collectively, observing: “Individual publishers operating alone possess negligible leverage against the behemoths of the AI industry. Collective frameworks represent the only viable path to successful negotiation.” But that may require passage of a law so that the publishers don’t run afoul of antitrust law.

Gerbino also says that publishers need to develop paywalls that are impervious to AI. Not all of them are.

The possibility that a substantial part of the news audience will never move beyond AI-generated results — no matter how wrong they may be — represents a significant threat to publishers, who are already dealing with the challenge of finding a path to sustainability in a post-advertising world.

Gerbino has laid out some interesting proposals on how to extract revenues from AI companies, which may represent the biggest threat to news since the internet flickered into view more than 30 years ago. It remains to be seen, though, whether his ideas will form the basis for action — or if, instead, they will simply fade into the ether.

A new lawsuit takes aim at Google’s ad monopoly just as the AI train is leaving the station

Photo (cc) 2014 by Anthony Quintano.

There’s an old saying — no doubt you’ve heard it — that justice delayed is justice denied. And so it is with the news business’ longstanding lament that Google engages in monopolistic practices aimed at driving down the value of digital advertising. Gilad Edelman, writing for The Atlantic, describes it this way:

If the story of journalism’s 21st-century decline were purely a tale of technological disruption — of print dinosaurs failing to adapt to the internet — that would be painful enough for those of us who believe in the importance of a robust free press. The truth hurts even more. Big Tech platforms didn’t just out-compete media organizations for the bulk of the advertising-revenue pie. They also cheated them out of much of what was left over, and got away with it.

The Atlantic is among a number of media organizations that filed suit against Google this month. I’m kind of stunned that they are only suing now, because the issue they’ve identified goes back many years. As Charlotte Tobitt reports for the Press Gazette, the federal lawsuit was brought earlier this month by The Atlantic as well as Penske Media Corp., which owns Rolling Stone and She Media; Condé Nast, which is part of Advance Publications; Vox Media, owner of The Verge; and the newspaper chain McClatchy, whose papers include the Miami Herald, The Kansas City Star and The Sacramento Bee.

Continue reading “A new lawsuit takes aim at Google’s ad monopoly just as the AI train is leaving the station”

My Northeastern ethics students offer some ideas on practicing journalism in the AI era

Photo by Carlos López via Pixabay.

The Society of Professional Journalists’ Code of Ethics encompasses four broad principles:

    • Seek Truth and Report It
    • Minimize Harm
    • Act Independently
    • Be Accountable and Transparent

Each principle is accompanied by multiple bullet points, which in turn link to background information. But those are the starting points, and I think they provide a good rough guide for how to practice ethical journalism.

Whenever I teach one of our ethics classes, I ask my students to come up with a fifth principle as well as some explanatory material. This semester, I’m teaching our graduate ethics seminar. It’s a small class — five grad students and one undergrad. Last week I divided them into three teams of two and put them to work. Here’s what they came up with. (Longtime readers of Media Nation will recognize this exercise.) I’ve done a little editing, mainly for parallel construction.

Practice Digital Diligence

  • Utilize AI for structural purposes such as transcribing interviews, searching for sources and entering data.
  • Disclose the use of AI software when publishing artificial creations.
  • Give credit by providing hyperlinks to other journalistic sources.
  • Gain verification status on social platforms for credibility purposes.
  • Do not engage with negative comments on social media posts.
  • Engage with subscribers who might use social media to ask questions about a story.
  • Apply AP style to social media posts.
  • Give credit to any artists whose work you might borrow. Respect copyright law.

Use Modern Resources Responsibly

  • Use social media and other digital tools, such as comment sections, to crowdsource information, connect with others and distribute news in a more accessible way.
  • Do not use these tools to engage in ragebait or to get tangled in messy and unproductive discourse online.
  • Acceptable uses of AI include gathering information, reformatting your reporting, transcribing interviews and similar non-public-facing tasks.
  • Use AI to guide your reporting rather than to replace it.

Be Compassionate

  • Treat sources and communities with empathy and care.
  • Avoid misleading sources or providing false hope — for instance, don’t promise someone who is suffering that you’ll be able to give them assistance.
  • Do not exploit a source’s lack of media training. Provide a detailed explanation of your reporting methods when warranted.
  • Avoid using jargon both in interacting with sources and in producing a story.
  • Be a human first. If that clashes with your role as a journalist, the journalism should come second.

***

In addition to their work on extending the Code of Ethics, I asked them on the first day of class to name one significant ethical issue that they think faces journalism. What follows is my attempt to summarize a longer conversation that we had in class.

► Stand up for our independence as journalists

► Explore and define the role of AI and truth in journalism

► Make sure we include a range of perspectives

► Push back against fake news, ragebait, etc.

► Avoid passive voice that evades responsibility

► Move beyond our preconceptions in pursuit of the truth

I hope you’ll agree that this is good, thought-provoking stuff. I can’t wait to see how the rest of the semester will go.


How Claude AI helped improve the look and legibility of Media Nation

Public domain illustration via Pixabay.

For quite a few years I used WordPress’ indent feature for blockquotes rather than the actual blockquote command. The reason was that blockquotes in the theme that I use (Twenty Sixteen) were ugly, with type larger than the regular text (the opposite of what you would see in a book or a printed article) and in italics.

But then I noticed that indents didn’t show up at all in posts that went out by email, leading to confusion among my subscribers — that is, my most engaged readers. I decided to find out if I could modify the blockquote feature. WordPress allows you to add custom CSS to your theme, but I know very little about how to use CSS. I could have asked in a WordPress forum, but I tried to see if I could get an answer from AI instead.


Northeastern has given us all access to the enterprise version of Claude, Anthropic’s AI platform. It’s a mixed blessing, although I’ve found that it’s very good as a search engine — often better than Google, which is now also glopped up by AI. I simply make sure I ask Claude to add the underlying links to its answer so I don’t get taken in by hallucinations. But Claude is also known for being quite good at coding. What I needed was low-level, so I thought maybe it could help.

Indeed it could. I began by asking, “In the Twenty Sixteen WordPress theme, how can I change the CSS so that blockquotes do not appear in italics?” Claude provided me with several options; I chose the simplest one, which was a short bit of custom CSS that I could add to my theme:

blockquote {
    font-style: normal;
}

It worked. A subsequent query enabled me to make the blockquote type smaller. Then, just last week, I noticed that any formatting in the blockquote was being stripped out. For instance, a recent memo from Boston Globe Media CEO Linda Henry contained boldface and italicized text, which did not appear when I reproduced her message. The formatting code was there; it just wasn’t visible. Claude produced CSS commands that overrode the theme. You can see the results here, with bold and italic type just as Henry had it in her message.
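Based on that description, the later additions presumably amounted to a few more rules along these lines. This is a sketch, not the CSS Claude actually produced; the selectors and values here are my assumptions about what such overrides typically look like:

```css
/* Sketch only: hypothetical overrides for blockquote styling.
   Exact selectors and values in Twenty Sixteen may differ. */
blockquote {
    font-style: normal;   /* no italics */
    font-size: 0.9em;     /* slightly smaller than body text */
}

/* Let inline bold and italic formatting inside blockquotes
   show through instead of being flattened by the theme */
blockquote strong,
blockquote b {
    font-weight: bold;
}

blockquote em,
blockquote i {
    font-style: italic;
}
```

In WordPress, snippets like this go in the Additional CSS panel of the theme customizer, where they override the theme’s own stylesheet without requiring you to edit the theme files.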

I make some light use of AI in my other work. When I need to transcribe an audio interview, I use Otter, which is powered by AI. I’ve experimented with using AI to compile summaries from transcripts and even (just for my own use) an actual news story. Very occasionally I’ve used AI to produce illustrations for this blog, which seems to draw more objections than other AI applications, probably because it’s right in people’s faces.

Just the other day, someone complained to me on social media that she was not going to visit a local news outlet I had mentioned because she had encountered an AI-produced illustration there. When I asked why, she replied that it was because AI relies on plagiarism. Oh, I get it. Sometime this year I’m hoping to receive $3,000 as my share of a class-action lawsuit against Anthropic because one of my books, “The Return of the Moguls,” was used to train Claude.

And let’s not overlook the massive amounts of energy that are required to power AI. On a recent New York Times podcast, Ezra Klein and his guests observed that AI is deeply unpopular with the public (sub. req.), even though they’re using it, because all they really know is that it’s going to take away jobs and is driving up electricity costs.

But AI isn’t going anywhere, and if we’re going to use it (and we are, even if we try to avoid it), we need to find ways to do so ethically and responsibly.

How Margaret Sullivan’s erroneous slip of the tongue became (briefly) an AI-generated ‘fact’

Paul Krugman and Margaret Sullivan. Photo via Paul Krugman’s newsletter.

Media critic Margaret Sullivan made an error recently. No big deal — we all do it. But her account of what happened next is worth thinking about.

First, the error. Sullivan writes in her newsletter, American Crisis, that she recently appeared on economist Paul Krugman’s podcast and said that Los Angeles Times owner Patrick Soon-Shiong was among the billionaires who joined Donald Trump at his second inauguration earlier this year, along with the likes of Mark Zuckerberg, Jeff Bezos and Elon Musk. “I was wrong about that,” she notes, although she adds that Soon-Shiong “has been friendly to Trump in other ways.” Then she writes:

But — how’s this for a cautionary tale about the dubious accuracy of artificial intelligence? — a Google “AI overview,” in response to a search, almost immediately took my error and spread it around: “Yes, Dr. Patrick Soon-Shiong attended Donald Trump’s inauguration in 2025. He was seen there alongside other prominent figures like Mark Zuckerberg and Jeff Bezos.” It cited Krugman’s and my conversation. Again, I was wrong and I regret the error.

It does appear that the error was corrected fairly quickly. I asked Google this morning and got this from AI: “Patrick Soon-Shiong did not attend Donald Trump’s second inauguration. Earlier reports and AI overviews that claimed he did were based on an error by a journalist who later issued a correction.” It links to Sullivan’s newsletter.

Unlike Google, Claude makes no mention of Sullivan’s original mistake, concluding, accurately: “While the search results don’t show Patrick Soon-Shiong listed among the most prominent billionaires seated in the Capitol Rotunda (such as Musk, Bezos, Zuckerberg, and others who received extensive coverage), the evidence suggests he was engaged with the inauguration events and has maintained a relationship with Trump’s administration.”

And here’s the verdict from ChatGPT: “I found no credible public evidence that Patrick Soon-Shiong attended Donald Trump’s second inauguration.”

You might cite my findings as evidence that AI corrects mistakes quickly, and in this case it did. (By the way, the error has not yet been corrected at Krugman’s site.) But a less careful journalist than Sullivan might have let the original error hang out there, and it would soon have become part of the established record of who did and didn’t pay homage to Trump on that particular occasion.

In other words: always follow your queries back to the source.