A copyright expert’s big idea: Force Google and other AI companies to pay news publishers

Photo (cc) 2014 by Anthony Quintano.

Journalism faces yet another tech-driven crisis: AI-powered Google search deprives news publishers of as much as 30% to 40% of their web traffic as users stay on Google rather than following the links. What’s more, users of other AI chatbots, such as ChatGPT and Claude, can get their news without clicking through as well. Now an expert on copyright and licensing has come up with a possible solution.

Paul Gerbino, president of Creative Licensing International, writes that publishers need to move away from negotiating one-time deals with AI companies to scrape their content for training purposes. Instead, Gerbino says, they should push for a system by which they will be compensated for the use of their content on a recurring basis, whether through per-use fees or subscriptions. As Gerbino puts it:

Training is a singular, non-recurring event that offers only a front-loaded burst of revenue. It possesses no capacity to scale or recur at the level required to effectively sustain the complex and costly operation of the publishing industry….

The singular, non-negotiable strategic imperative for every publisher is to execute a complete and fundamental pivot from the outdated mindset of “sell content once” to the forward-looking, sustainable model of “monetize access forever.”

It’s a fascinating idea, although we should be cautious given that forcing Google and other platforms to pay for the news they repurpose hasn’t gone much of anywhere over the years. When such schemes have been implemented, they’ve been hampered by unexpected consequences, such as threats to remove all links to news sources. It’s not clear why Google would suddenly flip because it’s now using AI.

Gerbino acknowledges this, arguing that publishers should negotiate with the AI companies collectively, observing: “Individual publishers operating alone possess negligible leverage against the behemoths of the AI industry. Collective frameworks represent the only viable path to successful negotiation.” But that may require passage of a law so that the publishers don’t run afoul of antitrust restrictions.

Gerbino also says that publishers need to develop paywalls that are impervious to AI. Not all of them are.

The possibility that a substantial part of the news audience will never move beyond AI-generated results — no matter how wrong they may be — represents a significant threat to publishers, who are already dealing with the challenge of finding a path to sustainability in a post-advertising world.

Gerbino has laid out some interesting proposals on how to extract revenues from AI companies, which may represent the biggest threat to news since the internet flickered into view more than 30 years ago. It remains to be seen, though, whether his ideas will form the basis for action — or if, instead, they will simply fade into the ether.

A New York Times gift-link bacchanalia, from the hazards of AI to an aging Trump to chatty cats

OpenAI chief executive Sam Altman. Photo (cc) 2019 by TechCrunch.

Here we go again. It’s the last day of the month, and I haven’t shared all of my gift links to The New York Times. Use ’em or lose ’em. These should continue to work for some time to come; what matters is when I post them, not when you access them. So here we go.

How Margaret Sullivan’s erroneous slip of the tongue became (briefly) an AI-generated ‘fact’

Paul Krugman and Margaret Sullivan. Photo via Paul Krugman’s newsletter.

Media critic Margaret Sullivan made an error recently. No big deal — we all do it. But her account of what happened next is worth thinking about.

First, the error. Sullivan writes in her newsletter, American Crisis, that she recently appeared on economist Paul Krugman’s podcast and said that Los Angeles Times owner Patrick Soon-Shiong was among the billionaires who joined Donald Trump at his second inauguration earlier this year, along with the likes of Mark Zuckerberg, Jeff Bezos and Elon Musk. “I was wrong about that,” she notes, although she adds that Soon-Shiong “has been friendly to Trump in other ways.” Then she writes:

But — how’s this for a cautionary tale about the dubious accuracy of artificial intelligence? — a Google “AI overview,” in response to a search, almost immediately took my error and spread it around: “Yes, Dr. Patrick Soon-Shiong attended Donald Trump’s inauguration in 2025. He was seen there alongside other prominent figures like Mark Zuckerberg and Jeff Bezos.” It cited Krugman’s and my conversation. Again, I was wrong and I regret the error.

It does appear that the error was corrected fairly quickly. I asked Google this morning and got this from AI: “Patrick Soon-Shiong did not attend Donald Trump’s second inauguration. Earlier reports and AI overviews that claimed he did were based on an error by a journalist who later issued a correction.” It links to Sullivan’s newsletter.

Unlike Google, Claude makes no mention of Sullivan’s original mistake, concluding, accurately: “While the search results don’t show Patrick Soon-Shiong listed among the most prominent billionaires seated in the Capitol Rotunda (such as Musk, Bezos, Zuckerberg, and others who received extensive coverage), the evidence suggests he was engaged with the inauguration events and has maintained a relationship with Trump’s administration.”

And here’s the verdict from ChatGPT: “I found no credible public evidence that Patrick Soon-Shiong attended Donald Trump’s second inauguration.”

You might cite my findings as evidence that AI corrects mistakes quickly, and in this case it did. (By the way, the error has not yet been corrected at Krugman’s site.) But a less careful journalist than Sullivan might have let the original error hang out there, and it would soon have become part of the established record of who did and didn’t pay homage to Trump on that particular occasion.

In other words: always follow your queries back to the source.

The triumph of hope over experience: The latest on how AI is not solving the local news crisis

Illustration produced by AI using DALL-E.

This past weekend I listened to a bracingly entertaining conversation that the public radio program “On the Media” conducted with tech journalist Ed Zitron. Co-host Brooke Gladstone had billed it as a chance for Zitron to make sense out of DeepSeek, the new Chinese artificial-intelligence software that purports to do what ChatGPT and its ilk can do for a fraction of the cost — and, presumably, while using a fraction of the electric power burned by American AI companies.

But it was so much more than that. Maybe you’re familiar with Zitron. I wasn’t. As I learned, he is a caustic skeptic of American AI in general. In fact, he doesn’t even regard the large language models (LLMs) that we’ve come to think of as AI as the real thing, saying they are nothing but an error-prone scam that is attracting vast sums of venture capital but will never make any money. Here’s a taste:

The real damage that DeepSeek’s done is they’ve proven that America doesn’t really want to innovate. America doesn’t compete. There is no AI arms race. There is no real killer app to any of this. ChatGPT has 200 million weekly users. People say that’s a sign of something. Yes, that’s what happens when literally every news outlet, all the time, for two years, has been saying that ChatGPT is the biggest thing without sitting down and saying, “What does this bloody thing do and why does it matter?” “Oh, great. It helps me cheat at my college papers.”

And this:

When you actually look at the products, like OpenAI’s operator, they suck. They’re crap. They don’t work. Even now the media is still like, “Well, theoretically this could work.” They can’t. Large language models are not built for distinct tasks. They don’t do things. They are language models. If you are going to make an agent work, you have to find rules for effectively the real world, which AI has proven itself. I mean real AI, not generative AI that isn’t even autonomous is quite difficult.

As you can tell, Zitron has a Brit’s gift for vitriol, which made the program all the more compelling. Now, I am absolutely no expert in AI, but I was intrigued by Zitron’s assertion that LLMs are not AI, and that real AI is already working well in things like autonomous cars. (Really?) But given that we just can’t keep AI — excuse me, LLMs — from infesting journalism, I regarded Gladstone’s interview with Zitron as a reason to be hopeful. Maybe the robots aren’t going to take over after all.

Playing with AI: Can Otter and ChatGPT produce a good-enough account of a podcast interview?

This post will no doubt have limited appeal, but a few readers might find it interesting. I’ve been thinking about how to produce summaries and news stories based on the podcast that Ellen Clegg and I host, “What Works: The Future of Local News.” The best way would be to pay a student to write it up. But is it also a task that could be turned over to AI?

Purely as an experiment, I took our most recent podcast — an interview with Scott Brodbeck, founder and CEO of Local News Now, in the Virginia suburbs of Washington, D.C. — and turned it over to the robots.

I started by downloading the audio and feeding it into Otter, a web-based transcription service that uses AI to guess at what the speaker might actually be saying. Once I had a transcript, I took a part of it — our conversation with Brodbeck, eliminating the introduction and other features — and fed it into ChatGPT twice, once asking it to produce a 600-word summary and then again to produce a 600-word news story. Important caveat: I did very little to clean up the transcript and did not edit what ChatGPT spit out.
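
For anyone who would rather script a similar experiment than work through the web interfaces as I did, here is a minimal sketch of the pipeline. Note the assumptions: it substitutes OpenAI’s Whisper transcription endpoint for Otter, and the file name, model choices and prompt wording are placeholders, not what I actually used.

```python
# Sketch of the audio -> transcript -> summary pipeline.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY
# environment variable. Whisper stands in here for Otter, which
# I used through its web interface.
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the podcast audio ("episode.mp3" is a placeholder).
with open("episode.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )

# Step 2: ask the model for a 600-word summary of the conversation.
summary = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a 600-word summary of this podcast interview:\n\n"
                   + transcript.text,
    }],
)
print(summary.choices[0].message.content)

# Step 3: repeat with a different prompt to get a 600-word news story.
```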

The results were pretty good. I’m guessing they would have been better if I had been using a paid version of ChatGPT, but that would require, you know, money. I’d say that what AI produced would be publishable if some human-powered editing were employed to fix it up. Anyway, here are the results.

The transcript

Q: Scott, so many of the projects that we have looked at are nonprofit, and that trend seems to be accelerating. In fact, we love nonprofit news, but we also worry that there are limits to how much community journalism can be supported by philanthropy. So your project is for profit. How have you made that work? Dan, do you think for profit? Digital only, local news can thrive in other parts of the country as well.

A conversation with ChatGPT, given a voice by Siri, shows AI’s promise and limitations

Mike Blinder, publisher of the trade journal Editor & Publisher and the host of its vodcast, “E&P Reports,” tried something unusual recently. He hooked up ChatGPT to Siri and conducted an interview about issues related to artificial intelligence and journalism. The result is like a smarter version of Eliza, a 1960s-vintage AI program that could carry on what seemed like a realistic conversation.

Blinder has become something of an AI evangelist, using it to automate some of E&P’s editorial processes and asking ChatGPT to write bios of guests on “E&P Reports.” But AI has a long way to go in terms of carrying on an intelligent conversation that’s also spontaneous.

For instance, as you’ll hear, Blinder’s approach in interviewing ChatGPT is to lead the witness with long, information-packed questions that the chatbot can then use to scour the internet and come back with a plausible-sounding answer.

I also detected at least one error. In response to a question about the possibility that AI-powered search engines will harm news organizations by removing any incentive to click through, ChatGPT says:

When AI tools like Perplexity or Chat GPT provide detailed summaries of content without directing traffic back to the original source, it creates a real challenge for news publishers who rely on page views for ad revenue and subscriptions. I completely understand the concern.

The problem is that Perplexity actually does cite its sources, which differentiates it from ChatGPT and other competitors. It’s why I suggest to my students that Perplexity is a useful tool as long as they click through, and it’s why I use it as well.

Nevertheless, Blinder’s close encounter of the robotic kind is fun and interesting. You can watch it on YouTube or subscribe on any podcast app. Blinder’s been a guest on our “What Works” podcast about local news twice, and Ellen Clegg and I were on “E&P Reports” earlier this year to talk about our book, “What Works in Community News.”

Publishers sound alarm over Google’s AI search plans. But let’s dial it back a bit.

What does AI say about RFK Jr.’s anti-vax activism? Photo (cc) 2023 by Gage Skidmore.

The latest freakout inspired by artificial intelligence has hit the news business. On Tuesday, Google announced that it will merge its AI tool, Gemini, with its search engine. What that means for publishers is that someone seeking information about, say, Robert F. Kennedy Jr.’s anti-vax crusade will receive an answer right in Google rather than being prompted to click to a news source. Let’s try it:

First, I entered <Robert F. Kennedy Jr. anti-vaccine activism> in Google Search. The top five results were from The Associated Press (two), Scientific American, the Annenberg Public Policy Center and the PBS “NewsHour.” In other words, good, respectable sources of independent journalism.

Next, I entered the same search at Gemini and got this: “I’m still learning how to answer this question. In the meantime, try Google Search.” OK! But then I tried the same thing at the free version of ChatGPT, which provided the following answer:

Robert F. Kennedy Jr., son of former U.S. Senator and Attorney General Robert F. Kennedy and nephew of President John F. Kennedy, has been known for his anti-vaccine activism. He has been a prominent figure in the anti-vaccine movement, particularly regarding his views on the safety of vaccines and their alleged connection to autism.

Kennedy founded the Children’s Health Defense organization, which has been criticized for spreading misinformation about vaccines and promoting unfounded claims about their dangers. He has spoken at numerous events and rallies, advocating against vaccination and promoting alternative views on the subject.

It’s important to note that the overwhelming scientific consensus supports the safety and effectiveness of vaccines, and the alleged link between vaccines and autism has been thoroughly debunked by multiple scientific studies. Kennedy’s activism has been met with criticism from public health officials and medical professionals, who argue that his views are not supported by scientific evidence and pose a danger to public health.

I think we can agree that’s pretty good — and it includes exactly zero links to any of its sources, depriving some excellent news organizations of web traffic. I should note that, according to CNN media reporter Oliver Darcy, Google is promising to include links in Gemini, so it may end up looking something like Perplexity, an AI-powered search engine that provides plain-English answers and reliable links.

When I tried our RFK Jr. search at Perplexity, I got a very good answer — as good as ChatGPT, if not better — along with links to Wikipedia, The AP, the “NewsHour” and NBC News. But would I click on them? At the moment, I would, because of AI’s well-known proclivity for providing fake answers. At some point, though, that problem will be solved.

“This will be catastrophic to our traffic, as marketed by Google to further satisfy user queries, leaving even less incentive to click through so that we can monetize our content,” Danielle Coffey, chief executive of the News/Media Alliance, told Darcy. The alliance represents some 2,000 news publishers.

I also took a look at the internal metrics of the only news site I have access to: this one. According to Google Analytics, over the past month Media Nation received 40% of its traffic from “organic search” — that’s traffic from search engines, nearly all Google, that I didn’t boost by paying for ads on Google. And yes, that’s a lot. Next up was direct traffic (25.6%), organic social (21.2%) and referrals (12.1%), which are links from other websites.

Now, I happen to think that some of the lamentations we’re hearing from publishers are overblown. It’s fine to complain that Google is taking steps that will result in fewer clicks on your website. But how much money does that really bring in? These days, you’re likely to hit a paywall when you try to click through from a search. Programmatic ads on news sites are terrible and bring in very little money.

In the end, there is no substitute for building a relationship with your audience. For-profit publishers need to persuade their readers to become digital subscribers and local businesses to advertise. Nonprofits must convince their audience to become voluntary supporters, and they must also raise money from underwriters, foundations, events and whatever else they can think of.

To use Media Nation as an example again: I currently have more than 2,300 subscribers who receive new posts by email. I consider those to be my most engaged readers. I don’t do much to monetize this site, although I have a modest paid supporter program, which, needless to say, you are invited to join. The future of news, though, is being built right now by serving our communities — not through Google search.

Students and AI weigh in on updating the SPJ’s Code of Ethics

Say hello to the new ethics committee. Photo (cc) 2012 by D J Shin.

The Society of Professional Journalists’ Code of Ethics identifies four broad principles: Seek Truth and Report It; Minimize Harm; Act Independently; and Be Accountable. Each of them is fleshed out in some detail. You will note that the slippery concept of objectivity is not mentioned. These days most journalists believe, as I do, that truth-seeking, independence and fairness are the standards to which we should aspire, and that holding up objectivity as a goal only leads to endless debates over what that actually means.

On Wednesday evening, I asked students in my graduate ethics seminar to come up with a fifth principle. There are only eight students in the class, so I had them work in teams of two. I also asked the gremlins of artificial intelligence to weigh in. But first, here’s what my students came up with; I’ve edited them very lightly.

Listen to Others 

  • Be open to feedback
  • Promote cooperative relationships between journalists and the public
  • Be receptive to events in the world
  • Be open to a community that is not your own

Admit Limitations, Accept Your Own Bias

  • Don’t hide your own humanity
  • Accept that you can’t be objective
  • Accept you may not be able to follow all ethics codes all the time
  • Welcome criticism

Respect Relationships with Sources 

  • Don’t get so close with sources that personal feelings might get in the way of reporting
  • Protect them if they will be harmed for their sharing of the truth
  • Provide full disclosure to the sources of what the article is about and where it is being published

Create Spaces for Untold Stories 

One of the main priorities of a journalist should be to be prepared to fight for those who struggle. Journalists should:

  • Be righteous
  • Be brave
  • Create a sense of justice
  • Not be afraid to correct mistakes within their stories
  • Report on marginalized communities and use their personal resources to continue sharing information

I think these are terrific. The SPJ Code provides a pretty wide umbrella, so it’s not easy to come up with a fifth principle that isn’t already covered. But these proposals represent some fresh thinking. The “full disclosure” idea is important because when we’re interviewing people who aren’t accustomed to dealing with reporters, they may not understand the full implications of having their name, their quotes and in some cases their photos put before the public.

And I love “Be brave.” That’s obviously true for, say, war correspondents. But journalists also have to deal with a rising wave of harassment and threats, both online and in real life. If they find that they’re unable to stand up to that, then they are not going to be able to serve their community. (They also shouldn’t be expected to put up with harassment, and newsrooms need to develop anti-harassment training and policies.)

Now … what about AI? While my students were working, I took the lazy way out and asked a couple of AI programs to come up with a fifth principle. Here’s the question I asked: “The Society of Professional Journalists’ Code of Ethics has four principles: Seek Truth and Report It; Minimize Harm; Act Independently; Be Accountable. What would be a worthwhile fifth principle?” Once I finished playing, I deleted the answers and then asked them again so my students could watch everything unfold in real time. And here’s the answer from ChatGPT:

The addition of a fifth principle to the Society of Professional Journalists’ Code of Ethics could focus on promoting transparency. Here’s a suggestion:

“Foster Transparency: Journalists should strive to maintain transparency in their reporting processes, disclosing sources, conflicts of interest, and methodologies to enhance the credibility and trustworthiness of their work. By being transparent, journalists contribute to a more informed public and uphold the integrity of journalism.”

Not bad. Now, this is ChatGPT 3.5. You have to pay for the latest version, 4.0, and I’m obviously not going to do that. But I also asked the same question of Google Bard. As I understand it, the version that’s available to the public represents Google’s best tech. So here’s the answer:

There have been many suggestions for a fifth principle to the SPJ Code of Ethics, each reflecting evolving priorities and challenges in the journalism landscape. Here are some intriguing options to consider:

1. Foster Inclusivity and Diversity: This principle could emphasize the importance of representing marginalized voices, amplifying underrepresented perspectives, and ensuring newsrooms and reporting reflect the rich tapestry of society. It could challenge journalists to actively seek out diverse sources, challenge their own biases, and avoid harmful stereotypes.

2. Uphold Digital Responsibility: This principle would address the unique ethical challenges of the digital age, such as misinformation, disinformation, and privacy concerns. It could encourage journalists to verify information meticulously, source responsibly, and be transparent about online tools and methods.

3. Champion Civic Engagement: This principle could encourage journalists to go beyond informing the public and actively engage citizens in civic discourse. It could promote collaborative reporting, fact-checking initiatives, and platforms for audience participation, empowering the public to be informed participants in democracy.

4. Protect the Vulnerable: This principle could emphasize the need to protect vulnerable individuals and communities from exploitation or harm through reporting. It could call for sensitivity in covering sensitive topics, ensuring informed consent, and avoiding sensationalism that could endanger individuals.

5. Champion Transparency and Openness: This principle could focus on promoting transparency within the media itself, advocating for open access to information, and holding powerful institutions accountable. It could encourage self-reflection within journalism, critical examination of industry practices, and fostering a culture of ethical news consumption.

Ultimately, the “best” fifth principle is subjective and depends on the priorities and concerns of the journalism community at any given time. Each suggestion above addresses a critical aspect of ethical journalism in the 21st century, and ultimately, the decision lies with the SPJ and its members to determine which principle would best serve the mission of responsible and ethical journalism.

I hope this provides a helpful starting point for your exploration of this important topic!

This is pretty good. My students were meh about the ChatGPT answer but were impressed with Bard. If you’ve played with either of these programs, you know that the answer comes up almost instantaneously, which in itself is an impressive feat. I also think that though “Uphold Digital Responsibility” is fine as far as it goes, it should explicitly call on journalists to disclose whether AI is being used to help with reporting and writing.

The Bard of AI

I haven’t played enough with the newest version of Google Bard to know whether it’s better than ChatGPT, but Bard has some advantages. You don’t have to log in — if you’re like most people, you’re already logged in through Google. The database is more up to date: It knows that Maura Healey is governor, whereas ChatGPT still thinks Charlie Baker is in the corner office. And it provides links. My misgivings about artificial intelligence aside, I’m impressed.

Following up on how AI summarized a panel discussion

I got some great feedback on my post about using artificial intelligence to summarize a panel discussion. This is an issue I’ll continue to keep a close eye on and to experiment with. I want to surface a few comments I received and gather them together here.

  • From Ed Miller, editor of The Provincetown Independent: “I was there at the Radically Rural panel discussion, and I’m not sure I would call this summary ‘shockingly good,’ Dan. It is pretty good, but it completely misses the most important point in Victoria Bouloubasis’s presentation, which is that the Iowa poultry plant workers’ organizing efforts mostly failed to persuade local officials to help them.” OK, I guess I could have said “shockingly coherent” rather than “shockingly good.”
  • From Tom Johnson: “Any idea what it means to ’empower stereotypes’? Some species sure would help.” Johnson is referring to a section of the summary that says, “The story aimed to empower and defy stereotypes surrounding immigrant communities, contrasting with typical narratives of victimization.” I would agree that ChatGPT is no A.J. Liebling, but overall I thought we got the drift.
  • From Rebecca Rainey, writing on Threads: “Worth noting: The summaries are incredibly boring. I would much rather read your reporting and light analysis, which would tell me what matters most in the grand scheme of things.” My response is that such summaries would be more for internal newsroom use than for public consumption. The next step is to take such a summary and see if ChatGPT can transform it into a news story. I’ll be looking for a suitable event sometime in the near future.

Meanwhile, OpenAI, the company that rolled out ChatGPT a year ago, is in utter turmoil. Co-founder Sam Altman was fired over the weekend and is now moving to Microsoft. The speculation is that the OpenAI board wanted to proceed more slowly and was concerned that Altman was too dismissive of AI’s potential dangers. Presumably Microsoft will let him pick up the pace, so overall this is not good news.
