The triumph of hope over experience: The latest on how AI is not solving the local news crisis

Illustration produced by AI using DALL-E

This past weekend I listened to a bracingly entertaining conversation that the public radio program “On the Media” conducted with tech journalist Ed Zitron. Co-host Brooke Gladstone had billed it as a chance for Zitron to make sense out of DeepSeek, the new Chinese artificial-intelligence software that purports to do what ChatGPT and its ilk can do for a fraction of the cost — and, presumably, while using a fraction of the electric power burned by American AI companies.

But it was so much more than that. Maybe you’re familiar with Zitron. I wasn’t. As I learned, he is a caustic skeptic of American AI in general. In fact, he doesn’t even regard the large language models (LLMs) that we’ve come to think of as AI as the real thing, saying they are nothing but an error-prone scam that is attracting vast sums of venture capital but will never make any money. Here’s a taste:

The real damage that DeepSeek’s done is they’ve proven that America doesn’t really want to innovate. America doesn’t compete. There is no AI arms race. There is no real killer app to any of this. ChatGPT has 200 million weekly users. People say that’s a sign of something. Yes, that’s what happens when literally every news outlet, all the time, for two years, has been saying that ChatGPT is the biggest thing without sitting down and saying, “What does this bloody thing do and why does it matter?” “Oh, great. It helps me cheat at my college papers.”

And this:

When you actually look at the products, like OpenAI’s operator, they suck. They’re crap. They don’t work. Even now the media is still like, “Well, theoretically this could work.” They can’t. Large language models are not built for distinct tasks. They don’t do things. They are language models. If you are going to make an agent work, you have to find rules for effectively the real world, which AI has proven itself. I mean real AI, not generative AI that isn’t even autonomous is quite difficult.

As you can tell, Zitron has a Brit’s gift for vitriol, which made the program all the more compelling. Now, I am absolutely no expert in AI, but I was intrigued by Zitron’s assertion that LLMs are not AI, and that real AI is already working well in things like autonomous cars. (Really?) But given that we just can’t keep AI — excuse me, LLMs — from infesting journalism, I regarded Gladstone’s interview with Zitron as a reason to be hopeful. Maybe the robots aren’t going to take over after all.

At the same time, though, AI journalism (or, should I say, AI “journalism”) keeps rearing its ugly head. Last week Nieman Lab published a long article by Andrew Deck about a network of AI-based local newsletters that are parasitically feeding off actual reporting from legitimate news organizations. There are 355 of them in 47 states, and they include fake testimonials from folks like “Matthew K.” and “Michael H.,” who pop up in town after town.

“Local news providers appreciate our work promoting their best local content for free, and often seek out ways for us to promote even more of their content,” founder Matthew Henderson told Deck. To which Rodney Gibbs of the National Trust for Local News replied, “His claim is, frankly, horseshit. The suggestion that he’s helping news deserts is absurd.” The National Trust owns more than 60 papers in Colorado, Maine and Georgia, and its properties are among those that Henderson’s operation has so selflessly promoted.

And here’s some more about AI journalism in Maine: Max Tani of Semafor reports that Reade Brower, the Maine publisher who owned the Portland Press Herald and other papers before the National Trust purchased them in 2023, is adding an AI component to his new venture, the Midcoast Villager. I wrote about Reade’s new venture last August. Tani writes:

The publication will work alongside a small book publisher, and will run events and writers retreats out of a hotel and adjacent Villager Café, a coffee shop that will operate in conjunction with the newspaper, whose offices sit above the café.

The Midcoast Villager will also be the first client of Civic Sunlight, a Maine-based AI startup founded by the former CTO of the Atlantic and a former executive at Fox. The company plans to partner with media companies by using AI to transcribe and summarize local meetings and analyze them for trends.

Brower is being joined by a prominent journalist, NBC News political reporter Alex Seitz-Wald. When Brower got involved with the papers that have been merged into the Midcoast Villager, it sounded like a retirement project. It’s starting to look like more than that.

So what about Massachusetts? For a while, Arlington was served by an experimental AI news site called Inside Arlington. A Northeastern graduate student, Ian Dartley, wrote about the project for What Works, our website about the future of local news. That project, though, appears to have been suspended, as nothing new has been posted since June 2024. Arlington is already well served by YourArlington, a nonprofit news project with an actual editor and reporters.

A similar AI-based project, though, has been launched south of Boston in the form of a Substack newsletter. South Shore News, according to its About page, “is an experiment in AI journalism. Transcripts generated from community access meetings are consolidated into news articles and delivered to your inbox.” The message also includes this helpful warning: “Generative AI still occasionally hallucinates. If you see anything incorrect please reach out and let us know.”

South Shore News appears to be legitimate. According to his Facebook page, Justin Evans, the founder, is a member of the Whitman select board, a state employee and a graduate of Northeastern. South Shore News’ coverage area includes Whitman, and stories from that community carry a disclosure about Evans’ service to the town.

In addition, an AI-generated news site called All About Town has popped up in Marblehead, which is already home to two independent local news outlets, the Marblehead Current and the Marblehead Weekly News. All About Town was created by Joel Lederman, a high school junior who is “piloting the platform he developed in his home community.” There’s also this:

All About Town provides all information in a good faith effort to improve community engagement and awareness. However, text is generated by artificial intelligence, and we make no representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, reliability, availability, or completeness of any information provided herein. Use of the site and reliance on any information on the site is solely at each individual’s own risk.

From my perspective, there are two major problems with AI-generated (or, as Zitron would say, LLM-generated) journalism. The first is that it’s not reliable, as the warning messages at both South Shore News and All About Town attest. Although there are real human beings at the switch to make sure that these sites don’t hallucinate a murder story, as once happened in New Jersey, they are nevertheless not the place you want to go for an accurate summary of what happened at a local government meeting.

The other problem is more significant because it goes to the heart of what journalism is. Yes, we rely on journalism for information. But at root, it’s the way that members of a community come together, talk among themselves, and learn to solve problems cooperatively. You can’t do that with AI.

There is simply no substitute for actual human beings going out and telling a community’s stories. That, not robot-generated news that may or may not be true, is what is ultimately going to help us overcome the polarization destroying our society.



2 thoughts on “The triumph of hope over experience: The latest on how AI is not solving the local news crisis”

  1. I’ve been following Ed a little while on bsky, and I have deeply techie friends who basically share his skepticism. And what has been the killer app?? So far they’re sinking hundreds of billions of investment money to get smaller and smaller increments of improvement, and still they can’t sell enough of the product to make even a tiny dent in what they spent. They’re hype men, and that’s it. Bubble’s about to burst, which DeepSeek is going to accelerate.

    Tech companies need to stop thinking they’re always going to have exponential growth forever, and admit they’re a mature industry with smaller margins, and less excitement, like the insurance industry.

  2. Google feeds me Generative AI with every search. I skim through it but ignore the results almost every time. It’s often inaccurate to the point of pure falsehood. It’s almost as much of a waste of time as the sponsored results that clutter the next segment of Google searches. Google got to the top of search engine preferences by being accurate and efficient. Now I can’t wait for another thing to come along and turn Google into the next Altavista. Blame AI fever.
