
The Washington Post’s plan to bring in a plethora of outside opinion writers, edited by artificial intelligence, is being widely mocked, as it should be. But the idea is not new — at least the non-AI part.
A decade ago, the Post started publishing something called PostEverything, which the paper called “a digital daily magazine for voices from around the world.” Here’s how the 2014 rollout described it:
In PostEverything, outsiders will entertain and inform readers with fresh takes, personal essays, news analyses, and other innovative ways to tell the stories everyone is talking about — and the ones they haven’t yet heard.
PostEverything went PostNothing sometime in 2022, but now it’s back. According to Benjamin Mullin of The New York Times (gift link), the revived feature, known internally as Ripple, will comprise opinion writing from other newspapers, independent writers on Substack and, eventually, nonprofessional writers. Ripple will be digital-only and will be offered outside the Post’s paywall.
What’s hilarious is that Mullin contacted several of the partners the Post is considering, such as The Salt Lake Tribune and The Atlanta Journal-Constitution, and was told they’re not interested. Another potential partner was identified as Jennifer Rubin, who quit the Post over owner Jeff Bezos’ meddling and started her own publication called The Contrarian. Mullin writes: “When told that she had been under consideration at all, Ms. Rubin burst out in laughter. ‘Did they read my public resignation letter?’ she said.”
The AI angle is being roundly ridiculed as well. Sometime after Ripple launches, the Post intends to seek out nonprofessional writers to broaden its appeal. An AI editing tool called Ember will be employed to spruce up their writing. Mullin explains:
Early mock-ups of the tool feature a “story strength” tracker that tells writers how their piece is shaping up, with a sidebar that lays out basic parts of story structure: “early thesis,” “supporting points” and “memorable ending.” A live A.I. assistant would provide developmental questions, with writing prompts inviting authors to add “solid supporting points,” one of the people said.
Good Lord.
Ripple does not strike me as a bad idea except for the AI part. PostEverything was a success, and there’s no reason this can’t be, too. The problem is that Bezos has so damaged the reputation of the Post’s opinion section that anything the Post tries now is greeted with skepticism.
Mea culpa x2
After initially publishing a brief statement about how an AI-generated guide to summer books that don’t actually exist made its way into the Chicago Sun-Times, Chicago Public Media chief executive Melissa Bell has written a much longer mea culpa, posted on May 29. The Philadelphia Inquirer, which ran the same fake content, has published an acknowledgment as well.
The Sun-Times is a nonprofit that’s part of Chicago Public Media, a broadcasting operation. To recap, the Sun-Times and the Inquirer ran a Sunday supplement last month from King Features, part of Hearst, that contained some fake AI-generated content. As Bell writes, other papers would likely have carried the supplement the following weekend if the Sun-Times and the Inquirer had not been called out.
“The summer section was intended to be a supplemental value to our subscribers alongside our own journalism,” she writes. “Instead it detracted and distracted from our work.” She adds:
So, what will we take away from this? First, Chicago Public Media will not back away from experimenting and learning how to properly use AI. We will not be using AI agents to write our stories, but we will work to find ways to use AI technology to help our work and serve our audiences. We’ve started that work recently, in part thanks to a grant from The Lenfest Institute that helps fund an AI fellow to work alongside our journalists on responsible experiments.
Ah, yes, the Lenfest Institute. That’s the nonprofit organization that owns The Philadelphia Inquirer. Lenfest has gone all-in on AI, and it recently announced that its AI Collaborative and Fellowship Program was welcoming five new members, including The Boston Globe.
I should note that this has nothing to do with the Inquirer’s decision to run a summer supplement from King Features. As with the Sun-Times, Inquirer executives had no way of knowing it contained AI slop.
On May 21, the Inquirer published a news story about the incident, quoting its editor and senior vice president, Gabriel Escobar, calling the incident a “violation of our own internal policies and a serious breach.”
Inquirer columnist Will Bunch mentioned his paper’s embarrassment in a June 1 column dedicated to bashing AI. He wrote:
There’s been little or no public debate about the lack of AI regulations, even as the warning of a human job apocalypse isn’t the only way that programs like OpenAI’s ChatGPT are threatening the planet. The massive energy demand for powering these supercomputers makes little sense amid a crisis of global warming, and overuse of AI by students could turn the brains of our younger generations into pulp. Yet in America’s Summer of Hallucination, no one is seeing through the purple haze.
This isn’t good
On Tuesday I wrote about Business Insider’s plans to eliminate 21% of its staff and to embrace AI as a way of producing more churnalism, even as the outlet tries to convince its audience to pay for digital subscriptions.
There’s no question that AI holds the potential to eliminate drudgery from certain types of newsroom work, such as production-related tasks, writing social media posts (but check them!) and the like. The Midcoast Villager in Camden, Maine, is using AI to keep track of what’s going on in 43 communities so that actual reporters can follow up.
Too often, though, AI is being used as a shortcut and a way to cut costs. That is no substitute for the relationships that journalists build with their communities. What’s worse, AI remains unreliable. We are not in a good place.