Six years ago, The Washington Post announced that it would begin producing stories about high school football games using artificial intelligence. The expanded use of Heliograf, the Post’s “in-house automated storytelling technology,” would allow the news organization “to cover all Washington, D.C.-area high school football games every week,” according to a press release. The press release linked to an example of such coverage — a mundane article that begins:
The Yorktown Patriots triumphed over the visiting Wilson Tigers in a close game on Thursday, 20-14.
The game began with a scoreless first quarter.
In the second quarter, The Patriots’ Paul Dalzell was the first to put points on the board with a two-yard touchdown reception off a pass from quarterback William Porter.
Yet now, with AI tools having improved considerably, Gannett is running into trouble for doing exactly the same thing. Writing for Axios Columbus, Tyler Buchanan reports that The Columbus Dispatch had suspended AI-generated local sports coverage after the tool, LedeAI, came in for criticism and mockery. As Buchanan observes, one such article “was blasted on social media for its robotic style, lack of player names and use of awkward phrases like ‘close encounter of the athletic kind.’”
Has AI gone backwards since 2017? Obviously not. So what went wrong? It’s hard to say, but it could be that the generative AI tools that started becoming available late last year, with ChatGPT in the forefront, are more finicky than the blunt instrument developed by the Post some years back. In theory, generative AI can write a more natural-sounding story than the robotic prose produced by Heliograf and its ilk. In practice, if an AI tool like LedeAI is trained on a corpus of material loaded with clichés, then the output is going to be less than stellar.
Clare Duffy of CNN found that Gannett’s use of AI was not limited to Columbus. Other outlets that ran LedeAI-generated sports stories included the Courier Journal of Louisville, Kentucky; AZ Central; Florida Today; and the Journal Sentinel of Milwaukee, Wisconsin. Duffy reported that one story, before it was revised, included this Grantland Rice-quality gem: “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”
There’s another dynamic that needs to be considered as well. The Washington Post, a regional newspaper under the Graham family, repositioned itself as a national digital news organization after Amazon founder Jeff Bezos bought it in 2013. Regional coverage is secondary to its mission, and if it weren’t covering high school football games with AI, then it wouldn’t be covering them at all.
By contrast, you’d think that high school sports would be central to the mission at Gannett’s local and regional dailies. Turning such coverage over to AI and then not bothering to check what the tool was publishing is exactly the sort of move you’d expect from the bottom-line-obsessed chain, though it obviously falls short of its obligation to the communities it serves.
Poynter media columnist Tom Jones, a former sportswriter, raises another issue worth pondering — the elimination of an important training ground for aspiring sports journalists:
There is still a contentious debate about how publishers should use AI. Obviously, journalists will be (and should be) upset if AI is being used to replace human beings to cover events. As someone who started his career covering high school football, I can tell you that invaluable lessons learned under the Friday night lights laid the foundation for covering events such as the Olympics and Stanley Cup finals and college football national championships in the years after that.
At a moment when AI is the hottest of topics in journalistic circles, Gannett’s botched experiment demonstrated that there is no substitute for actual reporters.
By the way, I asked ChatGPT to write a six- to eight-word headline for this post. The result: “AI-Generated Sports Coverage Faces Scrutiny: What Went Wrong?” Not bad, but lacking the specificity I was looking for.