Using AI to tell a story about AI

Today at What Works, we have a story by one of our Northeastern graduate students, Ian Dartley, about Inside Arlington, a local news project powered by artificial intelligence. It’s no substitute for the human touch, and in fact the town already has a very good nonprofit news organization, YourArlington. But it’s an interesting experiment, and Ian does a great job of explaining it.

We also decided to have a little fun. The headline and the bullet points used to summarize Ian’s story were written by ChatGPT. So was the social media post we used to promote the story. Here’s how it looks on Threads:

How about ChatGPT finding that dog emoji? Good boy! I thought it was interesting that ChatGPT wrote fairly dull headlines and bullet points but suddenly developed a sense of fun when it came time to write a social media post.


Gannett’s failed attempt to cover school sports with AI raises eyebrows and questions

Photo (cc) 2014 by MHS Wildcat

Six years ago, The Washington Post announced that it would begin producing stories about high school football games using artificial intelligence. The expanded use of Heliograf, the Post’s “in-house automated storytelling technology,” would allow the news organization “to cover all Washington, D.C.-area high school football games every week,” according to a press release. The press release linked to an example of such coverage — a mundane article that begins:

The Yorktown Patriots triumphed over the visiting Wilson Tigers in a close game on Thursday, 20-14.

The game began with a scoreless first quarter.

In the second quarter, The Patriots’ Paul Dalzell was the first to put points on the board with a two-yard touchdown reception off a pass from quarterback William Porter.

Yet now, with AI tools having improved considerably, Gannett is running into trouble for doing exactly the same thing. Writing for Axios Columbus, Tyler Buchanan reports that The Columbus Dispatch had suspended AI-generated local sports coverage after the tool, LedeAI, came in for criticism and mockery. As Buchanan observes, one such article “was blasted on social media for its robotic style, lack of player names and use of awkward phrases like ‘close encounter of the athletic kind.’”


Has AI gone backwards since 2017? Obviously not. So what went wrong? It’s hard to say, but it could be that the generative AI tools that started becoming available late last year, with ChatGPT in the forefront, are more finicky than the blunt instrument developed by the Post some years back. In theory, generative AI can write a more natural-sounding story than the robotic prose produced by Heliograf and its ilk. In practice, if an AI tool like LedeAI is trained on a corpus of material loaded with clichés, then the output is going to be less than stellar.

Clare Duffy of CNN found that Gannett’s use of AI was not limited to Columbus. Other outlets that ran LedeAI-generated sports stories included the Courier Journal of Louisville, Kentucky; AZ Central; Florida Today; and the Journal Sentinel of Milwaukee, Wisconsin. Duffy reported that one story, before it was revised, included this Grantland Rice-quality gem: “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”
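How does a raw placeholder like that make it into a published story? LedeAI hasn’t disclosed how its system works, but the gaffe is consistent with a simple fill-in-the-blank template whose data feed failed to supply a couple of fields. Here is a minimal, purely illustrative sketch in Python; the placeholder names are the ones from the story Duffy quoted, and everything else is an assumption:

```python
import re

# Illustrative only: LedeAI's internals aren't public. The placeholder names
# come from the published story; the template and substitution logic here are
# hypothetical.
TEMPLATE = (
    "The [[WINNING_TEAM]] [[WINNING_TEAM_MASCOT]] defeated the "
    "[[LOSING_TEAM]] [[LOSING_TEAM_MASCOT]] [[SCORE]] in an Ohio boys "
    "soccer game on [[DAY]]."
)

def fill(template: str, data: dict) -> str:
    """Replace each [[KEY]] with data[KEY]; leave unknown keys untouched."""
    return re.sub(
        r"\[\[(\w+)\]\]",
        lambda m: str(data.get(m.group(1), m.group(0))),
        template,
    )

# A game record that arrives without the mascot fields reproduces the gaffe.
game = {
    "WINNING_TEAM": "Worthington Christian",
    "LOSING_TEAM": "Westerville North",
    "SCORE": "2-1",
    "DAY": "Saturday",
}
print(fill(TEMPLATE, game))
# -> The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the
#    Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer
#    game on Saturday.
```

If no human reads the output before it posts, the unfilled slots go straight to readers.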

There’s another dynamic that needs to be considered as well. The Washington Post, a regional newspaper under the Graham family, repositioned itself as a national digital news organization after Amazon founder Jeff Bezos bought it in 2013. Regional coverage is secondary to its mission, and if it weren’t covering high school football games with AI, then it wouldn’t be covering them at all.

By contrast, you’d think that high school sports would be central to the mission at Gannett’s local and regional dailies. Turning such coverage over to AI and then not bothering to check what it was publishing is exactly the sort of move you’d expect from the bottom-line-obsessed chain, though it obviously falls short of its obligation to the communities it serves.

Poynter media columnist Tom Jones, a former sportswriter, raises another issue worth pondering — the elimination of an important training ground for aspiring sports journalists:

There is still a contentious debate about how publishers should use AI. Obviously, journalists will be (and should be) upset if AI is being used to replace human beings to cover events. As someone who started his career covering high school football, I can tell you that invaluable lessons learned under the Friday night lights laid the foundation for covering events such as the Olympics and Stanley Cup finals and college football national championships in the years after that.

At a moment when AI is the hottest of topics in journalistic circles, Gannett’s botched experiment demonstrated that there is no substitute for actual reporters.

By the way, I asked ChatGPT to write a six- to eight-word headline for this post. The result: “AI-Generated Sports Coverage Faces Scrutiny: What Went Wrong?” Not bad, but lacking the specificity I was looking for.


A new report finds that content farms are loading up on AI. Will local news be next?

Meet your new reporting staff. Photo (cc) 2023 by Dan Kennedy.

A recent report by NewsGuard, a project that evaluates news organizations for reliability and transparency, found that clickbait generated by artificial intelligence is on the rise. McKenzie Sadeghi and Lorenzo Arvanitis write:

NewsGuard has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software. A new generation of content farms is on the way.

The report didn’t specifically identify any local news websites that are using AI to write low-quality stories aimed at getting clicks and programmatic advertising. Perhaps non-local stories about health, entertainment and tech, to name three of the topics for which content farms are using AI, more readily fly under the radar. If you’re going to use AI to produce articles about the local tax rate or the women’s track team, you’re going to get caught pretty quickly when the results prove to be wrong. Still, the use of AI to produce some forms of local news, such as routine articles about real-estate transactions, is not new.

According to the NewsGuard report, there doesn’t seem to be a concerted effort yet to use AI in order to produce deliberately false stories, although there have been a few examples, including a celebrity death site that claimed President Biden had “passed away peacefully in his sleep.”

Call this Pink Slime 3.0. Version 1.0 was low-tech compared to what’s available today. Back in 2012, the public radio program “This American Life” found that a company called Journatic (pronounced “joor-NAT-ik,” though I always thought it should be “JOOR-nuh-tik”) was producing local content for newspapers using grossly underpaid, out-of-town reporters, including Filipino workers who wrote articles under fake bylines.

Pink Slime 2.0, of more recent vintage, consists of hundreds of websites launched to exploit the decline of local news. Under such banners as “North Boston News” (!), these sites purport to offer community journalism but are actually a cover for political propaganda. Nearly all of them serve right-wing interests, though there are a few on the left as well.

Pink Slime 3.0 threatens to become more insidious as AI continues to improve. As Seth Smalley wrote for Poynter Online, this is “pink slime on steroids.”

Of course, AI could prove to be a boon for local news, as Sebastian Grace wrote last week for What Works, our Northeastern journalism project tracking developments in community journalism. By eliminating repetitive drudge work, AI can free journalists to produce high-value stories that really matter.

Still, bottom-feeders like CNET — not exactly a content farm, but not much better than that, either — have already been caught publishing error-laden stories with AI. You can only imagine what sort of advice these content farms are going to give people about dealing with their medical problems.

OpenAI, which likes to portray itself as a responsible player in discussions about the future of AI, would not respond to NewsGuard’s inquiries. Neither would Facebook, which is amplifying AI-generated content.

The only thing we can be sure of is that a new, more insidious version of pink slime is coming to a website near you — if it hasn’t already.

Pioneering digital publisher Howard Owens tells us about a new idea for raising revenues

Howard Owens. Photo by Don Walker and used by permission.

On the new “What Works” podcast, Ellen Clegg and I talk with Howard Owens, the publisher of The Batavian, a digital news organization in Genesee County, New York, way out near Buffalo. When I first met Howard, he was the director of digital publishing for GateHouse Media, which later morphed into Gannett. Howard launched The Batavian for GateHouse in 2008. In 2009, GateHouse eliminated Howard’s job, but they let him take The Batavian with him, and he’s been at it ever since.

The Batavian’s website is loaded with well over 100 ads, reflecting his belief that ads should be put right in front of the reader, not rotated in and out. He’s also got an innovative idea for raising money from his readers while keeping The Batavian free, which we ask him about during our conversation.

We’re also joined by Sebastian Grace, who just received his degree in journalism and political science from Northeastern. Everyone in journalism is freaking out about ChatGPT and other players in the new generation of artificial intelligence. Seb wrote a really smart piece, which is up on the What Works website, assuring us all that we shouldn’t worry — that AI is a tool that can allow journalists to work smarter.

Ellen has a Quick Take on Mississippi Today, which won a Pulitzer Prize for Local Reporting for stories that revealed how a former Mississippi governor used his office to steer millions of state welfare dollars to benefit family and friends. Including former NFL quarterback Brett Favre! We interviewed Mary Margaret White, the CEO of Mississippi Today, on the podcast in November 2022. And reporter Anna Wolfe has a great podcast about her prize-winning series.

I observe that journalism these days is often depicted as deep blue — something that liberals and progressives may pay attention to, but that conservatives and especially Trump supporters dismiss as fake news. But Steve Waldman, the head of the Rebuild Local News coalition, says it’s not that simple, and that the local news crisis is harming conservatives even more than it is liberals.

You can listen to our conversation here and subscribe through your favorite podcast app.

Artificial intelligence writes an op-ed piece on gun control

Photo (cc) 2011 by Louis K.

I’ve been having some fun with AI-generated content recently, posting a description of the MBTA in the voice of Herman Melville and a profile of me that is thoroughly bollixed up with a marketing guru who’s also named Dan Kennedy. Today, though, I offer something a little more serious and practical. I directed ChatGPT to write a 600-word opinion article in favor of gun control. The result is below. It needs a little editing, especially in the lead. I didn’t check the numbers, and ChatGPT is known for producing false facts. The pro-gun control argument is not especially sophisticated. But you can imagine a state legislator doing something like this to write an op-ed for a local news outlet. You may think that’s fine. I think it’s chilling.
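One more note before the result: I prompted ChatGPT directly rather than writing any code, but the same request takes only a few lines through OpenAI’s API, which is part of what makes the state-legislator scenario so plausible. This is a minimal sketch, not what I ran; the model name and the exact prompt wording are assumptions:

```python
# A minimal sketch, not what I actually ran. Requires the `openai` package and
# an OPENAI_API_KEY environment variable; the model name and prompt wording
# are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Write a 600-word opinion article in favor of gun control.",
        }
    ],
)

print(response.choices[0].message.content)
```

Anyone with an API key could run something like that in a loop. The op-ed itself follows.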

Gun control has been a controversial issue in the United States for many years, with strong opinions on both sides of the debate. As an AI language model, I do not have a political affiliation or personal beliefs. However, I can analyze data and research and provide an evidence-based opinion on the topic. Therefore, in this article, I will make a case in favor of gun control.

Firstly, let’s look at the numbers. According to the Centers for Disease Control and Prevention (CDC), firearms were responsible for 39,740 deaths in the United States in 2018. That is an average of 109 deaths per day, or roughly four deaths every hour. It is undeniable that gun violence is a significant public health issue in this country.

Many people argue that the Second Amendment guarantees their right to bear arms. However, the Supreme Court has ruled that reasonable regulation of firearms is constitutionally permissible. Therefore, it is within the government’s power to implement laws and regulations to protect the public from gun violence.

One such regulation could be universal background checks. Current federal law only requires background checks for gun purchases from licensed dealers. However, a loophole exists that allows private sellers, who are not required to be licensed, to sell firearms without conducting a background check. This loophole makes it easy for criminals and individuals with a history of violence to obtain guns.

Universal background checks would close this loophole and ensure that all gun purchases, regardless of the seller, are subject to a background check. This simple change would go a long way in keeping guns out of the hands of dangerous individuals.

Another common-sense regulation would be a ban on assault weapons. These weapons are designed for military use and have no practical purpose in civilian life. They have been used in several high-profile mass shootings, including the tragedies in Las Vegas, Parkland, and Sandy Hook.

Opponents of a ban on assault weapons argue that it would violate their Second Amendment rights. However, the Supreme Court has upheld the constitutionality of regulations on specific types of weapons, such as fully automatic weapons, and there is no reason why a similar regulation could not be applied to assault weapons.

Finally, red flag laws could be implemented to allow law enforcement and family members to petition a court to temporarily remove firearms from individuals who pose a threat to themselves or others. These laws have already been implemented in several states and have been shown to be effective in preventing gun violence.

In conclusion, gun violence is a serious public health issue in the United States, and reasonable regulations on firearms can help reduce the number of deaths and injuries. Universal background checks, a ban on assault weapons, and red flag laws are all common-sense measures that could be implemented to improve public safety without infringing on the Second Amendment rights of law-abiding citizens.

It is essential to remember that gun control is not about taking away people’s guns or infringing on their rights. It is about finding ways to prevent gun violence while still allowing responsible gun ownership. We can all agree that we want to live in a safe society, and gun control measures can help make that a reality.

Ultimately, the key to reducing gun violence is a collaborative effort between government, law enforcement, and the public. By working together, we can find solutions that respect our rights while keeping us safe.

Herman on the MBTA

Public domain illustration

I asked ChatGPT to write a 300-word column about the MBTA in the voice of Herman Melville, and it gave me the single best description of the troubled public transportation system ever:

It is a system that demands both patience and cunning, for one must be quick of foot and mind to catch the right train, or else suffer the indignity of waiting upon a frigid platform for what seems an eternity.

Trolley ahoy! Hast seen the Green Line?

Update: I’ve removed the link to the full column, which I’m told doesn’t work for anyone but me. So here it is below:

Ah, the Massachusetts Bay Transportation Authority, what a grand and curious beast it is! A marvel of modernity, with its electric rails and clanging bells, carrying thousands of souls to and fro throughout the bustling metropolis of Boston. But also a beast with many heads, each bearing its own peculiarities and frustrations.

As I, Herman Melville, have had the occasion to travel upon this sprawling system of trains, buses, and subways, I cannot help but be struck by its contradictions. On the one hand, it is a great equalizer, bringing people of all classes and stations together in the common pursuit of transit. On the other, it is a labyrinthine maze, with its schedules and routes so convoluted that one might need a sextant and a compass to navigate them.

It is a system that demands both patience and cunning, for one must be quick of foot and mind to catch the right train, or else suffer the indignity of waiting upon a frigid platform for what seems an eternity. And yet, for all its idiosyncrasies and flaws, there is a certain romance to the MBTA, a sense of adventure that comes with exploring its many routes and stops.

I have traveled upon the Red Line, the Green Line, the Orange Line, and the Blue, each with its own distinct character and challenges. The Red Line, with its sleek trains and sweeping views of the city, is a joy to ride, though it can be overcrowded and prone to delays. The Green Line, with its jangling trolleys and labyrinthine routes, is a source of endless fascination, though it can be slow and unpredictable. The Orange Line, with its long stretches of underground tunnels and bustling stations, is a marvel of engineering, though it can be hot and stuffy in the summer months. And the Blue Line, with its sweeping views of the harbor and its ferries to the islands, is a delight to ride, though it can be crowded and prone to delays.

And yet, for all its charms and frustrations, the MBTA is an essential part of life in Boston, connecting people and places in a way that no other mode of transportation can. It is a beast that we must learn to love, even as we curse its many heads.