Playing with AI: Can Otter and ChatGPT produce a good-enough account of a podcast interview?

This post will no doubt have limited appeal, but a few readers might find it interesting. I’ve been thinking about how to produce summaries and news stories based on the podcast that Ellen Clegg and I host, “What Works: The Future of Local News.” The best way would be to pay a student to write it up. But is it also a task that could be turned over to AI?

Purely as an experiment, I took our most recent podcast — an interview with Scott Brodbeck, founder and CEO of Local News Now, in the Virginia suburbs of Washington, D.C. — and turned it over to the robots.

I started by downloading the audio and feeding it into Otter, a web-based transcription service that uses AI to guess at what the speaker might actually be saying. Once I had a transcript, I took a part of it — our conversation with Brodbeck, eliminating the introduction and other features — and fed it into ChatGPT twice, once asking it to produce a 600-word summary and then again to produce a 600-word news story. Important caveat: I did very little to clean up the transcript and did not edit what ChatGPT spit out.
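The workflow just described — raw transcript in, fixed-length summary or news story out — can be sketched in a few lines of Python. To be clear, this is a hypothetical reconstruction, not the process actually used here: the post worked through the ChatGPT web interface, and the model name, word target, and direct call to OpenAI's chat completions endpoint are my assumptions.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_prompt(transcript: str, words: int = 600, kind: str = "summary") -> str:
    """Compose the instruction described in the post: ask for a
    fixed-length summary or news story based on the raw transcript."""
    return (f"Write a roughly {words}-word {kind} of the following "
            f"podcast transcript:\n\n{transcript}")

def generate(transcript: str, api_key: str, kind: str = "summary") -> str:
    """Send the prompt to OpenAI's chat completions endpoint and
    return the model's text. Requires a real API key; not run here."""
    body = json.dumps({
        "model": "gpt-3.5-turbo",  # assumption: the free-tier model of the period
        "messages": [{"role": "user",
                      "content": build_prompt(transcript, kind=kind)}],
    }).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `generate(transcript, key, kind="news story")` would correspond to the second pass described above, with everything else unchanged.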

The results were pretty good. I’m guessing it would have been better if I had been using a paid version of ChatGPT, but that would require, you know, money. I’d say that what AI produced would be publishable if some human-powered editing were employed to fix it up. Anyway, here are the results.

The transcript

Q: Scott, so many of the projects that we have looked at are nonprofit, and that trend seems to be accelerating. In fact, we love nonprofit news, but we also worry that there are limits to how much community journalism can be supported by philanthropy. So your project is for profit. How have you made that work? Dan, do you think for profit? Digital only, local news can thrive in other parts of the country as well.

A conversation with ChatGPT, given a voice by Siri, shows AI’s promise and limitations

Mike Blinder, publisher of the trade journal Editor & Publisher and the host of its vodcast, “E&P Reports,” tried something unusual recently. He hooked up ChatGPT to Siri and conducted an interview about issues related to artificial intelligence and journalism. The result is like a smarter version of Eliza, a 1960s-vintage AI program that could carry on what seemed like a realistic conversation.

Blinder has become something of an AI evangelist, using it to automate some of E&P’s editorial processes and asking ChatGPT to write bios of guests on “E&P Reports.” But AI has a long way to go in terms of carrying on an intelligent conversation that’s also spontaneous.

For instance, as you’ll hear, Blinder’s approach in interviewing ChatGPT is to lead the witness with long, information-packed questions that the chatbot can then use to scour the internet and come back with a plausible-sounding answer.

I also detected at least one error. In response to a question about the possibility that AI-powered search engines will harm news organizations by removing any incentive to click through, ChatGPT says:

When AI tools like Perplexity or Chat GPT provide detailed summaries of content without directing traffic back to the original source, it creates a real challenge for news publishers who rely on page views for ad revenue and subscriptions. I completely understand the concern.

The problem is that Perplexity actually does cite its sources, which differentiates it from ChatGPT and other competitors. It’s why I suggest to my students that Perplexity is a useful tool as long as they click through, and it’s why I use it as well.

Nevertheless, Blinder’s close encounter of the robotic kind is fun and interesting. You can watch it on YouTube or subscribe on any podcast app. Blinder’s been a guest on our “What Works” podcast about local news twice, and Ellen Clegg and I were on “E&P Reports” earlier this year to talk about our book, “What Works in Community News.”

Publishers sound alarm over Google’s AI search plans. But let’s dial it back a bit.

What does AI say about RFK Jr.’s anti-vax activism? Photo (cc) 2023 by Gage Skidmore.

The latest freakout inspired by artificial intelligence has hit the news business. On Tuesday, Google announced that it will merge its AI tool, Gemini, with its search engine. What that means for publishers is that someone seeking information about, say, Robert F. Kennedy Jr.’s anti-vax crusade will receive an answer right in Google rather than being prompted to click to a news source. Let’s try it:

First, I entered <Robert F. Kennedy Jr. anti-vaccine activism> in Google Search. The top five results were from The Associated Press (two), Scientific American, the Annenberg Public Policy Center and the PBS “NewsHour.” In other words, good, respectable sources of independent journalism.

Next, I entered the same search at Gemini and got this: “I’m still learning how to answer this question. In the meantime, try Google Search.” OK! But then I tried the same thing at the free version of ChatGPT, which provided the following answer:

Robert F. Kennedy Jr., son of former U.S. Senator and Attorney General Robert F. Kennedy and nephew of President John F. Kennedy, has been known for his anti-vaccine activism. He has been a prominent figure in the anti-vaccine movement, particularly regarding his views on the safety of vaccines and their alleged connection to autism.

Kennedy founded the Children’s Health Defense organization, which has been criticized for spreading misinformation about vaccines and promoting unfounded claims about their dangers. He has spoken at numerous events and rallies, advocating against vaccination and promoting alternative views on the subject.

It’s important to note that the overwhelming scientific consensus supports the safety and effectiveness of vaccines, and the alleged link between vaccines and autism has been thoroughly debunked by multiple scientific studies. Kennedy’s activism has been met with criticism from public health officials and medical professionals, who argue that his views are not supported by scientific evidence and pose a danger to public health.

I think we can agree that’s pretty good — and it includes exactly zero links to any of its sources, depriving some excellent news organizations of web traffic. I should note that, according to CNN media reporter Oliver Darcy, Google is promising to include links in Gemini, so it may end up looking something like Perplexity, an AI-powered search engine that provides plain-English answers and reliable links.

When I tried our RFK Jr. search at Perplexity, I got a very good answer — as good as ChatGPT, if not better — along with links to Wikipedia, The AP, the “NewsHour” and NBC News. But would I click on them? At the moment, I would, because of AI’s well-known proclivity for providing fake answers. At some point, though, that problem will be solved.

“This will be catastrophic to our traffic, as marketed by Google to further satisfy user queries, leaving even less incentive to click through so that we can monetize our content,” Danielle Coffey, chief executive of the News/Media Alliance, told Darcy. The alliance represents some 2,000 news publishers.

I also took a look at the internal metrics of the only news site I have access to: this one. According to Google Analytics, over the past month Media Nation received 40% of its traffic from “organic search” — that’s traffic from search engines, nearly all Google, that I didn’t boost by paying for ads on Google. And yes, that’s a lot. Next up was direct traffic (25.6%), organic social (21.2%) and referrals (12.1%), which are links from other websites.

Now, I happen to think that some of the lamentations we’re hearing from publishers are overblown. It’s fine to complain that Google is taking steps that will result in fewer clicks on your website. But how much money does that really bring in? These days, you’re likely to hit a paywall when you try to click through from a search. Programmatic ads on news sites are terrible and bring in very little money.

In the end, there is no substitute for building a relationship with your audience. For-profit publishers need to persuade their readers to become digital subscribers and local businesses to advertise. Nonprofits must convince their audience to become voluntary supporters and to raise money from underwriters, foundations, events and whatever else they can think of.

To use Media Nation as an example again: I currently have more than 2,300 subscribers who receive new posts by email. I consider those to be my most engaged readers. I don’t do much to monetize this site, although I have a modest paid supporter program, which, needless to say, you are invited to join. The future of news, though, is being built right now by serving our communities — not through Google search.


Students and AI weigh in on updating the SPJ’s Code of Ethics

Say hello to the new ethics committee. Photo (cc) 2012 by D J Shin.

The Society of Professional Journalists’ Code of Ethics identifies four broad principles: Seek Truth and Report It; Minimize Harm; Act Independently; and Be Accountable. Each of them is fleshed out in some detail. You will note that the slippery concept of objectivity is not mentioned. These days most journalists believe, as I do, that truth-seeking, independence and fairness are the standards to which we should aspire, and that holding up objectivity as a goal only leads to endless debates over what that actually means.

On Wednesday evening, I asked students in my graduate ethics seminar to come up with a fifth principle. There are only eight students in the class, so I had them work in teams of two. I also asked the gremlins of artificial intelligence to weigh in. But first, here’s what my students came up with; I’ve edited them very lightly.

Listen to Others 

  • Be open to feedback
  • Promote cooperative relationships between journalists and the public
  • Be receptive to events in the world
  • Be open to a community that is not your own

Admit Limitations, Accept Your Own Bias

  • Don’t hide your own humanity
  • Accept that you can’t be objective
  • Accept you may not be able to follow all ethics codes all the time
  • Welcome criticism

Respect Relationships with Sources 

  • Don’t get so close with sources that personal feelings might get in the way of reporting
  • Protect them if they will be harmed for their sharing of the truth
  • Provide full disclosure to the sources of what the article is about and where it is being published

Create Spaces for Untold Stories 

One of the main priorities of a journalist should be to be prepared to fight for those who struggle. Journalists should:

  • Be righteous
  • Be brave
  • Create a sense of justice
  • Not be afraid to correct mistakes within their stories
  • Report on marginalized communities and use their personal resources to continue sharing information

I think these are terrific. The SPJ Code provides a pretty wide umbrella, so it’s not easy to come up with a fifth principle that isn’t already covered. But these proposals represent some fresh thinking. The “full disclosure” idea is important because when we’re interviewing people who aren’t accustomed to dealing with reporters, they may not understand the full implications of having their name, their quotes and in some cases their photos put before the public. And I love “Be brave.” That’s obviously true for, say, war correspondents. But journalists also have to deal with a rising wave of harassment and threats, both online and in real life. If they find that they’re unable to stand up to that, then they are not going to be able to serve their community. (They also shouldn’t be expected to put up with harassment, and newsrooms need to develop anti-harassment training and policies.)

Now … what about AI? While my students were working, I took the lazy way out and asked a couple of AI programs to come up with a fifth principle. Here’s the question I asked: “The Society of Professional Journalists’ Code of Ethics has four principles: Seek Truth and Report It; Minimize Harm; Act Independently; Be Accountable. What would be a worthwhile fifth principle?” Once I finished playing, I deleted the answers and then asked them again so my students could watch everything unfold in real time. And here’s the answer from ChatGPT:

The addition of a fifth principle to the Society of Professional Journalists’ Code of Ethics could focus on promoting transparency. Here’s a suggestion:

“Foster Transparency: Journalists should strive to maintain transparency in their reporting processes, disclosing sources, conflicts of interest, and methodologies to enhance the credibility and trustworthiness of their work. By being transparent, journalists contribute to a more informed public and uphold the integrity of journalism.”

Not bad. Now, this is ChatGPT 3.5. You have to pay for the latest version, 4.0, and I’m obviously not going to do that. But I also asked the same question of Google Bard. As I understand it, the version that’s available to the public represents Google’s best tech. So here’s the answer:

There have been many suggestions for a fifth principle to the SPJ Code of Ethics, each reflecting evolving priorities and challenges in the journalism landscape. Here are some intriguing options to consider:

1. Foster Inclusivity and Diversity: This principle could emphasize the importance of representing marginalized voices, amplifying underrepresented perspectives, and ensuring newsrooms and reporting reflect the rich tapestry of society. It could challenge journalists to actively seek out diverse sources, challenge their own biases, and avoid harmful stereotypes.

2. Uphold Digital Responsibility: This principle would address the unique ethical challenges of the digital age, such as misinformation, disinformation, and privacy concerns. It could encourage journalists to verify information meticulously, source responsibly, and be transparent about online tools and methods.

3. Champion Civic Engagement: This principle could encourage journalists to go beyond informing the public and actively engage citizens in civic discourse. It could promote collaborative reporting, fact-checking initiatives, and platforms for audience participation, empowering the public to be informed participants in democracy.

4. Protect the Vulnerable: This principle could emphasize the need to protect vulnerable individuals and communities from exploitation or harm through reporting. It could call for sensitivity in covering sensitive topics, ensuring informed consent, and avoiding sensationalism that could endanger individuals.

5. Champion Transparency and Openness: This principle could focus on promoting transparency within the media itself, advocating for open access to information, and holding powerful institutions accountable. It could encourage self-reflection within journalism, critical examination of industry practices, and fostering a culture of ethical news consumption.

Ultimately, the “best” fifth principle is subjective and depends on the priorities and concerns of the journalism community at any given time. Each suggestion above addresses a critical aspect of ethical journalism in the 21st century, and ultimately, the decision lies with the SPJ and its members to determine which principle would best serve the mission of responsible and ethical journalism.

I hope this provides a helpful starting point for your exploration of this important topic!

This is pretty good. My students were meh about the ChatGPT answer but were impressed with Bard. If you’ve played with either of these programs, you know that the answer comes up almost instantaneously, which in itself is an impressive feat. I also think that though “Uphold Digital Responsibility” is fine as far as it goes, it should call on journalists to state explicitly whether AI is being used to help with reporting and writing.


The Bard of AI

I haven’t played enough with the newest version of Google Bard to know whether it’s better than ChatGPT, but Bard has some advantages. You don’t have to log in — if you’re like most people, you’re already logged in through Google. The database is more up to date: It knows that Maura Healey is governor, whereas ChatGPT still thinks Charlie Baker is in the corner office. And it provides links. My misgivings about artificial intelligence aside, I’m impressed.


Following up on how AI summarized a panel discussion

I got some great feedback on my post about using artificial intelligence to summarize a panel discussion. This is an issue I’ll continue to keep a close eye on and to experiment with. I want to gather a few of those comments here.

  • From Ed Miller, editor of The Provincetown Independent: “I was there at the Radically Rural panel discussion, and I’m not sure I would call this summary ‘shockingly good,’ Dan. It is pretty good, but it completely misses the most important point in Victoria Bouloubasis’s presentation, which is that the Iowa poultry plant workers’ organizing efforts mostly failed to persuade local officials to help them.” OK, I guess I could have said “shockingly coherent” rather than “shockingly good.”
  • From Tom Johnson: “Any idea what it means to ’empower stereotypes’? Some species sure would help.” Johnson is referring to a section of the summary that says, “The story aimed to empower and defy stereotypes surrounding immigrant communities, contrasting with typical narratives of victimization.” I would agree that ChatGPT is no A.J. Liebling, but overall I thought we got the drift.
  • From Rebecca Rainey, writing on Threads: “Worth noting: The summaries are incredibly boring. I would much rather read your reporting and light analysis, which would tell me what matters most in the grand scheme of things.” My response is that such summaries would be more for internal newsroom use than for public consumption. The next step is to take such a summary and see if ChatGPT can transform it into a news story. I’ll be looking for a suitable event sometime in the near future.

Meanwhile, OpenAI, the company that rolled out ChatGPT a year ago, is in utter turmoil. Co-founder Sam Altman was fired over the weekend and is now moving to Microsoft. The speculation is that the OpenAI board wanted to proceed more slowly and was concerned that Altman was too dismissive of AI’s potential dangers. Presumably Microsoft will let him pick up the pace, so overall this is not good news.


Could ChatGPT summarize an hour-long panel discussion? Here’s what I learned.

Be sure to see this follow-up.

Strictly as an experiment, I produced an AI-generated summary of a panel discussion that took place in September at the Radically Rural conference in Keene, New Hampshire. Titled “How’d They Do That? Groundbreaking Journalism,” the panel was moderated by Jack Rooney, managing editor for audience development at The Keene Sentinel, which co-sponsored the event along with the Hannah Grimes Center for Entrepreneurship.

The hour-long panel featured Victoria Bouloubasis, a filmmaker who showed part of her Emmy-nominated documentary, “Rising Up in the Heartland: Latino Workers Fight for Pandemic Relief,” and discussed it; Samantha Hogan, an investigative reporter with The Maine Monitor, who talked about her reporting on the state’s public defense system for the indigent; and Adam Ganucheau, editor-in-chief of Mississippi Today, who described his news organization’s Pulitzer Prize-winning reporting into the Backchannel scandal, which encompassed figures ranging from former Gov. Phil Bryant to former NFL quarterback Brett Favre.

A note on methodology: I took some real chances with this. I captured the audio using Otter.ai, a transcription service that uses artificial intelligence. Then I fed each of the three parts of the panel discussion into ChatGPT and asked it to produce 400-word summaries. I left out the audience Q&A that followed the panel. Let me stipulate for the record that I did not watch the video except for a few brief clips, and I skimmed quickly through the Otter transcript, which was 7,000 words long. I don’t recommend this as sound journalistic practice, but, as I said, I was experimenting. What I did not try to do was ask ChatGPT to generate a news story; I’ll try that another time.

What appears below, produced entirely by ChatGPT, looks shockingly good. I just hope that it’s an accurate and useful summary.

***

Victoria Bouloubasis, an Emmy-nominated journalist and filmmaker, based in Durham, North Carolina, shared insights into her work and a specific project during a panel discussion. Here’s a summary of the key points:

  • Journalist’s Background: Victoria Bouloubasis is an experienced journalist with a focus on dispelling myths about the global South. Her career, primarily in community journalism, spans 15 years, emphasizing labor rights, healthcare, and language access in North Carolina’s agricultural and food industries. Growing up in the South, she covers labor issues, especially concerning first-generation Americans and their families in Latin America.
  • Coverage of Rural Communities: Bouloubasis delves into the complexities of covering rural communities, particularly in North Carolina, known for its prominence in pork and tobacco production. She highlights the evolving demographics, with a shift from primarily Mexican to more Central American and Indigenous communities. Language diversity adds complexity to her reporting, as many interviewed speak languages other than Spanish.
  • Project Overview: The discussed project, an Emmy-nominated piece for Univision, originally focused on long COVID’s impact on poultry plant workers. However, Bouloubasis shifted the narrative after discovering a grassroots organizing story among immigrant workers in Iowa. The story aimed to empower and defy stereotypes surrounding immigrant communities, contrasting with typical narratives of victimization.
  • Challenges Faced: The challenges in the project included last-minute changes to the story focus and navigating language barriers during interviews. Bouloubasis emphasizes the importance of terminology, noting the shifting dynamics of minority-majority populations in small towns. The story showcased the struggles of immigrant workers, demanding pandemic relief funds for essential workers who didn’t receive stimulus checks due to eligibility criteria.
  • Community Perspectives: Bouloubasis highlights the discomfort and confusion of local officials who couldn’t understand Spanish, providing a unique perspective on the challenges faced by non-English speakers. The mayor’s lack of understanding of the struggles faced by undocumented immigrants, particularly in receiving stimulus checks, became a focal point in the story.
  • Positive Outcomes: Despite the challenges, Bouloubasis expresses pride in the project’s recognition at the Emmys, emphasizing its hopeful and empowering narrative. The immigrant community in Iowa successfully distributed federal pandemic relief aid to meatpacking and farm workers, showcasing the impact of grassroots organizing.
  • Inspiration from the Community: Bouloubasis commends the courage of immigrant workers who put themselves at risk by publicly acknowledging their undocumented status. The project seeks to elevate their voices and bring attention to social determinants of health, emphasizing the importance of dialogue and understanding in journalism.

This summary captures Victoria Bouloubasis’s insights into her journalism career, the challenges of covering immigrant communities, and the impact of her Emmy-nominated project, which sheds light on the struggles and resilience of rural immigrant workers.

***

Samantha Hogan, investigative reporter with The Maine Monitor, shared insights into her work, focusing on Maine’s indigent public defense system. Here’s a summary of the key points:

  • Reporter’s Background: Samantha Hogan, an investigative reporter, joined The Maine Monitor in 2019, specializing in court system reporting. Her work, supported by Report for America, delves into Maine’s troubled indigent public defense system.
  • Maine’s Demographics: Maine, with a predominantly white population, faces economic disparities rather than racial issues. The community includes both lifelong residents and seasonal “snowbirds.” Hogan decided to investigate how individuals unable to afford lawyers were represented in the state.
  • Indigent Public Defense System Issues: Hogan discovered issues within Maine’s indigent public defense system, where private defense lawyers were contracted without proper oversight. Unlike most states, Maine lacked a statewide public defender system, relying on contracts with private lawyers. Some lawyers had criminal convictions and professional misconduct violations.
  • Investigative Approach: A whistleblower alerted Hogan to problems in the indigent defense system. She initiated a data-driven investigation, utilizing public records requests and analyzing billing data. Hogan revealed instances of lawyers billing implausible hours and identified disciplinary issues among contracted lawyers.
  • Scandalous Cases and Red Flags: Hogan uncovered cases where lawyers with serious criminal convictions or professional misconduct allegations were contracted for public defense. The investigation raised red flags, such as lawyers billing excessive hours and serious ethical violations within the system.
  • Impact and Legal Consequences: The investigative report led to significant consequences, including a government watchdog verifying Hogan’s findings. The Maine Commission on Indigent Legal Services faced a civil lawsuit, acknowledging negligent misrepresentation of billing by an attorney. The ACLU filed a lawsuit against the state, citing an ineffective public defense system.
  • Persistence and Impactful Journalism: Hogan’s relentless pursuit of public records, court filings, and data analysis turned anecdotal evidence into hard numbers. The investigative reporting became a pivotal factor in legal actions against the flawed public defense system. The ACLU lawsuit, citing Hogan’s work, puts the spotlight on the need for a functional public defense system in Maine. (1)
  • Ongoing Legal Battles: Hogan highlighted ongoing legal battles, including the ACLU lawsuit, indicating that the impact of investigative journalism extends beyond immediate results. The story, a four-year project, continues to evolve, emphasizing the importance of persistent reporting and its contribution to systemic change.

Samantha Hogan’s work exemplifies the power of investigative journalism in exposing systemic flaws, holding institutions accountable, and driving legal actions to address critical issues within the indigent public defense system in Maine.

***

Adam Ganucheau, editor-in-chief of Mississippi Today, shared insights into the impactful investigative journalism that led to winning a Pulitzer Prize for the Backchannel investigation. Here’s a summary of the key points:

  • Introduction: Adam expressed his gratitude to be on stage with fellow journalists and introduced Adam Ganucheau, editor-in-chief of Mississippi Today. Ganucheau oversees Mississippi’s largest newsroom, focusing on stories relevant to everyday Mississippians.
  • Mississippi’s Communities: Ganucheau highlighted the diversity of Mississippi’s communities, dispelling monolithic views. He emphasized the shared dreams and goals of Mississippians across different backgrounds, beliefs, and economic statuses. The challenge lies in providing equal access and opportunities for these dreams, especially in a state facing numerous problems.
  • Mississippi Today’s Role: Mississippi Today, a nonprofit newsroom launched in 2016, positions itself between everyday Mississippians and decision-makers. With a team of about 15, mostly Mississippi natives deeply connected to their community, the newsroom aims to bridge the gap by listening to people’s struggles, reporting on their concerns, and holding decision-makers accountable.
  • The Backchannel Investigation: The focus shifted to the Backchannel investigation, which earned a Pulitzer Prize in 2022. The investigation began with reporter Anna Wolfe’s exploration of poverty in Mississippi. She discovered that federal funds, intended for the state’s poorest residents, were being diverted to nonprofit programs with little accountability. (2)
  • Welfare Spending Anomalies: Wolfe delved into the anomalies in welfare spending, revealing that Mississippi approved less than 1% of applications for cash assistance. The investigation exposed a shift in funds from direct cash assistance to nonprofit initiatives with questionable accounting. Wolfe’s collaboration with the state auditor ultimately led to six arrests, including the welfare agency director.
  • Role of Former Governor: The investigation extended to the role of the former governor in redirecting federal funds and changing the philosophy behind poverty assistance programs. Wolfe spent years probing the governor’s involvement, culminating in a seven-part series titled “The Backchannel.”
  • Impactful Visuals and Reporting: Mississippi Today used charts and visuals to illustrate the impact of welfare spending changes over time. They scrutinized the former governor’s philosophy shift and its effects on assistance programs. The investigation revealed instances like wealthy individuals, including NFL quarterback Brett Favre, receiving welfare funds. (3)
  • Series Conclusion and Everyday Stories: The series aimed to provide accountability to state leaders while keeping the stories of everyday Mississippians at the forefront. Ganucheau highlighted the series’ conclusion in December 2022, where they circled back to the people who inspired the investigation. Everyday Mississippians, like Daniel Thomas, shared their experiences, expressing gratitude for the reporting but emphasizing the lack of tangible help.
  • Journalistic Mission: Ganucheau underscored that Mississippi Today’s mission is not driven by awards or notoriety but by a commitment to helping everyday people better understand their situations and hold decision-makers accountable.

The Backchannel investigation exemplifies the impactful role of investigative journalism in bringing systemic issues to light, holding those in power accountable, and giving a voice to those affected by government decisions.

Notes

  1. Did someone say “lawsuit”? When I saw that word in the ChatGPT summary, I decided I’d better check it out. The verdict? It’s true.

  2. Although I had already provided Otter, and thus ChatGPT, with the correct names of the panelists, ChatGPT was on its own with Anna Wolfe — and it spelled her name correctly.

  3. ChatGPT got Brett Favre’s name right, too. I did not, and I had to scroll back to the introduction to fix it.


Using AI to tell a story about AI

Today at What Works, we have a story by one of our Northeastern graduate students, Ian Dartley, about Inside Arlington, a local news project powered by artificial intelligence. It’s no substitute for the human touch, and in fact the town already has a very good nonprofit news organization, YourArlington. But it’s an interesting experiment, and Ian does a great job of explaining it.

We also decided to have a little fun. The headline and the bullet points used to summarize Ian’s story were written by ChatGPT. So was the social media post we used to promote the story. Here’s how it looks on Threads:

How about ChatGPT finding that dog emoji? Good boy! I thought it was interesting that ChatGPT wrote fairly dull headlines and bullet points but suddenly developed a sense of fun when it came time to write a social media post.


Gannett’s failed attempt to cover school sports with AI raises eyebrows and questions

Photo (cc) 2014 by MHS Wildcat

Six years ago, The Washington Post announced that it would begin producing stories about high school football games using artificial intelligence. The expanded use of Heliograf, the Post’s “in-house automated storytelling technology,” would allow the news organization “to cover all Washington, D.C.-area high school football games every week,” according to a press release. The press release linked to an example of such coverage — a mundane article that begins:

The Yorktown Patriots triumphed over the visiting Wilson Tigers in a close game on Thursday, 20-14.

The game began with a scoreless first quarter.

In the second quarter, The Patriots’ Paul Dalzell was the first to put points on the board with a two-yard touchdown reception off a pass from quarterback William Porter.

Yet now, with AI tools having improved considerably, Gannett is running into trouble for doing exactly the same thing. Writing for Axios Columbus, Tyler Buchanan reports that The Columbus Dispatch had suspended AI-generated local sports coverage after the tool, LedeAI, came in for criticism and mockery. As Buchanan observes, one such article “was blasted on social media for its robotic style, lack of player names and use of awkward phrases like ‘close encounter of the athletic kind.’”


Has AI gone backwards since 2017? Obviously not. So what went wrong? It’s hard to say, but it could be that the generative AI tools that started becoming available late last year, with ChatGPT in the forefront, are more finicky than the blunt instrument developed by the Post some years back. In theory, generative AI can write a more natural-sounding story than the robotic prose produced by Heliograf and its ilk. In practice, if an AI tool like LedeAI is trained on a corpus of material loaded with clichés, then the output is going to be less than stellar.

Clare Duffy of CNN found that Gannett’s use of AI was not limited to Columbus. Other outlets that ran LedeAI-generated sports stories included the Courier Journal of Louisville, Kentucky; AZ Central; Florida Today; and the Journal Sentinel of Milwaukee, Wisconsin. Duffy reported that one story, before it was revised, included this Grantland Rice-quality gem: “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”
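For readers curious how raw tokens like [[WINNING_TEAM_MASCOT]] end up in a published story, here is a minimal sketch of the likely failure mode: a fill-in-the-blanks template rendered with a silent fallback, so that missing fields in the data feed survive verbatim instead of raising an error. The template text is taken from the Duffy example above; the renderer, the field names, and the mascot values are my own hypothetical illustration, not LedeAI’s actual system.

```python
import re

# Template text as it appeared in the published story, per CNN's reporting.
TEMPLATE = ("The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the "
            "Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys "
            "soccer game on Saturday.")

def render(template: str, data: dict) -> str:
    """Replace each [[KEY]] with data[KEY] if present.

    If a key is missing, the raw token is left in place rather than
    raising an error -- the silent-failure behavior that lets an
    unfinished story reach publication.
    """
    return re.sub(r"\[\[(\w+)\]\]",
                  lambda m: data.get(m.group(1), m.group(0)),
                  template)

# With an empty data feed, the placeholders survive verbatim:
print(render(TEMPLATE, {}))

# With mascot fields supplied (hypothetical names), the story reads normally:
print(render(TEMPLATE, {"WINNING_TEAM_MASCOT": "Eagles",
                        "LOSING_TEAM_MASCOT": "Wildcats"}))
```

A stricter renderer would raise on any unmatched token, or at minimum flag the story for human review before publication; the point is that the template engine made the wrong default cheap and the right one invisible.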

There’s another dynamic that needs to be considered as well. The Washington Post, a regional newspaper under the Graham family, repositioned itself as a national digital news organization after Amazon founder Jeff Bezos bought it in 2013. Regional coverage is secondary to its mission, and if it weren’t covering high school football games with AI, then it wouldn’t be covering them at all.

By contrast, you’d think that high school sports would be central to the mission at Gannett’s local and regional dailies. Turning such coverage over to AI and then not bothering to check what they were publishing is exactly the sort of move you’d expect from the bottom-line-obsessed chain, though it obviously falls short of its obligation to the communities it serves.

Poynter media columnist Tom Jones, a former sportswriter, raises another issue worth pondering — the elimination of an important training ground for aspiring sports journalists:

There is still a contentious debate about how publishers should use AI. Obviously, journalists will be (and should be) upset if AI is being used to replace human beings to cover events. As someone who started his career covering high school football, I can tell you that invaluable lessons learned under the Friday night lights laid the foundation for covering events such as the Olympics and Stanley Cup finals and college football national championships in the years after that.

At a moment when AI is the hottest of topics in journalistic circles, Gannett’s botched experiment demonstrated that there is no substitute for actual reporters.

By the way, I asked ChatGPT to write a six- to eight-word headline for this post. The result: “AI-Generated Sports Coverage Faces Scrutiny: What Went Wrong?” Not bad, but lacking the specificity I was looking for.


A new report finds that content farms are loading up on AI. Will local news be next?

Meet your new reporting staff. Photo (cc) 2023 by Dan Kennedy.

A recent report by NewsGuard, a project that evaluates news organizations for reliability and transparency, found that clickbait generated by artificial intelligence is on the rise. McKenzie Sadeghi and Lorenzo Arvanitis write:

NewsGuard has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software. A new generation of content farms is on the way.

The report didn’t specifically identify any local news websites that are using AI to write low-quality stories aimed at getting clicks and programmatic advertising. Perhaps non-local stories about health, entertainment and tech, to name three of the topics for which content farms are using AI, more readily fly under the radar. If you’re going to use AI to produce articles about the local tax rate or the women’s track team, you’re going to get caught pretty quickly when the results prove to be wrong. Still, the use of AI to produce some forms of local news, such as routine articles about real-estate transactions, is not new.

According to the NewsGuard report, there doesn’t seem to be a concerted effort yet to use AI in order to produce deliberately false stories, although there have been a few examples, including a celebrity death site that claimed President Biden had “passed away peacefully in his sleep.”

Call this Pink Slime 3.0. Version 1.0 was low-tech compared to what’s available today. Back in 2012, the public radio program “This American Life” found that a company called Journatic (pronounced “joor-NAT-ik,” though I always thought it should be “JOOR-nuh-tik”) was producing local content for newspapers using grossly underpaid, out-of-town reporters — including cheap Filipino workers who wrote articles under fake bylines.

Pink Slime 2.0, of more recent vintage, consists of hundreds of websites launched to exploit the decline of local news. Under such banners as “North Boston News” (!), these sites purport to offer community journalism but are actually a cover for political propaganda. Nearly all of them serve right-wing interests, though there were a few on the left as well.

Pink Slime 3.0 threatens to become more insidious as AI continues to improve. As Seth Smalley wrote for Poynter Online, this is “pink slime on steroids.”

Of course, AI could prove to be a boon for local news, as Sebastian Grace wrote last week for What Works, our Northeastern journalism project tracking developments in community journalism. By eliminating repetitive drudge work, AI can free journalists to produce high-value stories that really matter.

Still, bottom-feeders like CNET — not exactly a content farm, but not much better than that, either — have already been caught publishing error-laden stories with AI. You can only imagine what sort of advice these content farms are going to give people about dealing with their medical problems.

OpenAI, which likes to portray itself as a responsible player in discussions about the future of AI, would not respond to NewsGuard’s inquiries. Neither would Facebook, which is amplifying AI-generated content.

The only thing we can be sure of is that a new, more insidious version of pink slime is coming to a website near you — if it hasn’t already.