Thank you to Paul Letendre, the host of “SouthCoast Matters,” and state Rep. Carol Doherty, D-Taunton, for having me on for two recent episodes. We talked about media trust, the challenges facing local news, artificial intelligence and more. “SouthCoast Matters” is recorded at Taunton Community Access and Media and is carried on cable stations in Taunton and the surrounding area.
Gannett’s failed attempt to cover school sports with AI raises eyebrows and questions
Six years ago, The Washington Post announced that it would begin producing stories about high school football games using artificial intelligence. The expanded use of Heliograf, the Post’s “in-house automated storytelling technology,” would allow the news organization “to cover all Washington, D.C.-area high school football games every week,” according to a press release. The press release linked to an example of such coverage — a mundane article that begins:
The Yorktown Patriots triumphed over the visiting Wilson Tigers in a close game on Thursday, 20-14.
The game began with a scoreless first quarter.
In the second quarter, The Patriots’ Paul Dalzell was the first to put points on the board with a two-yard touchdown reception off a pass from quarterback William Porter.
Yet now, with AI tools having improved considerably, Gannett is running into trouble for doing exactly the same thing. Writing for Axios Columbus, Tyler Buchanan reports that The Columbus Dispatch had suspended AI-generated local sports coverage after the tool, LedeAI, came in for criticism and mockery. As Buchanan observes, one such article “was blasted on social media for its robotic style, lack of player names and use of awkward phrases like ‘close encounter of the athletic kind.’”
Has AI gone backwards since 2017? Obviously not. So what went wrong? It’s hard to say, but it could be that the generative AI tools that started becoming available late last year, with ChatGPT in the forefront, are more finicky than the blunt instrument developed by the Post some years back. In theory, generative AI can write a more natural-sounding story than the robotic prose produced by Heliograf and its ilk. In practice, if an AI tool like LedeAI is trained on a corpus of material loaded with clichés, then the output is going to be less than stellar.
Clare Duffy of CNN found that Gannett’s use of AI was not limited to Columbus. Other outlets that ran LedeAI-generated sports stories included the Courier Journal of Louisville, Kentucky; AZ Central; Florida Today; and the Journal Sentinel of Milwaukee, Wisconsin. Duffy reported that one story, before it was revised, included this Grantland Rice-quality gem: “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”
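LedeAI hasn’t published its code, but an unfilled double-bracketed slot is the signature of simple template substitution gone wrong. Here’s a minimal sketch — entirely illustrative, not the company’s actual implementation — of how a missing data field leaks a raw placeholder into a published story:

```python
# Illustrative sketch of template-driven game stories (not LedeAI's real code).
# If the data feed omits a field, a naive renderer leaks the raw placeholder --
# which is consistent with the "[[WINNING_TEAM_MASCOT]]" artifact above.

TEMPLATE = ("The {winning_team} {winning_mascot} defeated the "
            "{losing_team} {losing_mascot} {score} in an Ohio boys "
            "soccer game on {day}.")

class LeakyFields(dict):
    """Stand in for missing fields instead of raising KeyError."""
    def __missing__(self, key):
        return "[[" + key.upper() + "]]"

def render_story(game: dict) -> str:
    return TEMPLATE.format_map(LeakyFields(game))

game = {
    "winning_team": "Worthington Christian",
    "losing_team": "Westerville North",
    "score": "2-1",
    "day": "Saturday",
    # mascot fields never arrived from the data provider
}

print(render_story(game))
# The Worthington Christian [[WINNING_MASCOT]] defeated the Westerville North
# [[LOSING_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.
```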
There’s another dynamic that needs to be considered as well. The Washington Post, a regional newspaper under the Graham family, repositioned itself as a national digital news organization after Amazon founder Jeff Bezos bought it in 2013. Regional coverage is secondary to its mission, and if it weren’t covering high school football games with AI, then it wouldn’t be covering them at all.
By contrast, you’d think that high school sports would be central to the mission at Gannett’s local and regional dailies. Turning such coverage over to AI and then not bothering to check what it was publishing is exactly the sort of move you’d expect from the bottom-line-obsessed chain, though it obviously falls short of Gannett’s obligation to the communities it serves.
Poynter media columnist Tom Jones, a former sportswriter, raises another issue worth pondering — the elimination of an important training ground for aspiring sports journalists:
There is still a contentious debate about how publishers should use AI. Obviously, journalists will be (and should be) upset if AI is being used to replace human beings to cover events. As someone who started his career covering high school football, I can tell you that invaluable lessons learned under the Friday night lights laid the foundation for covering events such as the Olympics and Stanley Cup finals and college football national championships in the years after that.
At a moment when AI is the hottest of topics in journalistic circles, Gannett’s botched experiment demonstrated that there is no substitute for actual reporters.
By the way, I asked ChatGPT to write a six- to eight-word headline for this post. The result: “AI-Generated Sports Coverage Faces Scrutiny: What Went Wrong?” Not bad, but lacking the specificity I was looking for.
Three recent developments show how AI is already changing journalism
I don’t want to come off as a total Luddite when it comes to artificial intelligence and journalism. Well, OK, maybe I do. Because even though I have no problem with using AI for certain mindless, repetitive tasks, such as transcribing interviews and finding patterns in public records, I think we need to be cautious about using such tools to actually produce journalism — whether it be reports about real estate transactions (thus missing the opportunity to dig more deeply) or stories about high school sports. With that in mind, I want to call your attention to three troubling developments.
For those who thought the notion of robot reporters was still quite a ways off, the first development is the most alarming. According to a recent article at Nieman Lab by Sophie Culpepper, an independent publisher has been experimenting with just that in his spare time, and the results are, well, not bad.
Mark Talkington, who runs a hyperlocal start-up called The Palm Springs Post in California, has been feeding government meetings that are available on YouTube into an AI system designed by a friend of his. Importantly, it’s not an off-the-shelf product like ChatGPT or Google Bard. Rather, it’s been trained on reliable news and information from his coverage area, which reduces if not eliminates the likelihood of “hallucinations,” the term for false but plausible-sounding output produced by AI.
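Culpepper’s story doesn’t spell out the architecture, but “trained on reliable local coverage” sounds like retrieval-grounded generation: pull relevant passages from a trusted archive and instruct the model to write from those sources alone. A minimal sketch of that idea — an assumed design, not Talkington’s actual system, with llm_complete() standing in for whatever model is really being called:

```python
# Sketch of retrieval-grounded meeting coverage -- an assumed design,
# not Talkington's actual system. llm_complete() is hypothetical.

def retrieve_context(query: str, archive: list[str], k: int = 3) -> list[str]:
    # Crude relevance score: words shared with the query. A real system
    # would use embeddings; word overlap keeps this sketch self-contained.
    q = set(query.lower().split())
    ranked = sorted(archive,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def draft_meeting_story(transcript: str, archive: list[str]) -> str:
    context = "\n\n".join(retrieve_context(transcript, archive))
    prompt = ("Using ONLY the transcript and background below, write a short "
              "news story. Do not state anything not supported by them.\n\n"
              f"Background:\n{context}\n\nTranscript:\n{transcript}")
    return llm_complete(prompt)  # hypothetical model call
```

Constraining the model to supplied sources is what cuts down on hallucinations: it has nothing to riff on except vetted local material.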
The example Culpepper quotes reads like what journalism professors disparagingly call an “agenda story” — that is, it begins with something like “Members of the board of sewer commissioners began their meeting by saluting the flag” rather than with a lead grounded in the most interesting thing that happened. Nor has Talkington actually published any AI-generated stories yet. He told Culpepper that he’s concerned about AI missing out on body language and, of course, on the ability to snag an elected official in the hallway during a break in the meeting.
But he said he could see using it to take notes and, eventually, to cover meetings that his thinly stretched staff can’t get to. And that’s how it begins: with a sympathetic hyperlocal publisher using AI to extend his reach, only to see the technology adopted by cost-cutting newspaper chains looking to dump reporters.
My second example might be called “speaking of which.” Because Gannett, whose 200 or so daily newspapers make it the largest corporate chain, announced recently that it, too, is experimenting with generative AI. Helen Coster of Reuters reports that, at first, AI will be used to generate content like bullet points that summarize the most important facts in a story, and that humans will check its work. That feature will be rolled out in the chain’s flagship newspaper, USA Today, later this year.
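Reuters doesn’t describe the mechanics, but “AI drafts, humans check” maps onto a simple review gate: nothing generated reaches readers until an editor signs off. A sketch under that assumption — summarize_to_bullets() is a hypothetical model call, not anything Gannett has disclosed:

```python
# Sketch of an AI-drafts, editor-approves workflow (assumed; Gannett
# has not published its implementation).
from dataclasses import dataclass

@dataclass
class BulletDraft:
    story_id: str
    bullets: list[str]
    approved: bool = False  # flipped only by a human editor

review_queue: list[BulletDraft] = []

def queue_summary(story_id: str, story_text: str) -> None:
    bullets = summarize_to_bullets(story_text)  # hypothetical model call
    review_queue.append(BulletDraft(story_id, bullets))

def publish_approved() -> list[tuple[str, list[str]]]:
    # Only human-approved drafts ever leave the queue.
    return [(d.story_id, d.bullets) for d in review_queue if d.approved]
```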
Gannett is hardly the only news organization that’s playing with AI; The New York Times, The Washington Post, The Boston Globe and others are all looking into ways to make use of it. But Gannett is notoriously tight-fisted and, as Coster notes, has slashed and burned its way to tenuous profitability. “Gannett’s journalists are fighting to ensure that they aren’t replaced by the technology,” she wrote. “Hundreds walked off the job over staff cuts and stagnant wages on June 5. Generative AI is a sticking point in some negotiations with the company, the union said.”
The third warning sign comes from Sebastian Grace, who passed along a truly disturbing item: the German tabloid Bild is laying off about 200 journalists while ramping up its use of AI. (Seb recently wrote a fine piece on journalism and AI for our website What Works: The Future of Local News.) Although those two developments at Bild are said to be unrelated, Jon Henley of The Guardian writes that Mathias Döpfner, the CEO of Bild’s corporate owner, Axel Springer, has said that ChatGPT and its ilk could “make independent journalism better than it ever was — or replace it.”
Axel Springer, by the way, also owns Politico, an important U.S. outlet for news about politics and public policy.
Do I think AI will soon replace reporters who do the hard work of going out and getting stories? No — at least not right away. But we’ve been losing journalists for 25 years now, and it seems certain that AI will be used, misused and abused in ways that accelerate that trend.
A new report finds that content farms are loading up on AI. Will local news be next?
A recent report by NewsGuard, a project that evaluates news organizations for reliability and transparency, found that clickbait generated by artificial intelligence is on the rise. McKenzie Sadeghi and Lorenzo Arvanitis write:
NewsGuard has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software. A new generation of content farms is on the way.
The report didn’t specifically identify any local news websites that are using AI to write low-quality stories aimed at getting clicks and programmatic advertising. Perhaps non-local stories about health, entertainment and tech, to name three of the topics for which content farms are using AI, more readily fly under the radar. If you’re going to use AI to produce articles about the local tax rate or the women’s track team, you’re going to get caught pretty quickly when the results prove to be wrong. Still, the use of AI to produce some forms of local news, such as routine articles about real-estate transactions, is not new.
According to the NewsGuard report, there doesn’t seem to be a concerted effort yet to use AI in order to produce deliberately false stories, although there have been a few examples, including a celebrity death site that claimed President Biden had “passed away peacefully in his sleep.”
Call this Pink Slime 3.0. Version 1.0 was low-tech compared to what’s available today. Back in 2012, the public radio program “This American Life” found that a company called Journatic (pronounced “joor-NAT-ik,” though I always thought it should be “JOOR-nuh-tik”) was producing local content for newspapers using grossly underpaid, out-of-town reporters — including low-paid workers in the Philippines who wrote articles under fake bylines.
Pink Slime 2.0, of more recent vintage, consists of hundreds of websites launched to exploit the decline of local news. Under such banners as “North Boston News” (!), these sites purport to offer community journalism but are actually a cover for political propaganda. Nearly all of them serve right-wing interests, though there are a few on the left as well.
Pink Slime 3.0 threatens to become more insidious as AI continues to improve. As Seth Smalley wrote for Poynter Online, this is “pink slime on steroids.”
Of course, AI could prove to be a boon for local news, as Sebastian Grace wrote last week for What Works, our Northeastern journalism project tracking developments in community journalism. By eliminating repetitive drudge work, AI can free journalists to produce high-value stories that really matter.
Still, bottom-feeders like CNET — not exactly a content farm, but not much better than that, either — have already been caught publishing error-laden stories with AI. You can only imagine what sort of advice these content farms are going to give people about dealing with their medical problems.
OpenAI, which likes to portray itself as a responsible player in discussions about the future of AI, would not respond to NewsGuard’s inquiries. Neither would Facebook, which is amplifying AI-generated content.
The only thing we can be sure of is that a new, more insidious version of pink slime is coming to a website near you — if it hasn’t already.
Artificial intelligence writes an op-ed piece on gun control
I’ve been having some fun with AI-generated content recently, posting a description of the MBTA in the voice of Herman Melville and a profile of me that is thoroughly bollixed up with a marketing guru who’s also named Dan Kennedy. Today, though, I offer something a little more serious and practical. I directed ChatGPT to write a 600-word opinion article in favor of gun control. The result is below. It needs a little editing, especially in the lead. I didn’t check the numbers, and ChatGPT is known for producing false facts. The pro-gun control argument is not especially sophisticated. But you can imagine a state legislator doing something like this to write an op-ed for a local news outlet. You may think that’s fine. I think it’s chilling.
Gun control has been a controversial issue in the United States for many years, with strong opinions on both sides of the debate. As an AI language model, I do not have a political affiliation or personal beliefs. However, I can analyze data and research and provide an evidence-based opinion on the topic. Therefore, in this article, I will make a case in favor of gun control.
Firstly, let’s look at the numbers. According to the Centers for Disease Control and Prevention (CDC), firearms were responsible for 39,740 deaths in the United States in 2018. That is an average of 109 deaths per day, or roughly four deaths every hour. It is undeniable that gun violence is a significant public health issue in this country.
Many people argue that the Second Amendment guarantees their right to bear arms. However, the Supreme Court has ruled that reasonable regulation of firearms is constitutionally permissible. Therefore, it is within the government’s power to implement laws and regulations to protect the public from gun violence.
One such regulation could be universal background checks. Current federal law only requires background checks for gun purchases from licensed dealers. However, a loophole exists that allows private sellers, who are not required to be licensed, to sell firearms without conducting a background check. This loophole makes it easy for criminals and individuals with a history of violence to obtain guns.
Universal background checks would close this loophole and ensure that all gun purchases, regardless of the seller, are subject to a background check. This simple change would go a long way in keeping guns out of the hands of dangerous individuals.
Another common-sense regulation would be a ban on assault weapons. These weapons are designed for military use and have no practical purpose in civilian life. They have been used in several high-profile mass shootings, including the tragedies in Las Vegas, Parkland, and Sandy Hook.
Opponents of a ban on assault weapons argue that it would violate their Second Amendment rights. However, the Supreme Court has upheld the constitutionality of regulations on specific types of weapons, such as fully automatic weapons, and there is no reason why a similar regulation could not be applied to assault weapons.
Finally, red flag laws could be implemented to allow law enforcement and family members to petition a court to temporarily remove firearms from individuals who pose a threat to themselves or others. These laws have already been implemented in several states and have been shown to be effective in preventing gun violence.
In conclusion, gun violence is a serious public health issue in the United States, and reasonable regulations on firearms can help reduce the number of deaths and injuries. Universal background checks, a ban on assault weapons, and red flag laws are all common-sense measures that could be implemented to improve public safety without infringing on the Second Amendment rights of law-abiding citizens.
It is essential to remember that gun control is not about taking away people’s guns or infringing on their rights. It is about finding ways to prevent gun violence while still allowing responsible gun ownership. We can all agree that we want to live in a safe society, and gun control measures can help make that a reality.
Ultimately, the key to reducing gun violence is a collaborative effort between government, law enforcement, and the public. By working together, we can find solutions that respect our rights while keeping us safe.
The Bard of Google writes a bio of me. Hilarity ensues.
My friend Bill Mitchell sent along a bio of me that he asked Google’s Bard to write — proving, once again, that artificial intelligence may be artificial, but it sure isn’t intelligent. Throughout this short bio, Bard mixes me up with marketing guy Dan Kennedy. Here’s the whole thing, with my corrections noted in brackets.
Dan Kennedy is a media critic, marketing consultant, and author. He is a former media critic for the Boston Phoenix and is now a professor of journalism at Northeastern University. He is also the founder of Magnetic Marketing and the author of several books on marketing and business, including “The No B.S. Marketing Letter” and “The Magnetic Marketing System.”
Kennedy is a frequent media commentator and has appeared on CNN, CNBC, and Fox News. He is also a regular contributor to Forbes and The Huffington Post. [Note: Years ago I appeared a couple of times on CNN and once on Fox News. I don’t believe I ever wrote anything for Forbes, but I could be mistaken.]
Kennedy is a controversial figure in the media industry. He has been criticized for his outspoken views on the state of journalism and his willingness to criticize his colleagues. However, he is also respected for his knowledge of the industry and his ability to help businesses grow.
Kennedy is a graduate of the University of Massachusetts Amherst and the Boston University School of Public Communication. [A bachelor’s degree in journalism from Northeastern University and a master’s in American history from BU, but not from SPC.] He lives in Boston, Massachusetts. [Not since 1980.]
Here are some of his notable works:
* The No B.S. Marketing Letter (2009)
* The Magnetic Marketing System (2012)
* The Renegade Challenge (2016)
* The No B.S. Inner Circle (2017) [LOL.]
Way back when, Boston Globe columnist Alex Beam wrote about me, the other DK and yet another DK. At least he was trying to be funny. And I should add that I’ll be smiling all week about being called “a controversial figure in the media industry.”
How local news helped Callie Crossley with her research for ‘Eyes on the Prize’
Callie Crossley of GBH News is a multitalented broadcast journalist and producer. She hosts “Under the Radar with Callie Crossley” and shares radio essays each Monday on GBH’s “Morning Edition.” She also hosts “Basic Black,” which covers news events that have an impact on communities of color. Crossley’s work on “Eyes on the Prize: America’s Civil Rights Years” won numerous awards.
In a wide-ranging conversation with Ellen and Dan, Crossley shares her views on the thinning out of local news outlets and offers sage advice for next-generation journalists. Callie and Dan were regulars on “Beat the Press,” the award-winning GBH-TV show that featured media commentary, which ended its 22-year run in 2021. In 2019, both of them received the Yankee Quill Award from the New England Society of Newspaper Editors.
In Quick Takes on developments in local news, Dan laments the rise of robot journalism, and Ellen reports on an effort by publisher Lee Enterprises to fight off a takeover bid by the hedge fund Alden Global Capital.
You can listen to our conversation here and subscribe through your favorite podcast app.
The latest bad idea for chain newspapers: Robot reporting on real estate
At least two New England newspaper publishers have begun using artificial intelligence rather than carbon-based life forms to report on real-estate transactions.
The Republican of Springfield, online as MassLive, and Hearst Connecticut Media, comprising the New Haven Register and seven other daily newspapers, are running stories put together by an outfit called United Robots. MassLive’s stories are behind a hard paywall, but here’s a taste from the Register of what such articles look like.
United Robots, a Swedish company, touts itself as offering “news automation at massive scale using AI and data science.”
Last year I wrote about artificial intelligence and journalism for GBH News. I’m skeptical, but it depends on how you use it. In some ways AI has made our lives easier by, for instance, enhancing online search and powering the inexpensive transcription of audio interviews. But using it to write stories? Not good. As I wrote last year:
Such a system has been in use at The Washington Post for several years to produce reports about high school football. Input a box score and out comes a story that looks more or less like an actual person wrote it. Some news organizations are doing the same with financial data. It sounds innocuous enough given that much of this work would probably go undone if it couldn’t be automated. But let’s curb our enthusiasm.
Using AI to produce stories about real-estate transactions may seem fairly harmless. But let me give you an example of why it’s anything but.
In November, I accompanied Tom Breen, the managing editor of the New Haven Independent, as he knocked on the doors of houses that had been foreclosed on recently. The Independent is a digital nonprofit news site.
Breen has spent a considerable amount of time and effort sitting in housing court and poring through online real-estate records, and from that work patterns emerged. Like Boston and many other cities, New Haven has experienced an explosion in real-estate prices, and a lot of owners are flipping their properties to cash in. In too many cases there are victims — low-income renters whose new landlords, often absentee, jack up the rents. Breen takes the data he’s gathered and rides his bike into the neighborhoods, knocking on doors and talking with residents. It’s difficult, occasionally dangerous work. Once he was attacked by a pit bull.
We didn’t have much luck on our excursion. No one was home at either of the two houses we visited, so Breen left notes behind asking the residents to call him.
“If investors are swapping properties at $100,000, $200,000 above the appraised value and tens of thousands of dollars above what they bought it for two days prior,” Breen told me, “all that can do is drive up costs that are passed down to the renters — to the people actually living in the building.”
The result of Breen’s enterprise has been a series of stories like this one. The lead:
Tenants of a three-family “lemon” of a house on Liberty Street are wondering how two landlords managed to walk away with $180,000 by double-selling a property that they say remains a dump.
You’re not going to get that kind of reporting from artificial intelligence.
Now, of course, you might argue — and some have, as I noted in my GBH News piece — that AI saves journalists from drudge work, freeing them up to do exactly the kind of enterprise reporting that Breen does. But story ideas often arise from immersion in boring data and sitting through lengthy proceedings; outsource the data collection to a robot, and it’s likely that will be the end of it.
At the corporate chains that own so many of our newspapers, there’s little doubt that AI will be used as just another opportunity to cut. Hearst and Advance, the national chain that owns The Republican, are not the worst or greediest newspaper chains by any means. But both of them have engaged in more than their share of cost-cutting over the years.
And it’s spreading. United Robots’ U.S. clients include the McClatchy newspaper chain and The Atlanta Journal-Constitution, part of the Cox chain. No doubt the Big Two — Gannett and the groups owned by Alden Global Capital — won’t be far behind.
Can artificial intelligence help local news? Sure. And it can cause great harm as well.
I’ll admit that I was more than a little skeptical when the Knight Foundation announced last week that it would award $3 million in grants to help local news organizations use artificial intelligence. My first reaction was that dousing the cash with gasoline and tossing a match would be just as effective.
But then I started thinking about how AI has enhanced my own work as a journalist. For instance, just a few years ago I had two unappetizing choices after I recorded an interview: transcribing it myself or sending it out to an actual human being to do the work at considerable expense. Now I use an automated system, based on AI, that does a decent job at a fraction of the cost.
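The service I use is commercial, but the same capability now exists in open-source form. Here’s a minimal sketch using OpenAI’s freely available Whisper speech-to-text model (pip install openai-whisper); the file name is just a placeholder:

```python
# Minimal AI transcription sketch using the open-source Whisper model.
import whisper

model = whisper.load_model("base")           # small model; runs on a laptop
result = model.transcribe("interview.mp3")   # placeholder path to a recording
print(result["text"])                        # the full transcript as one string
```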
Or consider Google, whose search engine makes use of AI. At one time, I’d have to travel to Beacon Hill if I wanted to look up state and local campaign finance records — and then pore through them by hand, taking notes or making photocopies as long as the quarters held out. These days I can search for “Massachusetts campaign finance reports” and have what I need in a few seconds.
Given that local journalism is in crisis, what’s not to like about the idea of helping community news organizations develop the tools they need to automate more of what they do?
Well, a few things, in fact.
Foremost among the downsides is the use of AI to produce robot-written news stories. Such a system has been in use at The Washington Post for several years to produce reports about high school football. Input a box score and out comes a story that looks more or less like an actual person wrote it. Some news organizations are doing the same with financial data. It sounds innocuous enough given that much of this work would probably go undone if it couldn’t be automated. But let’s curb our enthusiasm.
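Heliograf’s code isn’t public, but the “box score in, story out” pattern is easy to imagine. A generic sketch — not the Post’s implementation:

```python
# Generic "box score in, story out" sketch (not Heliograf's actual code).

def game_story(home: str, away: str, home_pts: int, away_pts: int) -> str:
    # Ties ignored for brevity; high school football rarely ends even.
    winner, loser = (home, away) if home_pts > away_pts else (away, home)
    hi, lo = max(home_pts, away_pts), min(home_pts, away_pts)
    color = "a close game" if hi - lo <= 7 else "a comfortable win"
    return f"{winner} defeated {loser} {hi}-{lo} in {color} on Friday."

print(game_story("Yorktown", "Wilson", 20, 14))
# Yorktown defeated Wilson 20-14 in a close game on Friday.
```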
Patrick White, a journalism professor at the University of Quebec in Montreal, sounded this unrealistically hopeful note in a piece for The Conversation about a year ago: “Artificial intelligence is not there to replace journalists or eliminate jobs.” According to one estimate cited by White, AI would have only a minimal effect on newsroom employment and would “reorient editors and journalists towards value-added content: long-form journalism, feature interviews, analysis, data-driven journalism and investigative journalism.”
Uh, Professor White, let me introduce you to the two most bottom-line-obsessed newspaper publishers in the United States — Alden Global Capital and Gannett. If they could, they’d unleash the algorithms to cover everything up to and including city council meetings, mayoral speeches and development proposals. And if they could figure out how to program the robots to write human-interest stories and investigative reports, well, they’d do that too.
Another danger AI poses is that it can track scrolling and clicking patterns to personalize a news report. Over time, for instance, your Boston Globe would look different from mine. Remember the “Daily Me,” an early experiment in individualized news popularized by MIT Media Lab founder Nicholas Negroponte? That didn’t quite come to pass. But it’s becoming increasingly feasible, and it represents one more step away from a common culture and a common set of facts, potentially adding another layer to the polarization that’s tearing us apart.
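To make the mechanism concrete, here’s an illustrative sketch — no outlet’s real code — of how click tracking turns into a personalized front page:

```python
# Illustrative click-driven personalization (not any outlet's actual code).
from collections import Counter

def personalized_front_page(stories: list[dict], clicks: Counter) -> list[dict]:
    # Score each story by how often this reader has clicked its topics.
    return sorted(stories,
                  key=lambda s: sum(clicks[t] for t in s["topics"]),
                  reverse=True)

stories = [
    {"headline": "City council weighs tax hike", "topics": ["politics"]},
    {"headline": "High school team wins title", "topics": ["sports", "schools"]},
]

sports_fan = Counter({"sports": 12})
politics_junkie = Counter({"politics": 9})

# The same two stories, two different "front pages":
print([s["headline"] for s in personalized_front_page(stories, sports_fan)])
print([s["headline"] for s in personalized_front_page(stories, politics_junkie)])
```

Run at scale, that simple loop is the end of the shared front page — your Globe and mine quietly diverge.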
“Personalization of news … puts the public record at risk,” according to a report published in 2017 by Columbia’s Tow Center for Digital Journalism. “When everyone sees a different version of a story, there is no authoritative version to cite. The internet has also made it possible to remove content from the web, which may not be archived anywhere. There is no guarantee that what you see will be what everyone sees — or that it will be there in the future.”
Of course, AI has also made journalism better — and not just for transcribing interviews or Googling public records. As the Tow Center report also points out, AI makes it possible for investigative reporters to sift through thousands of records to find patterns, instances of wrongdoing or trends.
The Knight Foundation, in its press release announcing the grant, held out the promise that AI could reduce costs on the business side of news organizations — a crucial goal given how financially strapped most of them are. The $3 million will go to The Associated Press, Columbia University, the NYC Media Lab and the Partnership on AI. Under the terms of the grant, the four organizations will work together on projects such as training local journalists, developing revenue strategies and studying the ethical use of AI. It all sounds eminently worthy.
But there are always unintended consequences. The highly skilled people whom I used to pay to transcribe my interviews no longer have those jobs. High school students who might have gotten an opportunity to write up the exploits of their sports teams for a few bucks have been deprived of a chance at an early connection with news — an experience that might have turned them into paying customers or even journalists when they got older.
And local news, much of which is already produced at distant outposts, some of them overseas, is about to become that much more impersonal and removed from the communities it serves.