My Northeastern ethics students offer some ideas on practicing journalism in the AI era

Photo by Carlos López via Pixabay.

The Society of Professional Journalists’ Code of Ethics encompasses four broad principles:

    • Seek Truth and Report It
    • Minimize Harm
    • Act Independently
    • Be Accountable and Transparent

Each principle is accompanied by multiple bullet points, which in turn link to background information. But those are the starting points, and I think they provide a good rough guide for how to practice ethical journalism.

Whenever I teach one of our ethics classes, I ask my students to come up with a fifth principle as well as some explanatory material. This semester, I’m teaching our graduate ethics seminar. It’s a small class — five grad students and one undergrad. Last week I divided them into three teams of two and put them to work. Here’s what they came up with. (Longtime readers of Media Nation will recognize this exercise.) I’ve done a little editing, mainly for parallel construction.

Practice Digital Diligence

  • Use AI for behind-the-scenes tasks such as transcribing interviews, searching for sources and entering data.
  • Disclose the use of AI software when publishing AI-generated material.
  • Give credit by providing hyperlinks to other journalistic sources.
  • Gain verification status on social platforms for credibility purposes.
  • Do not engage with negative comments on social media posts.
  • Engage with subscribers who might use social media to ask questions about a story.
  • Apply AP style to social media posts.
  • Give credit to any artists whose work you might borrow. Respect copyright law.

Use Modern Resources Responsibly

  • Use social media and other digital tools, such as comment sections, to crowdsource information, connect with others and distribute news in a more accessible way.
  • Do not use these tools to engage in ragebait or to get tangled in messy and unproductive discourse online.
  • Acceptable uses of AI include gathering information, reformatting your reporting, transcribing interviews and similar non-public-facing tasks.
  • Use AI to guide your reporting rather than to replace it.

Be Compassionate

  • Treat sources and communities with empathy and care.
  • Avoid misleading sources or providing false hope — for instance, don’t promise someone who is suffering that you’ll be able to give them assistance.
  • Do not exploit a source’s lack of media training. Provide a detailed explanation of your reporting methods when warranted.
  • Avoid jargon, both in interacting with sources and in producing a story.
  • Be a human first. If that clashes with your role as a journalist, the journalism should come second.

***

In addition to their work on extending the Code of Ethics, I asked them on the first day of class to name one significant ethical issue that they think faces journalism. What follows is my attempt to summarize a longer conversation that we had in class.

► Stand up for our independence as journalists

► Explore and define the role of AI and truth in journalism

► Make sure we include a range of perspectives

► Push back against fake news, ragebait, etc.

► Avoid passive voice that evades responsibility

► Move beyond our preconceptions in pursuit of the truth

I hope you’ll agree that this is good, thought-provoking stuff. I can’t wait to see how the rest of the semester will go.

Follow my Bluesky newsfeed for additional news and commentary. And please join my Patreon for just $6 a month. You’ll receive a supporters-only newsletter every Thursday.

How Claude AI helped improve the look and legibility of Media Nation

Public domain illustration via Pixabay.

For quite a few years I used WordPress’ indent feature for blockquotes rather than the actual blockquote command. The reason was that blockquotes in the theme that I use (Twenty Sixteen) were ugly, with type larger than the regular text (the opposite of what you would see in a book or a printed article) and in italics.

But then I noticed that indents didn’t show up at all in posts that went out by email, leading to confusion among my subscribers — that is, my most engaged readers. I decided to find out if I could modify the blockquote feature. WordPress allows you to add custom CSS to your theme, but I know very little about how to use CSS. I could have asked in a WordPress forum, but instead I turned to AI.

Northeastern has given us all access to the enterprise version of Claude, Anthropic’s AI platform. It’s a mixed blessing, although I’ve found that it’s very good as a search engine — often better than Google, which is now also glopped up by AI. I simply make sure I ask Claude to add the underlying links to its answer so I don’t get taken in by hallucinations. But Claude is also known for being quite good at coding. What I needed was low-level, so I thought maybe it could help.

Indeed it could. I began by asking, “In the Twenty Sixteen WordPress theme, how can I change the CSS so that blockquotes do not appear in italics?” Claude provided me with several options; I chose the simplest one, which was a short bit of custom CSS that I could add to my theme:

blockquote {
     font-style: normal;
}
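
For classic themes like Twenty Sixteen, rules like this go into the Additional CSS panel under Appearance → Customize in the WordPress dashboard.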

It worked. A subsequent query enabled me to make the blockquote type smaller. Then, just last week, I noticed that any formatting in the blockquote was stripped out. For instance, a recent memo from Boston Globe Media CEO Linda Henry contained boldface and italicized text, which did not appear when I reproduced her message. The formatting code was there; it just wasn’t visible. Claude produced CSS rules that overrode the theme. You can see the results here, with bold and italic type just as Henry had it in her message.
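
I’m not reproducing Claude’s later suggestions verbatim, but here is a minimal sketch of the kind of overrides involved. The theme’s stylesheet renders emphasis inside blockquotes as plain type (sensible when the whole quote is italic, less so after my change), so the sketch re-declares those styles on the nested elements. The selectors and values are illustrative assumptions, not the exact rules:

blockquote {
    font-style: normal;   /* upright type instead of the theme's italics */
    font-size: inherit;   /* match the surrounding body text */
}

/* Restore inline emphasis that the theme flattens inside blockquotes. */
blockquote em,
blockquote i {
    font-style: italic;
}

blockquote strong,
blockquote b {
    font-weight: 700;
}

If the theme’s own selectors win on specificity, appending !important to these declarations is a blunt but reliable way to make the overrides stick.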

I make some light use of AI in my other work. When I need to transcribe an audio interview, I use Otter, which is powered by AI. I’ve experimented with using AI to compile summaries from transcripts and even (just for my own use) an actual news story. Very occasionally I’ve used AI to produce illustrations for this blog, which seems to draw more objections than other AI applications, probably because it’s right in people’s faces.

Just the other day, someone complained to me on social media that she was not going to visit a local news outlet I had mentioned because she had encountered an AI-produced illustration there. When I asked why, she replied that it was because AI relies on plagiarism. Oh, I get it. Sometime this year I’m hoping to receive $3,000 as my share of a class-action lawsuit against Anthropic because one of my books, “The Return of the Moguls,” was used to train Claude.

And let’s not overlook the massive amounts of energy that are required to power AI. On a recent New York Times podcast, Ezra Klein and his guests observed that AI is deeply unpopular with the public (sub. req.), even though they’re using it, because all they really know is that it’s going to take away jobs and is driving up electricity costs.

But AI isn’t going anywhere, and if we’re going to use it (and we are, even if we try to avoid it), we need to find ways to do so ethically and responsibly.

How Margaret Sullivan’s slip of the tongue became (briefly) an AI-generated ‘fact’

Paul Krugman and Margaret Sullivan. Photo via Paul Krugman’s newsletter.

Media critic Margaret Sullivan made an error recently. No big deal — we all do it. But her account of what happened next is worth thinking about.

First, the error. Sullivan writes in her newsletter, American Crisis, that she recently appeared on economist Paul Krugman’s podcast and said that Los Angeles Times owner Patrick Soon-Shiong was among the billionaires who joined Donald Trump at his second inauguration earlier this year, along with the likes of Mark Zuckerberg, Jeff Bezos and Elon Musk. “I was wrong about that,” she notes, although she adds that Soon-Shiong “has been friendly to Trump in other ways.” Then she writes:

But — how’s this for a cautionary tale about the dubious accuracy of artificial intelligence? — a Google “AI overview,” in response to a search, almost immediately took my error and spread it around: “Yes, Dr. Patrick Soon-Shiong attended Donald Trump’s inauguration in 2025. He was seen there alongside other prominent figures like Mark Zuckerberg and Jeff Bezos.” It cited Krugman’s and my conversation. Again, I was wrong and I regret the error.

It does appear that the error was corrected fairly quickly. I asked Google this morning and got this from AI: “Patrick Soon-Shiong did not attend Donald Trump’s second inauguration. Earlier reports and AI overviews that claimed he did were based on an error by a journalist who later issued a correction.” It links to Sullivan’s newsletter.

Unlike Google, Claude makes no mention of Sullivan’s original mistake, concluding, accurately: “While the search results don’t show Patrick Soon-Shiong listed among the most prominent billionaires seated in the Capitol Rotunda (such as Musk, Bezos, Zuckerberg, and others who received extensive coverage), the evidence suggests he was engaged with the inauguration events and has maintained a relationship with Trump’s administration.”

And here’s the verdict from ChatGPT: “I found no credible public evidence that Patrick Soon-Shiong attended Donald Trump’s second inauguration.”

You might cite my findings as evidence that AI corrects mistakes quickly, and in this case it did. (By the way, the error has not yet been corrected at Krugman’s site.) But a less careful journalist than Sullivan might have let the original error hang out there, and it would soon have become part of the established record of who did and didn’t pay homage to Trump on that particular occasion.

In other words: always follow your queries back to the source.