How Claude AI helped improve the look and legibility of Media Nation

Public domain illustration via Pixabay.

For quite a few years I used WordPress’ indent feature for blockquotes rather than the actual blockquote element. The reason was that blockquotes in the theme I use (Twenty Sixteen) were ugly, with type larger than the regular text (the opposite of what you would see in a book or a printed article) and in italics.

But then I noticed that indents didn’t show up at all in posts that went out by email, leading to confusion among my subscribers — that is, my most engaged readers. I decided to find out if I could modify the blockquote styling. WordPress allows you to add custom CSS to your theme, but I know very little about CSS. I could have asked in a WordPress forum, but instead I decided to see whether AI could give me an answer.


Northeastern has given us all access to the enterprise version of Claude, Anthropic’s AI platform. It’s a mixed blessing, although I’ve found that it’s very good as a search engine — often better than Google, which is now also glopped up by AI. I simply make sure I ask Claude to add the underlying links to its answer so I don’t get taken in by hallucinations. But Claude is also known for being quite good at coding. What I needed was low-level, so I thought maybe it could help.

Indeed it could. I began by asking, “In the Twenty Sixteen WordPress theme, how can I change the CSS so that blockquotes do not appear in italics?” Claude provided me with several options; I chose the simplest one, which was a short bit of custom CSS that I could add to my theme:

blockquote {
     font-style: normal;
}

It worked. A subsequent query enabled me to make the blockquote type smaller. Then, just last week, I noticed that any formatting in the blockquote was stripped out. For instance, a recent memo from Boston Globe Media CEO Linda Henry contained boldface and italicized text, which did not appear when I reproduced her message. The formatting code was there; it just wasn’t visible. Claude produced CSS rules that overrode the theme’s styling. You can see the results here, with bold and italic type just as Henry had it in her message.
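The later CSS isn’t reproduced in this post, but a minimal sketch of that kind of fix might look like the following. The selectors and the font size here are illustrative assumptions, not the theme’s actual values:

blockquote {
    font-style: normal; /* drop the theme's italics */
    font-size: 0.9em;   /* illustrative: slightly smaller than the body text */
}

/* Restore inline formatting inside blockquotes. These selectors assume
   the theme neutralizes em/strong within quotes; adjust to match yours. */
blockquote em,
blockquote i {
    font-style: italic;
}

blockquote strong,
blockquote b {
    font-weight: bold;
}

Because WordPress loads custom CSS after the theme’s own stylesheet, rules like these take precedence over the theme’s blockquote styling without needing !important.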

I make some light use of AI in my other work. When I need to transcribe an audio interview, I use Otter, which is powered by AI. I’ve experimented with using AI to compile summaries from transcripts and even (just for my own use) to write an actual news story. Very occasionally I’ve used AI to produce illustrations for this blog, which seems to draw more objections than other AI applications, probably because it’s right in people’s faces.

Just the other day, someone complained to me on social media that she was not going to visit a local news outlet I had mentioned because she had encountered an AI-produced illustration there. When I asked why, she replied that it was because AI relies on plagiarism. Oh, I get it. Sometime this year I’m hoping to receive $3,000 as my share of a class-action lawsuit against Anthropic because one of my books, “The Return of the Moguls,” was used to train Claude.

And let’s not overlook the massive amounts of energy that are required to power AI. On a recent New York Times podcast, Ezra Klein and his guests observed that AI is deeply unpopular with the public (sub. req.), even though they’re using it, because all they really know is that it’s going to take away jobs and is driving up electricity costs.

But AI isn’t going anywhere, and if we’re going to use it (and we are, even if we try to avoid it), we need to find ways to do so ethically and responsibly.

How Margaret Sullivan’s slip of the tongue became (briefly) an AI-generated ‘fact’

Paul Krugman and Margaret Sullivan. Photo via Paul Krugman’s newsletter.

Media critic Margaret Sullivan made an error recently. No big deal — we all do it. But her account of what happened next is worth thinking about.

First, the error. Sullivan writes in her newsletter, American Crisis, that she recently appeared on economist Paul Krugman’s podcast and said that Los Angeles Times owner Patrick Soon-Shiong was among the billionaires who joined Donald Trump at his second inauguration earlier this year, along with the likes of Mark Zuckerberg, Jeff Bezos and Elon Musk. “I was wrong about that,” she notes, although she adds that Soon-Shiong “has been friendly to Trump in other ways.” Then she writes:

But — how’s this for a cautionary tale about the dubious accuracy of artificial intelligence? — a Google “AI overview,” in response to a search, almost immediately took my error and spread it around: “Yes, Dr. Patrick Soon-Shiong attended Donald Trump’s inauguration in 2025. He was seen there alongside other prominent figures like Mark Zuckerberg and Jeff Bezos.” It cited Krugman’s and my conversation. Again, I was wrong and I regret the error.

It does appear that the error was corrected fairly quickly. I asked Google this morning and got this from its AI overview: “Patrick Soon-Shiong did not attend Donald Trump’s second inauguration. Earlier reports and AI overviews that claimed he did were based on an error by a journalist who later issued a correction.” It links to Sullivan’s newsletter.

Unlike Google, Claude makes no mention of Sullivan’s original mistake, concluding, accurately: “While the search results don’t show Patrick Soon-Shiong listed among the most prominent billionaires seated in the Capitol Rotunda (such as Musk, Bezos, Zuckerberg, and others who received extensive coverage), the evidence suggests he was engaged with the inauguration events and has maintained a relationship with Trump’s administration.”

And here’s the verdict from ChatGPT: “I found no credible public evidence that Patrick Soon-Shiong attended Donald Trump’s second inauguration.”

You might cite my findings as evidence that AI corrects mistakes quickly, and in this case it did. (By the way, the error has not yet been corrected at Krugman’s site.) But a less careful journalist than Sullivan might have let the original error hang out there, and it would soon have become part of the established record of who did and didn’t pay homage to Trump on that particular occasion.

In other words: always follow your queries back to the source.