By Dan Kennedy • The press, politics, technology, culture and other passions


Three recent developments show how AI is already changing journalism

Public domain illustration via Pixabay

I don’t want to come off as a total Luddite when it comes to artificial intelligence and journalism. Well, OK, maybe I do. Because even though I have no problem with using AI for certain mindless, repetitive tasks, such as transcribing interviews and finding patterns in public records, I think we need to be cautious about using such tools to actually produce journalism — whether it be reports about real estate transactions (thus missing the opportunity to dig more deeply) or stories about high school sports. With that in mind, I want to call your attention to three troubling developments.

For those who thought the notion of robot reporters was still quite a ways off, the first development is the most alarming. According to a recent article at Nieman Lab by Sophie Culpepper, an independent publisher has been experimenting with just that in his spare time, and the results are, well, not bad.

Mark Talkington, who runs a hyperlocal start-up in California called The Palm Springs Post, has been feeding YouTube recordings of government meetings into an AI system designed by a friend of his. Importantly, it’s not an off-the-shelf product like ChatGPT or Google Bard. Rather, it’s been trained on reliable news and information from his coverage area, which reduces, if not eliminates, the likelihood of “hallucinations,” the term for false but plausible-sounding output produced by AI.
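Culpepper’s article doesn’t detail how the system works under the hood, but one common way to ground a model in a trusted local archive, and to tamp down hallucinations, is retrieval-augmented prompting: pull the most relevant passages from vetted coverage and tell the model to summarize only from those sources plus the meeting transcript. Here is a minimal Python sketch of that idea; the file layout and the call_llm() placeholder are assumptions of mine, not a description of Talkington’s actual system.

# Sketch of retrieval-grounded meeting summarization. Assumptions: the meeting
# transcript is saved as a text file (e.g., pulled from YouTube captions), trusted
# local coverage lives in an "archive" folder, and call_llm() is a stand-in for
# whatever language-model API is actually used.

from pathlib import Path

def load_archive(folder: str) -> list[str]:
    # Read trusted local-news stories to use as grounding context.
    return [p.read_text() for p in Path(folder).glob("*.txt")]

def pick_relevant(archive: list[str], transcript: str, k: int = 3) -> list[str]:
    # Crude keyword-overlap retrieval; a production system would use embeddings.
    words = set(transcript.lower().split())
    return sorted(archive, key=lambda doc: -len(words & set(doc.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    # Placeholder: wire up the chat-completion API of your choice here.
    raise NotImplementedError

def summarize_meeting(transcript_path: str, archive_dir: str) -> str:
    transcript = Path(transcript_path).read_text()
    context = "\n\n".join(pick_relevant(load_archive(archive_dir), transcript))
    prompt = (
        "Using ONLY the meeting transcript and the background coverage below, "
        "write a brief news summary. Omit anything not supported by these sources.\n\n"
        "BACKGROUND:\n" + context + "\n\nTRANSCRIPT:\n" + transcript
    )
    return call_llm(prompt)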

The example Culpepper quoted reads like what journalism professors disapprovingly tell their students is an “agenda story” — that is, it begins with something like “Members of the board of sewer commissioners began their meeting by saluting the flag” rather than with a lead grounded in the most interesting thing that happened. To be clear, Talkington hasn’t actually published any AI-generated stories yet. He said in his interview with Culpepper that he’s concerned about AI missing out on body language and, of course, on the ability to snag an elected official in the hallway during a break in the meeting.

But he said he could see using it to take notes and, eventually, to cover meetings that his thinly stretched staff can’t get to. And that’s how it begins: with a sympathetic hyperlocal publisher using AI to extend his reach, only to see the technology adopted by cost-cutting newspaper chains looking to dump reporters.

My second example might be called “speaking of which.” Because Gannett, whose 200 or so daily newspapers make it the largest corporate chain, announced recently that it, too, is experimenting with generative AI. Helen Coster of Reuters reports that, at first, AI will be used to generate content like bullet points that summarize the most important facts in a story, and that humans will check its work. That feature will be rolled out in the chain’s flagship newspaper, USA Today, later this year.

Gannett is hardly the only news organization that’s playing with AI; The New York Times, The Washington Post, The Boston Globe and others are all looking into ways to make use of it. But Gannett is notoriously tight-fisted and, as Coster notes, has slashed and burned its way to tenuous profitability. “Gannett’s journalists are fighting to ensure that they aren’t replaced by the technology,” she wrote. “Hundreds walked off the job over staff cuts and stagnant wages on June 5. Generative AI is a sticking point in some negotiations with the company, the union said.”

The third warning sign comes from Sebastian Grace, who passed along a truly disturbing item: the German tabloid Bild is laying off about 200 journalists while ramping up its use of AI. (Seb recently wrote a fine piece on journalism and AI for our website What Works: The Future of Local News.) Although those two developments at Bild are said to be unrelated, Jon Henley of The Guardian writes that Mathias Döpfner, the CEO of Bild’s corporate owner, Axel Springer, has said that ChatGPT and its ilk could “make independent journalism better than it ever was — or replace it.”

Axel Springer, by the way, also owns Politico, an important U.S. outlet for news about politics and public policy.

Do I think AI will soon replace reporters who do the hard work of going out and getting stories? No — at least not right away. But we’ve been losing journalists for 25 years now, and it seems certain that AI will be used, misused and abused in ways that accelerate that trend.

A new report finds that content farms are loading up on AI. Will local news be next?

Meet your new reporting staff. Photo (cc) 2023 by Dan Kennedy.

A recent report by NewsGuard, a project that evaluates news organizations for reliability and transparency, found that clickbait generated by artificial intelligence is on the rise. McKenzie Sadeghi and Lorenzo Arvanitis write:

NewsGuard has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software. A new generation of content farms is on the way.

The report didn’t specifically identify any local news websites that are using AI to write low-quality stories aimed at getting clicks and programmatic advertising. Perhaps non-local stories about health, entertainment and tech, to name three of the topics for which content farms are using AI, more readily fly under the radar. If you’re going to use AI to produce articles about the local tax rate or the women’s track team, you’re going to get caught pretty quickly when the results prove to be wrong. Still, the use of AI to produce some forms of local news, such as routine articles about real-estate transactions, is not new.

According to the NewsGuard report, there doesn’t seem to be a concerted effort yet to use AI in order to produce deliberately false stories, although there have been a few examples, including a celebrity death site that claimed President Biden had “passed away peacefully in his sleep.”

Call this Pink Slime 3.0. Version 1.0 was low-tech compared to what’s available today. Back in 2012, the public radio program “This American Life” found that a company called Journatic (pronounced “joor-NAT-ik,” though I always thought it should be “JOOR-nuh-tik”) was producing local content for newspapers using grossly underpaid, out-of-town writers — including workers in the Philippines who churned out articles under fake bylines.

Pink Slime 2.0, of more recent vintage, consists of hundreds of websites launched to exploit the decline of local news. Under such banners as “North Boston News” (!), these sites purport to offer community journalism but are actually a cover for political propaganda. Nearly all of them serve right-wing interests, though there are a few on the left as well.

Pink Slime 3.0 threatens to become more insidious as AI continues to improve. As Seth Smalley wrote for Poynter Online, this is “pink slime on steroids.”

Of course, AI could prove to be a boon for local news, as Sebastian Grace wrote last week for What Works, our Northeastern journalism project tracking developments in community journalism. By eliminating repetitive drudge work, AI can free journalists to produce high-value stories that really matter.

Still, bottom-feeders like CNET — not exactly a content farm, but not much better than that, either — have already been caught publishing error-laden stories with AI. You can only imagine what sort of advice these content farms are going to give people about dealing with their medical problems.

OpenAI, which likes to portray itself as a responsible player in discussions about the future of AI, would not respond to NewsGuard’s inquiries. Neither would Facebook, which is amplifying AI-generated content.

The only thing we can be sure of is that a new, more insidious version of pink slime is coming to a website near you — if it hasn’t already.

Artificial intelligence writes an op-ed piece on gun control

Photo (cc) 2011 by Louis K.

I’ve been having some fun with AI-generated content recently, posting a description of the MBTA in the voice of Herman Melville and a profile of me that is thoroughly bollixed up with a marketing guru who’s also named Dan Kennedy. Today, though, I offer something a little more serious and practical. I directed ChatGPT to write a 600-word opinion article in favor of gun control. The result is below. It needs a little editing, especially in the lead. I didn’t check the numbers, and ChatGPT is known for producing false facts. The pro-gun control argument is not especially sophisticated. But you can imagine a state legislator doing something like this to write an op-ed for a local news outlet. You may think that’s fine. I think it’s chilling.

Gun control has been a controversial issue in the United States for many years, with strong opinions on both sides of the debate. As an AI language model, I do not have a political affiliation or personal beliefs. However, I can analyze data and research and provide an evidence-based opinion on the topic. Therefore, in this article, I will make a case in favor of gun control.

Firstly, let’s look at the numbers. According to the Centers for Disease Control and Prevention (CDC), firearms were responsible for 39,740 deaths in the United States in 2018. That is an average of 109 deaths per day, or roughly four deaths every hour. It is undeniable that gun violence is a significant public health issue in this country.

Many people argue that the Second Amendment guarantees their right to bear arms. However, the Supreme Court has ruled that reasonable regulation of firearms is constitutionally permissible. Therefore, it is within the government’s power to implement laws and regulations to protect the public from gun violence.

One such regulation could be universal background checks. Current federal law only requires background checks for gun purchases from licensed dealers. However, a loophole exists that allows private sellers, who are not required to be licensed, to sell firearms without conducting a background check. This loophole makes it easy for criminals and individuals with a history of violence to obtain guns.

Universal background checks would close this loophole and ensure that all gun purchases, regardless of the seller, are subject to a background check. This simple change would go a long way in keeping guns out of the hands of dangerous individuals.

Another common-sense regulation would be a ban on assault weapons. These weapons are designed for military use and have no practical purpose in civilian life. They have been used in several high-profile mass shootings, including the tragedies in Las Vegas, Parkland, and Sandy Hook.

Opponents of a ban on assault weapons argue that it would violate their Second Amendment rights. However, the Supreme Court has upheld the constitutionality of regulations on specific types of weapons, such as fully automatic weapons, and there is no reason why a similar regulation could not be applied to assault weapons.

Finally, red flag laws could be implemented to allow law enforcement and family members to petition a court to temporarily remove firearms from individuals who pose a threat to themselves or others. These laws have already been implemented in several states and have been shown to be effective in preventing gun violence.

In conclusion, gun violence is a serious public health issue in the United States, and reasonable regulations on firearms can help reduce the number of deaths and injuries. Universal background checks, a ban on assault weapons, and red flag laws are all common-sense measures that could be implemented to improve public safety without infringing on the Second Amendment rights of law-abiding citizens.

It is essential to remember that gun control is not about taking away people’s guns or infringing on their rights. It is about finding ways to prevent gun violence while still allowing responsible gun ownership. We can all agree that we want to live in a safe society, and gun control measures can help make that a reality.

Ultimately, the key to reducing gun violence is a collaborative effort between government, law enforcement, and the public. By working together, we can find solutions that respect our rights while keeping us safe.

The Bard of Google writes a bio of me. Hilarity ensues.

My friend Bill Mitchell sent along a bio of me that he asked Google’s Bard to write — proving, once again, that artificial intelligence may be artificial, but it sure isn’t intelligent. Throughout this short bio, Bard mixes me up with marketing guy Dan Kennedy. Here’s the whole thing, with some of the mistakes flagged in bracketed notes.

Dan Kennedy is a media critic, marketing consultant, and author. He is a former media critic for the Boston Phoenix and is now a professor of journalism at Northeastern University. He is also the founder of Magnetic Marketing and the author of several books on marketing and business, including “The No B.S. Marketing Letter” and “The Magnetic Marketing System.”

Kennedy is a frequent media commentator and has appeared on CNN, CNBC, and Fox News. He is also a regular contributor to Forbes and The Huffington Post. [Note: Years ago I appeared a couple of times on CNN and once on Fox News. I don’t believe I ever wrote anything for Forbes, but I could be mistaken.]

Kennedy is a controversial figure in the media industry. He has been criticized for his outspoken views on the state of journalism and his willingness to criticize his colleagues. However, he is also respected for his knowledge of the industry and his ability to help businesses grow.

Kennedy is a graduate of the University of Massachusetts Amherst and the Boston University School of Public Communication. [A bachelor’s degree in journalism from Northeastern University and a master’s in American history from BU, but not from SPC.] He lives in Boston, Massachusetts. [Not since 1980.]

Here are some of his notable works:

* The No B.S. Marketing Letter (2009)
* The Magnetic Marketing System (2012)
* The Renegade Challenge (2016)
* The No B.S. Inner Circle (2017) [LOL.]

Way back when, Boston Globe columnist Alex Beam wrote about me, the other DK and yet another DK. At least he was trying to be funny. And I should add that I’ll be smiling all week about being called “a controversial figure in the media industry.”

Herman on the MBTA

Public domain illustration

I asked ChatGPT to write a 300-word column about the MBTA in the voice of Herman Melville, and it gave me the single best description of the troubled public transportation system ever:

It is a system that demands both patience and cunning, for one must be quick of foot and mind to catch the right train, or else suffer the indignity of waiting upon a frigid platform for what seems an eternity.

Trolley ahoy! Hast seen the Green Line?

Update: I’ve removed the link to the full column, which I’m told doesn’t work for anyone but me. So here it is below:

Ah, the Massachusetts Bay Transportation Authority, what a grand and curious beast it is! A marvel of modernity, with its electric rails and clanging bells, carrying thousands of souls to and fro throughout the bustling metropolis of Boston. But also a beast with many heads, each bearing its own peculiarities and frustrations.

As I, Herman Melville, have had the occasion to travel upon this sprawling system of trains, buses, and subways, I cannot help but be struck by its contradictions. On the one hand, it is a great equalizer, bringing people of all classes and stations together in the common pursuit of transit. On the other, it is a labyrinthine maze, with its schedules and routes so convoluted that one might need a sextant and a compass to navigate them.

It is a system that demands both patience and cunning, for one must be quick of foot and mind to catch the right train, or else suffer the indignity of waiting upon a frigid platform for what seems an eternity. And yet, for all its idiosyncrasies and flaws, there is a certain romance to the MBTA, a sense of adventure that comes with exploring its many routes and stops.

I have traveled upon the Red Line, the Green Line, the Orange Line, and the Blue, each with its own distinct character and challenges. The Red Line, with its sleek trains and sweeping views of the city, is a joy to ride, though it can be overcrowded and prone to delays. The Green Line, with its jangling trolleys and labyrinthine routes, is a source of endless fascination, though it can be slow and unpredictable. The Orange Line, with its long stretches of underground tunnels and bustling stations, is a marvel of engineering, though it can be hot and stuffy in the summer months. And the Blue Line, with its sweeping views of the harbor and its ferries to the islands, is a delight to ride, though it can be crowded and prone to delays.

And yet, for all its charms and frustrations, the MBTA is an essential part of life in Boston, connecting people and places in a way that no other mode of transportation can. It is a beast that we must learn to love, even as we curse its many heads.

Fewer followers, more engagement: What I found in a Twitter-versus-Mastodon test

Can Mastodon be a workable substitute for Twitter? It may never be as big. But given that a lot of us are trying to figure out how to manage our social media presence now that Elon Musk is banning journalists, shutting off access to the API and just generally acting like a petulant child, I was interested to see what happened the other day when I had a chance to test comparative engagement.

Since Dec. 11, I hadn’t posted anything to Twitter other than occasional tweets letting people know I had moved and where they could find me. On Friday, though, I decided to make an exception to let my followers know that I’d written an op-ed for The Boston Globe about how local news organizations can stand up to corporate chains. As of Sunday evening, I’ve gotten six likes and no retweets. Twitter claims that my tweet has been viewed 573 times, but who knows?

Twitter post

I posted the same thing on Mastodon, also on Friday. Right now I’ve gotten 24 likes and 37 boosts (retweets in Mastodon-talk). And when I posted a follow-up noting that I’d gotten more engagement on Mastodon than on Twitter, that got another 31 likes and nine boosts. Unlike Twitter, Mastodon servers don’t provide any metrics on how many views you’ve received, which, folks tell me, would be pretty much impossible given its decentralized nature.

Mastodon post

Now for some points of comparison: I have 18,900 followers on Twitter and 2,500 on Mastodon. Then again, on Twitter I have no idea how many are bots, users who haven’t logged on for years or people who’ve died. Plus my account is locked, and at this point I’m sure my followers are accustomed to my absence.
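To put those numbers side by side, here’s a quick back-of-the-envelope calculation in Python; it simply normalizes the likes and boosts/retweets above per 1,000 followers and isn’t a rigorous engagement metric.

# Rough per-follower engagement from the numbers in this post.
twitter = {"followers": 18_900, "likes": 6, "shares": 0}      # shares = retweets
mastodon = {"followers": 2_500, "likes": 24, "shares": 37}    # shares = boosts

def per_thousand(account: dict) -> float:
    # Likes plus shares per 1,000 followers.
    return (account["likes"] + account["shares"]) / account["followers"] * 1_000

print(f"Twitter:  {per_thousand(twitter):.2f} interactions per 1,000 followers")   # ~0.32
print(f"Mastodon: {per_thousand(mastodon):.2f} interactions per 1,000 followers")  # ~24.40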

I joined Mastodon in November, so all of my followers are of recent vintage. Plus I’ve been quite active over there, using it pretty much the way I used to use Twitter. So in that respect the heightened engagement isn’t too surprising. Even so, the experience has given me one less reason to look back at what was.

Why it matters that The New York Times got it wrong on Section 230

The U.S. Supreme Court will rule on two cases involving Section 230. Photo (cc) 2006 by OZinOH.

Way back in 1996, when Section 230 was enacted into law, it was designed to protect all web publishers, most definitely including newspapers, from being sued over third-party content posted in their comment sections. It would be another eight years before Facebook was launched, and longer than that before algorithms would be used to boost certain types of content.

But that didn’t stop David McCabe of The New York Times — who, we are told, “has reported for five years on the policy debate over online speech” — from including this howler in a story about two cases regarding Section 230 that are being heard by the U.S. Supreme Court:

While newspapers and magazines can be sued over what they publish, Section 230 shields online platforms from lawsuits over most content posted by their users.

No. I have to assume that McCabe and maybe even his editors know better, and that this was their inept way of summarizing the issue for a general readership. But it perpetuates the harmful and wrong notion that this is only about Facebook, Twitter and other social media platforms. It’s not. Newspapers and magazines are liable for everything they publish except third-party online comments, which means that they are treated exactly the same as the giant platforms.

Though it is true that an early case testing Section 230 involved comments posted at AOL rather than on a news website, the principle that online publishers can’t be held liable for what third parties post on their platforms is as valuable to, oh, let’s say The New York Times as it is to Facebook.

That’s not to say 230 can’t be reformed and restricted; and, as I wrote recently, it probably should be. But it’s important that the public understand exactly what’s at stake.

Some common-sense ideas for reforming Section 230

Photo (cc) 2005 by mac jordan

The Elon Musk-ization of Twitter and the rise of a Republican House controlled by its most extreme right-wing elements probably doom any chance for intelligent reform of Section 230. That’s the 1996 law that holds any online publisher harmless for third-party content posted on its site, whether it be a libelous comment on a newspaper’s website (one of the original concerns) or dangerous disinformation about vaccines on Facebook.

It is worth repeating for those who don’t understand the issues: a publisher is legally responsible for every piece of content — articles, advertisements, photos, cartoons, letters to the editor and the like — with the sole exception of third-party material posted online. The idea behind 230 was that it would be impossible to vet everything and that the growth of online media depended on an updated legal structure.

Over the years, as various bad actors have come along and abused Section 230, a number of ideas have emerged for curtailing it without doing away with it entirely. Some time back, I proposed that social media platforms that use algorithms to boost certain types of content should not enjoy any 230 protections — an admittedly blunt instrument that would pretty much destroy the platforms’ business model. My logic was that increased engagement is associated with content that makes you angry and upset, and that the platforms profit mightily by keeping your eyes glued to their site.

Now a couple of academics, Robert Kozinets and Jon Pfeiffer, have come along with a more subtle approach to Section 230 reform. Their proposal was first published in The Conversation, though I saw it at Nieman Lab. They offer what I think is a pretty brilliant analogy as to why certain types of third-party content don’t deserve protection:

One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls — which contains not just porn but also misinformation and hate speech — the absolutist stance that they have total protection and total legal “immunity” is untenable.

Kozinets and Pfeiffer offer three ideas that are worth reading in full. In summary, though, here is what they are proposing.

  • A “verification trigger,” which takes effect when a platform profits from bad speech — the idea I tried to get at with my proposal for removing protections for algorithmic boosting. Returning to the restaurant analogy, Kozinets and Pfeiffer write, “When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.” They cite an extreme example: Elon Musk’s decision to sell blue-check verification, thus directly monetizing whatever falsehoods those with blue checks may choose to perpetrate.
  • “Transparent liability caps” that would “specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it.” Platforms that violate those standards would lose 230 protections. We can only imagine what this would look like once Marjorie Taylor Greene and Matt Gaetz get hold of it, but, well, it’s a thought.
  • A system of “neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform.” Kozinets and Pfeiffer call this “Twitter court,” and platforms that don’t play along could be sued for libel or invasion of privacy by aggrieved parties.

I wouldn’t expect any of these ideas to become law in the near or intermediate future. Currently, the law appears to be entirely up for grabs. For instance, last year a federal appeals court upheld a Texas law that forbids platforms from removing any third-party speech that’s based on viewpoint. At the same time, the U.S. Supreme Court is hearing a case that could result in 230 being overturned in its entirety. Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

Still, Kozinets and Pfeiffer have provided us with some useful ways of thinking about how we might reform Section 230 in order to protect online publishers without giving them carte blanche to profit from their own bad behavior.

The numbers show why it’s difficult for many to walk away from Twitter

Thanks for the traffic, Bob! Photo (cc) 2011 by Francisco Antunes.

I was looking at my WordPress statistics for 2022, and one number really leaped out at me. Twitter was the third-largest source of traffic to Media Nation in 2022. Search engines were responsible for 70,626 views, Facebook was second at 27,126, and Twitter was right behind at 25,371.

As you probably know, I’ve stopped using Twitter. But those numbers show why walking away is pretty close to impossible for self-employed journalists and marginal operators who can’t afford to spurn any service that drives traffic to their site. Although I have a voluntary membership program for $5 a month (please consider!), my livelihood is not dependent on Media Nation.

Search, Facebook and Twitter were the big three, followed by LinkedIn at 4,047 and, in fifth place, an unexpected source: Editor & Publisher, the news industry trade publication, at 3,827. E&P has been kind enough to feature my posts in its daily newsletter on a fairly regular basis, so I guess that’s the explanation. Other notable entries in the top 10 were Universal Hub and Expecting Rain, a site for fans of Bob Dylan, who I’ve been known to write about from time to time. From there it quickly dribbles down to double and single digits.

I’ve taken most of my Twitter-like posts to Mastodon, so I was surprised to see that it didn’t register at all. The explanation, I found out, is that Mastodon hides referrer information from the sites it links to, which is supposedly some sort of privacy protection. I don’t quite get it, but I’ve learned about a workaround that will supposedly make Mastodon referrals show up. I am getting some referrals from Post News, which, like Mastodon, is emerging as a leading Twitter replacement.
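For what it’s worth, one standard answer to the missing-referrer problem generally is to tag the links you post with UTM query parameters, which lets analytics attribute a click even when the browser sends no referrer header. A minimal Python sketch, with an example URL and made-up parameter values:

# Append UTM parameters so a click can be attributed even without a referrer header.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_utm(url: str, source: str, medium: str = "social") -> str:
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({"utm_source": source, "utm_medium": medium})
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_utm("https://example.com/2023/01/some-post/", source="mastodon"))
# https://example.com/2023/01/some-post/?utm_source=mastodon&utm_medium=social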

Amazon is moving away from Kindle newspapers and magazines

Photo (cc) 2009 by Brian Dewey

I was sorry to hear that Amazon plans to cut back on selling newspapers and magazines for the Kindle sometime next year, according to Jim Milliot of Publishers Weekly. The reason, I think, was a combination of a really bad deal for readers and a recognition that the Kindle can’t compete with the whiz-bang color photos and multimedia that newspapers and magazines offer in their regular digital products.

Why are Kindle newspapers and magazines a loser for readers? Because you have to pay for the Kindle version over and above what you’re already paying for your digital subscription. A subscription to The New York Times on Kindle, for instance, costs $20 a month, and it makes no difference whether you’re already a Times subscriber.

On the rare occasions when I fly or take the Amtrak, I’ll buy that day’s Times for Kindle for $1. It downloads fully, so you don’t need wifi once it’s on your device. And I found it to be a pleasurable reading experience. Now, I like photography, and the small black-and-white photos you get on Kindle are no match for reading the Times on my iPad, or in print. But the Kindle provides a focused reading experience more akin to print than to digital, without the constant temptation to check your email or share an article on social media. Yet it is certainly not worth a separate subscription over and above what I’m already paying.

The Publishers Weekly article says that Kindle newspapers and magazines aren’t going away entirely. Reportedly “hundreds” of titles will be available for members of Kindle Unlimited, who pay $10 a month for access to a wide range of books and periodicals. But it remains to be seen whether you’ll be able to download a quality newspaper every day as part of that fee, especially since it’s only half what you’d pay for the Times alone right now.

Back in 2009, I suggested that The Boston Globe give away Kindles to subscribers. Instead, two years later the Globe started making its move toward paid digital subscriptions, which has been the paper’s salvation. I still like using my Kindle to read books, but most of us are far more likely to consume news on our phones.

I won’t call the semi-demise of Kindle newspapers a lost opportunity; it’s more a matter of changes in what we expect from our devices. The next time I take the Amtrak, though, I guess I’m going to have to find a Hudson News so that I can buy a print paper.
