By Dan Kennedy • The press, politics, technology, culture and other passions

Category: Technology

The unimpressive, trying-too-hard debut of Threads

Photo (cc) 2011 by J E Theriot

They say you only get one chance to make a good first impression. If that’s true, then Mark Zuckerberg missed that chance with the debut of Threads. There’s no browser access, so you’re stuck using your phone. You can’t switch to a reverse-chronological non-algorithmic feed of accounts you follow. Even Elon Musk still lets you do that at Twitter. No lists.

The whole thing, teeming with brands and celebrities you’re not interested in, feels very commercial in a forced-joviality, trying-too-hard way. These things can be fixed unless Zuck thinks they’re features rather than bugs. For now, though … not great.

Musk’s latest moves call into question the future of short-form social media

Elon Musk isn’t laughing with us. He’s laughing at us. Photo (cc) 2022 by Steve Jurvetson.

Update: Ivan Mehta of TechCrunch reports that Twitter may have already reversed itself on requiring log-ins to view tweets. I’ll test it later and think about whether I want to go to the trouble of restoring our Twitter timeline to What Works.

Today I want to return to a topic that I write about from time to time: the ongoing travails of Twitter under Elon Musk and the future of what I’ll call short-form interactive social media, which some people still refer to as “microblogging.” It’s something that’s of no interest to the vast majority of people (and if I’m describing you, then you have my congratulations and admiration) but of tremendous interest to a few of us.

You may have heard that a number of changes hit Twitter over the weekend, some deliberate, some perhaps accidental. Twitter imposed a “rate limit” on reading: at first, non-subscribers could view just 600 posts per day, while those who pay $8 a month could view 6,000. Those limits were later raised. Now, very few people are paying $8 for those blue check marks and extra privileges, and you can hit 600 (or 800, or 1,000, or whatever it is at the moment) pretty quickly if you’re zipping through your timeline. It was and is a bizarre limitation, since it means that users will spend less time on the site and see fewer ads from Twitter’s declining inventory.

Twitter also got rid of its classic TweetDeck application, which let users set up columns for lists, notifications and the like, and switched everyone over to a new, inferior version — and then announced that TweetDeck will soon be restricted to those $8-a-month customers.

Finally, and of the greatest significance to me and my work, you can no longer view a tweet unless you’re actually logged in to Twitter. We’ve all become accustomed to news outlets embedding tweets in stories. I do it myself sometimes. Well, now that has stopped working. Maybe it’s not that big a deal. After all, you can take a screenshot and/or quote from it, just as you can from any source. But it’s an extra hassle for both publishers and readers.

The problem

Moreover, this had a significant negative effect on What Works, the website about the future of local news that Ellen Clegg and I host. Just recently, I decided to add a news feed of updates and brief items to the right-hand rail, powered by Twitter. It was a convenient way of informing our readers regardless of whether they were Twitter users. And on Monday, it disappeared. What I’ve come up with to replace it is a half-solution: a box that links to our Mastodon account, which can still be read by Mastodon users and nonusers alike. But it’s an extra step. In order to add an actual Mastodon news feed, we would either need to pay more or switch to a different hosting service and put up with the attendant technical challenges.
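For sites that aren’t boxed in by their hosting service, there’s a lighter-weight option worth knowing about: every Mastodon account publishes a public RSS feed at its profile URL with “.rss” appended. Here’s a minimal sketch, in Python with the feedparser library, of how a sidebar widget might pull an account’s recent posts. The account URL is a placeholder, not our actual handle.

```python
# Minimal sketch: read a Mastodon account's public RSS feed.
# Assumes feedparser is installed (pip install feedparser).
import feedparser

FEED_URL = "https://example.social/@whatworks.rss"  # placeholder account

def latest_posts(url: str, limit: int = 5) -> list[dict]:
    """Return the text and link of the account's most recent posts."""
    feed = feedparser.parse(url)
    return [
        {"text": entry.get("summary", ""), "link": entry.link}
        for entry in feed.entries[:limit]
    ]

if __name__ == "__main__":
    for post in latest_posts(FEED_URL):
        print(post["link"])
```

A self-hosted WordPress or static site could run something like this on a schedule and render the results into the sidebar, no paid embed required.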

What is Musk up to? I can’t imagine that he’s literally trying to destroy Twitter, but if he were, he’d be doing exactly what he’s doing. It’s strange. Twitter is now being inundated with competitors, the largest of which is Mastodon, a decentralized system that runs mainly on volunteer labor. Meanwhile, Twitter co-founder Jack Dorsey is slowly unveiling a very Twitter-like service called Bluesky (still in beta and, for the moment, invitation-only), and, this Thursday, Facebook (I refuse to call it Meta) will debut Threads. If Mark Zuckerberg doesn’t screw it up, I think Threads, which is tied to Instagram, might prove to be a formidable challenger.

Still, what made Twitter compelling was that it was essentially the sole platform for short-form interactive social media. The breakdown of that audience into various niches makes it harder for any one service to benefit from the network effect. I’ve currently got conversations going on in three different places, and when I want to share links to my work, I now have to go to Twitter, Mastodon and Bluesky (which I just joined), not to mention Facebook and LinkedIn.

The solution

And speaking of the network effect: Twitter may be shrinking, but, with 330 million monthly active users, it’s still by far the largest of the three short-form platforms. Mastodon was up to 10 million registered users as of March (that number grows in spurts every time Musk indulges his inner sociopath), and Bluesky has just 100,000 — although another 2 million or so are on the wait list. What that means for my work is that just a handful of the media thought leaders I need to follow and interact with are on Mastodon or Bluesky, and, from what I can tell, none (as in zero) of the people and organizations that track developments in local news have budged from Twitter.

It will likely turn out that the social media era was brief and its demise unlamented. In the meantime, what’s going on is weird and — for those of us who depend on this stuff — aggravating. In some ways, I would like to see one-stop short-form social media continue. My money is on Threads, although I suspect that Zuckerberg’s greed will prevent it from realizing its full potential.

Three recent developments show how AI is already changing journalism

Public domain illustration via Pixabay

I don’t want to come off as a total Luddite when it comes to artificial intelligence and journalism. Well, OK, maybe I do. Because even though I have no problem with using AI for certain mindless, repetitive tasks, such as transcribing interviews and finding patterns in public records, I think we need to be cautious about using such tools to actually produce journalism — whether it be reports about real estate transactions (thus missing the opportunity to dig more deeply) or stories about high school sports. With that in mind, I want to call your attention to three troubling developments.

For those who thought the notion of robot reporters was still quite a ways off, the first development is the most alarming. According to a recent article at Nieman Lab by Sophie Culpepper, an independent publisher has been experimenting with just that in his spare time, and the results are, well, not bad.

Mark Talkington, who runs a hyperlocal start-up in California called The Palm Springs Post, has been feeding YouTube videos of government meetings into an AI system designed by a friend of his. Importantly, it’s not an off-the-shelf product like ChatGPT or Google Bard. Rather, it’s been trained on reliable news and information from his coverage area, which reduces if not eliminates the likelihood of “hallucinations,” the term for false but plausible-sounding output produced by AI.

The example Culpepper quoted reads like what journalism professors disapprovingly tell their students is an “agenda story” — that is, it begins with something like “Members of the board of sewer commissioners began their meeting by saluting the flag” rather than with a lead grounded in the most interesting thing that happened. Talkington hasn’t actually published any AI-generated stories yet, though. He said in his interview with Culpepper that he’s concerned about AI missing out on body language and, of course, on the ability to snag an elected official in the hallway during a break in the meeting.

But he said he could see using it to take notes and, eventually, to cover meetings that his thinly stretched staff can’t get to. And that’s how it begins: with a sympathetic hyperlocal publisher using AI to extend his reach, only to see the technology adopted by cost-cutting newspaper chains looking to dump reporters.
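Culpepper’s article doesn’t include Talkington’s code, and his tool is custom-built, so what follows is only a hypothetical sketch of the general shape such a pipeline could take: pull the transcript of a posted meeting video, retrieve trusted local coverage for grounding, and ask a language model for a draft. The retrieval and drafting functions below are stand-ins, not his system.

```python
# Hypothetical sketch of an AI meeting-coverage pipeline. NOT
# Talkington's actual system, which is custom-built; the retrieval
# and drafting steps are stand-ins. Assumes youtube-transcript-api
# is installed (pip install youtube-transcript-api).
from youtube_transcript_api import YouTubeTranscriptApi

def fetch_transcript(video_id: str) -> str:
    """Pull the captions from a meeting video posted to YouTube."""
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    return " ".join(seg["text"] for seg in segments)

def search_local_archive(query: str) -> list[str]:
    """Stand-in: retrieve vetted local coverage to ground the model.
    Grounding in trusted sources is what reduces, though doesn't
    eliminate, hallucinated 'facts.'"""
    raise NotImplementedError  # depends on your archive and search index

def draft_story(transcript: str, background: list[str]) -> str:
    """Stand-in: prompt a language model with the transcript plus
    the retrieved background and ask for a draft that cites sources."""
    raise NotImplementedError  # any LLM API could slot in here

if __name__ == "__main__":
    transcript = fetch_transcript("VIDEO_ID_HERE")  # placeholder ID
    background = search_local_archive("board of sewer commissioners")
    print(draft_story(transcript, background))
```

Note that the grounding step is doing the real work here; without it, you’re back to an off-the-shelf chatbot confabulating quotes.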

My second example might be called “speaking of which”: Gannett, whose 200 or so daily newspapers make it the country’s largest corporate newspaper chain, announced recently that it, too, is experimenting with generative AI. Helen Coster of Reuters reports that, at first, AI will be used to generate content like bullet points that summarize the most important facts in a story, and that humans will check its work. That feature will be rolled out in the chain’s flagship newspaper, USA Today, later this year.

Gannett is hardly the only news organization that’s playing with AI; The New York Times, The Washington Post, The Boston Globe and others are all looking into ways to make use of it. But Gannett is notoriously tight-fisted and, as Coster notes, has slashed and burned its way to tenuous profitability. “Gannett’s journalists are fighting to ensure that they aren’t replaced by the technology,” she wrote. “Hundreds walked off the job over staff cuts and stagnant wages on June 5. Generative AI is a sticking point in some negotiations with the company, the union said.”

The third warning sign comes from Sebastian Grace, who passed along a truly disturbing item: the German tabloid Bild is laying off about 200 journalists while ramping up its use of AI. (Seb recently wrote a fine piece on journalism and AI for our website What Works: The Future of Local News.) Although those two developments at Bild are said to be unrelated, Jon Henley of The Guardian writes that Mathias Döpfner, the CEO of Bild’s corporate owner, Axel Springer, has said that ChatGPT and its ilk could “make independent journalism better than it ever was — or replace it.”

Axel Springer, by the way, also owns Politico, an important U.S. outlet for news about politics and public policy.

Do I think AI will soon replace reporters who do the hard work of going out and getting stories? No — at least not right away. But we’ve been losing journalists for 25 years now, and it seems certain that AI will be used, misused and abused in ways that accelerate that trend.

A new report finds that content farms are loading up on AI. Will local news be next?

Meet your new reporting staff. Photo (cc) 2023 by Dan Kennedy.

A recent report by NewsGuard, a project that evaluates news organizations for reliability and transparency, found that clickbait generated by artificial intelligence is on the rise. McKenzie Sadeghi and Lorenzo Arvanitis write:

NewsGuard has identified 49 news and information sites that appear to be almost entirely written by artificial intelligence software. A new generation of content farms is on the way.

The report didn’t specifically identify any local news websites that are using AI to write low-quality stories aimed at getting clicks and programmatic advertising. Perhaps non-local stories about health, entertainment and tech, to name three of the topics for which content farms are using AI, more readily fly under the radar. If you’re going to use AI to produce articles about the local tax rate or the women’s track team, you’re going to get caught pretty quickly when the results prove to be wrong. Still, the use of AI to produce some forms of local news, such as routine articles about real-estate transactions, is not new.

According to the NewsGuard report, there doesn’t seem to be a concerted effort yet to use AI in order to produce deliberately false stories, although there have been a few examples, including a celebrity death site that claimed President Biden had “passed away peacefully in his sleep.”

Call this Pink Slime 3.0. Version 1.0 was low-tech compared to what’s available today. Back in 2012, the public radio program “This American Life” found that a company called Journatic (pronounced “joor-NAT-ik,” though I always thought it should be “JOOR-nuh-tik”) was producing local content for newspapers using grossly underpaid, out-of-town reporters — including low-paid workers in the Philippines who wrote articles under fake bylines.

Pink Slime 2.0, of more recent vintage, consists of hundreds of websites launched to exploit the decline of local news. Under such banners as “North Boston News” (!), these sites purport to offer community journalism but are actually a cover for political propaganda. Nearly all of them serve right-wing interests, though there are a few on the left as well.

Pink Slime 3.0 threatens to become more insidious as AI continues to improve. As Seth Smalley wrote for Poynter Online, this is “pink slime on steroids.”

Of course, AI could prove to be a boon for local news, as Sebastian Grace wrote last week for What Works, our Northeastern journalism project tracking developments in community journalism. By eliminating repetitive drudge work, AI can free journalists to produce high-value stories that really matter.

Still, bottom-feeders like CNET — not exactly a content farm, but not much better than that, either — have already been caught publishing error-laden stories with AI. You can only imagine what sort of advice these content farms are going to give people about dealing with their medical problems.

OpenAI, which likes to portray itself as a responsible player in discussions about the future of AI, would not respond to NewsGuard’s inquiries. Neither would Facebook, which is amplifying AI-generated content.

The only thing we can be sure of is that a new, more insidious version of pink slime is coming to a website near you — if it hasn’t already.

Artificial intelligence writes an op-ed piece on gun control

Photo (cc) 2011 by Louis K.

I’ve been having some fun with AI-generated content recently, posting a description of the MBTA in the voice of Herman Melville and a profile of me that is thoroughly bollixed up with a marketing guru who’s also named Dan Kennedy. Today, though, I offer something a little more serious and practical. I directed ChatGPT to write a 600-word opinion article in favor of gun control. The result is below. It needs a little editing, especially in the lead. I didn’t check the numbers, and ChatGPT is known for producing false facts. The pro-gun control argument is not especially sophisticated. But you can imagine a state legislator doing something like this to write an op-ed for a local news outlet. You may think that’s fine. I think it’s chilling.

Gun control has been a controversial issue in the United States for many years, with strong opinions on both sides of the debate. As an AI language model, I do not have a political affiliation or personal beliefs. However, I can analyze data and research and provide an evidence-based opinion on the topic. Therefore, in this article, I will make a case in favor of gun control.

Firstly, let’s look at the numbers. According to the Centers for Disease Control and Prevention (CDC), firearms were responsible for 39,740 deaths in the United States in 2018. That is an average of 109 deaths per day, or roughly four deaths every hour. It is undeniable that gun violence is a significant public health issue in this country.

Many people argue that the Second Amendment guarantees their right to bear arms. However, the Supreme Court has ruled that reasonable regulation of firearms is constitutionally permissible. Therefore, it is within the government’s power to implement laws and regulations to protect the public from gun violence.

One such regulation could be universal background checks. Current federal law only requires background checks for gun purchases from licensed dealers. However, a loophole exists that allows private sellers, who are not required to be licensed, to sell firearms without conducting a background check. This loophole makes it easy for criminals and individuals with a history of violence to obtain guns.

Universal background checks would close this loophole and ensure that all gun purchases, regardless of the seller, are subject to a background check. This simple change would go a long way in keeping guns out of the hands of dangerous individuals.

Another common-sense regulation would be a ban on assault weapons. These weapons are designed for military use and have no practical purpose in civilian life. They have been used in several high-profile mass shootings, including the tragedies in Las Vegas, Parkland, and Sandy Hook.

Opponents of a ban on assault weapons argue that it would violate their Second Amendment rights. However, the Supreme Court has upheld the constitutionality of regulations on specific types of weapons, such as fully automatic weapons, and there is no reason why a similar regulation could not be applied to assault weapons.

Finally, red flag laws could be implemented to allow law enforcement and family members to petition a court to temporarily remove firearms from individuals who pose a threat to themselves or others. These laws have already been implemented in several states and have been shown to be effective in preventing gun violence.

In conclusion, gun violence is a serious public health issue in the United States, and reasonable regulations on firearms can help reduce the number of deaths and injuries. Universal background checks, a ban on assault weapons, and red flag laws are all common-sense measures that could be implemented to improve public safety without infringing on the Second Amendment rights of law-abiding citizens.

It is essential to remember that gun control is not about taking away people’s guns or infringing on their rights. It is about finding ways to prevent gun violence while still allowing responsible gun ownership. We can all agree that we want to live in a safe society, and gun control measures can help make that a reality.

Ultimately, the key to reducing gun violence is a collaborative effort between government, law enforcement, and the public. By working together, we can find solutions that respect our rights while keeping us safe.

The Bard of Google writes a bio of me. Hilarity ensues.

My friend Bill Mitchell sent along a bio of me that he asked Google’s Bard to write — proving, once again, that artificial intelligence may be artificial, but it sure isn’t intelligent. Throughout this short bio, Bard mixes me up with marketing guy Dan Kennedy. Here’s the whole thing, with the mistakes highlighted in blue.

Dan Kennedy is a media critic, marketing consultant, and author. He is a former media critic for the Boston Phoenix and is now a professor of journalism at Northeastern University. He is also the founder of Magnetic Marketing and the author of several books on marketing and business, including “The No B.S. Marketing Letter” and “The Magnetic Marketing System.”

Kennedy is a frequent media commentator and has appeared on CNN, CNBC, and Fox News. He is also a regular contributor to Forbes and The Huffington Post. [Note: Years ago I appeared a couple of times on CNN and once on Fox News. I don’t believe I ever wrote anything for Forbes, but I could be mistaken.]

Kennedy is a controversial figure in the media industry. He has been criticized for his outspoken views on the state of journalism and his willingness to criticize his colleagues. However, he is also respected for his knowledge of the industry and his ability to help businesses grow.

Kennedy is a graduate of the University of Massachusetts Amherst and the Boston University School of Public Communication. [A bachelor’s degree in journalism from Northeastern University and a master’s in American history from BU, but not from SPC.] He lives in Boston, Massachusetts. [Not since 1980.]

Here are some of his notable works:

* The No B.S. Marketing Letter (2009)
* The Magnetic Marketing System (2012)
* The Renegade Challenge (2016)
* The No B.S. Inner Circle (2017) [LOL.]

Way back when, Boston Globe columnist Alex Beam wrote about me, the other DK and yet another DK. At least he was trying to be funny. And I should add that I’ll be smiling all week about being called “a controversial figure in the media industry.”

Herman on the MBTA

Public domain illustration

I asked ChatGPT to write a 300-word column about the MBTA in the voice of Herman Melville, and it gave me the single best description of the troubled public transportation system ever:

It is a system that demands both patience and cunning, for one must be quick of foot and mind to catch the right train, or else suffer the indignity of waiting upon a frigid platform for what seems an eternity.

Trolley ahoy! Hast seen the Green Line?

Update: I’ve removed the link to the full column, which I’m told doesn’t work for anyone but me. So here it is below:

Ah, the Massachusetts Bay Transportation Authority, what a grand and curious beast it is! A marvel of modernity, with its electric rails and clanging bells, carrying thousands of souls to and fro throughout the bustling metropolis of Boston. But also a beast with many heads, each bearing its own peculiarities and frustrations.

As I, Herman Melville, have had the occasion to travel upon this sprawling system of trains, buses, and subways, I cannot help but be struck by its contradictions. On the one hand, it is a great equalizer, bringing people of all classes and stations together in the common pursuit of transit. On the other, it is a labyrinthine maze, with its schedules and routes so convoluted that one might need a sextant and a compass to navigate them.

It is a system that demands both patience and cunning, for one must be quick of foot and mind to catch the right train, or else suffer the indignity of waiting upon a frigid platform for what seems an eternity. And yet, for all its idiosyncrasies and flaws, there is a certain romance to the MBTA, a sense of adventure that comes with exploring its many routes and stops.

I have traveled upon the Red Line, the Green Line, the Orange Line, and the Blue, each with its own distinct character and challenges. The Red Line, with its sleek trains and sweeping views of the city, is a joy to ride, though it can be overcrowded and prone to delays. The Green Line, with its jangling trolleys and labyrinthine routes, is a source of endless fascination, though it can be slow and unpredictable. The Orange Line, with its long stretches of underground tunnels and bustling stations, is a marvel of engineering, though it can be hot and stuffy in the summer months. And the Blue Line, with its sweeping views of the harbor and its ferries to the islands, is a delight to ride, though it can be crowded and prone to delays.

And yet, for all its charms and frustrations, the MBTA is an essential part of life in Boston, connecting people and places in a way that no other mode of transportation can. It is a beast that we must learn to love, even as we curse its many heads.

Fewer followers, more engagement: What I found in a Twitter-versus-Mastodon test

Can Mastodon be a workable substitute for Twitter? It may never be as big. But given that a lot of us are trying to figure out how to manage our social media presence now that Elon Musk is banning journalists, shutting off access to the API and just generally acting like a petulant child, I was interested to see what happened the other day when I had a chance to test comparative engagement.

Since Dec. 11, I hadn’t posted anything to Twitter other than occasional tweets letting people know I had moved and where they could find me. On Friday, though, I decided to make an exception to let my followers know that I’d written an op-ed for The Boston Globe about how local news organizations can stand up to corporate chains. As of Sunday evening, I’ve gotten six likes and no retweets. Twitter claims that my tweet has been viewed 573 times, but who knows?

Twitter post

I posted the same thing on Mastodon, also on Friday. Right now I’ve gotten 24 likes and 37 boosts (retweets in Mastodon-talk). And when I posted a follow-up noting that I’d gotten more engagement on Mastodon than on Twitter, that got another 31 likes and nine boosts. Unlike Twitter, Mastodon servers don’t provide any metrics on how many views you’ve received, which, folks tell me, would be pretty much impossible to track given the network’s decentralized nature.

Mastodon post

Now for some points of comparison: I have 18,900 followers on Twitter and 2,500 on Mastodon. Then again, on Twitter I have no idea how many are bots, users who haven’t logged on for years or people who’ve died. Plus my account is locked, and at this point I’m sure my followers are accustomed to my absence.

I joined Mastodon in November, so all of my followers are of recent vintage. Plus I’ve been quite active over there, using it pretty much the way I used to use Twitter. So in that respect the heightened engagement isn’t too surprising. Even so, the experience has given me one less reason to look back at what was.
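To put those raw numbers on a per-follower basis, here’s the back-of-the-envelope math. It’s crude, since Twitter’s view counts have no Mastodon equivalent, but it makes the gap concrete.

```python
# Engagement per follower, from the counts above (original post only).
twitter = {"followers": 18_900, "likes": 6, "boosts": 0}
mastodon = {"followers": 2_500, "likes": 24, "boosts": 37}

def rate(account: dict) -> float:
    """Likes plus boosts/retweets as a fraction of followers."""
    return (account["likes"] + account["boosts"]) / account["followers"]

print(f"Twitter:  {rate(twitter):.2%}")   # 0.03%
print(f"Mastodon: {rate(mastodon):.2%}")  # 2.44%
print(f"Ratio:    {rate(mastodon) / rate(twitter):.0f}x")  # ~77x
```

Even with all the caveats about bots, dead accounts and my own absence from Twitter, a rough 77-to-1 per-follower gap is hard to wave away.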

Why it matters that The New York Times got it wrong on Section 230

The U.S. Supreme Court will rule on two cases involving Section 230. Photo (cc) 2006 by OZinOH.

Way back in 1996, when Section 230 was enacted into law, it was designed to protect all web publishers, most definitely including newspapers, from being sued over third-party content posted in their comment sections. It would be another eight years before Facebook was launched, and longer than that before algorithms would be used to boost certain types of content.

But that didn’t stop David McCabe of The New York Times — who, we are told, “has reported for five years on the policy debate over online speech” — from including this howler in a story about two cases regarding Section 230 that are being heard by the U.S. Supreme Court:

While newspapers and magazines can be sued over what they publish, Section 230 shields online platforms from lawsuits over most content posted by their users.

No. I have to assume that McCabe and maybe even his editors know better, and that this was their inept way of summarizing the issue for a general readership. But it perpetuates the harmful and wrong notion that this is only about Facebook, Twitter and other social media platforms. It’s not. Newspapers and magazines are liable for everything they publish except third-party online comments, which means that they are treated exactly the same as the giant platforms.

Though it is true that an early case testing Section 230 involved comments posted at AOL rather than on a news website, the principle that online publishers can’t be held liable for what third parties post on their platforms is as valuable to, oh, let’s say The New York Times as it is to Facebook.

That’s not to say 230 can’t be reformed and restricted; as I wrote recently, it probably should be. But it’s important that the public understand exactly what’s at stake.

Some common-sense ideas for reforming Section 230

Photo (cc) 2005 by mac jordan

The Elon Musk-ization of Twitter and the rise of a Republican House controlled by its most extreme right-wing elements probably doom any chance of intelligent reform to Section 230. That’s the 1996 law that holds any online publisher harmless for third-party content posted on its site, whether it be a libelous comment on a newspaper’s website (one of the original concerns) or dangerous disinformation about vaccines on Facebook.

It is worth repeating for those who don’t understand the issues: a publisher is legally responsible for every piece of content — articles, advertisements, photos, cartoons, letters to the editor and the like — with the sole exception of third-party material posted online. The idea behind 230 was that it would be impossible to vet everything and that the growth of online media depended on an updated legal structure.

Over the years, as various bad actors have come along and abused Section 230, a number of ideas have emerged for curtailing it without doing away with it entirely. Some time back, I proposed that social media platforms that use algorithms to boost certain types of content should not enjoy any 230 protections — an admittedly blunt instrument that would pretty much destroy the platforms’ business model. My logic was that increased engagement is associated with content that makes you angry and upset, and that the platforms profit mightily by keeping your eyes glued to their site.

Now a couple of academics, Robert Kozinets and Jon Pfeiffer, have come along with a more subtle approach to Section 230 reform. Their proposal was first published in The Conversation, though I saw it at Nieman Lab. They offer what I think is a pretty brilliant analogy as to why certain types of third-party content don’t deserve protection:

One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls — which contains not just porn but also misinformation and hate speech — the absolutist stance that they have total protection and total legal “immunity” is untenable.

Kozinets and Pfeiffer offer three ideas that are worth reading in full. In summary, though, here is what they are proposing.

  • A “verification trigger,” which takes effect when a platform profits from bad speech — the idea I tried to get at with my proposal for removing protections for algorithmic boosting. Returning to the restaurant analogy, Kozinets and Pfeiffer write, “When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.” They cite an extreme example: Elon Musk’s decision to sell blue-check verification, thus directly monetizing whatever falsehoods those with blue checks may choose to perpetrate.
  • “Transparent liability caps” that would “specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it.” Platforms that violate those standards would lose 230 protections. We can only imagine what this would look like once Marjorie Taylor Greene and Matt Gaetz get hold of it, but, well, it’s a thought.
  • A system of “neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform.” Kozinets and Pfeiffer call this “Twitter court,” and platforms that don’t play along could be sued for libel or invasion of privacy by aggrieved parties.

I wouldn’t expect any of these ideas to become law in the near or intermediate future. Currently, the law appears to be entirely up for grabs. For instance, last year a federal appeals court upheld a Texas law that forbids platforms from removing any third-party speech that’s based on viewpoint. At the same time, the U.S. Supreme Court is hearing a case that could result in 230 being overturned in its entirety. Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

Still, Kozinets and Pfeiffer have given us some useful ideas about how we might reform Section 230 in order to protect online publishers without giving them carte blanche to profit from their own bad behavior.

