Coming to terms with the false promise of Twitter

Photo (cc) 2014 by =Nahemoth=

Roxane Gay brilliantly captures my own love/hate relationship with Twitter. In a New York Times essay published on Sunday, she writes:

After a while, the lines blur, and it’s not at all clear what friend or foe look like, or how we as humans should interact in this place. After being on the receiving end of enough aggression, everything starts to feel like an attack. Your skin thins until you have no defenses left. It becomes harder and harder to distinguish good-faith criticism from pettiness or cruelty. It becomes harder to disinvest from pointless arguments that have nothing at all to do with you. An experience that was once charming and fun becomes stressful and largely unpleasant. I don’t think I’m alone in feeling this way. We have all become hammers in search of nails.

This is perfect. It’s not that people are terrible on Twitter, although they are. It’s that it’s nearly impossible to avoid becoming the worst versions of ourselves.

Twitter may not be as harmful to the culture as Facebook, but for some reason I’ve found interactions on Facebook — as well as my own behavior — to be more congenial than on Twitter. Of course, on Facebook you have more control over whom you choose to interact with, and there’s a lot more sharing of family photos and other cheerful content. Twitter, by contrast, can feel like a never-ending exercise in hyper-aggression and performative defensiveness.

From time to time I’ve tried to cut back and use Twitter only for professional reasons — promoting my work and that of others, tweeting less and reading more of what others have to say. It works to an extent, but I always slide back. Twitter seems to reward snark, but what, really, is the reward? More likes and retweets? Who cares?

I can’t leave — Twitter is too important to my work. But Gay’s fine piece is a reminder that social media have fallen far short of what we were hoping for 12 to 15 years ago, and that we ourselves are largely to blame.

A small example of how racially biased algorithms distort social media

You may have heard that the algorithms used by Facebook and other social media platforms are racially biased. I ran into a small but interesting example of that earlier today.

My previous post is about a webinar on news co-ops that I attended last week. I used a photo of Kevon Paynter, co-founder of Bloc by Block News, as the lead art and a photo of Jasper Wang, co-founder of Defector, well down in the piece.

But when I posted links on Facebook, Twitter and LinkedIn, all three of them automatically grabbed the photo of Wang as the image that would go with the link. For example, here’s how it appeared on Twitter.

I don’t know what happened. Paynter was more central to what I was writing, which is why I led with his photo. Paynter is Black; Wang is of Asian descent. There’s more contrast in the image of Wang, which may be why the algorithms identified it as a superior picture. But in so doing they ignored my choice of Paynter as the lead.
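As I understand it, Facebook, Twitter and LinkedIn generally look first at a page’s Open Graph tags when they build a link preview, and fall back to their own image-selection heuristics only when no og:image is declared. Here’s a minimal sketch of how one might check what a post declares; the URL is a placeholder, and it assumes the requests and beautifulsoup4 packages are installed.

```python
# Minimal sketch: inspect which preview image a page declares via its
# Open Graph / Twitter Card meta tags. If og:image is set, the platforms
# will generally use it; if not, each falls back to its own heuristics.
# Assumes requests and beautifulsoup4 are installed; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/my-post"  # placeholder, not the actual post
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for prop in ("og:image", "twitter:image"):
    tag = soup.find("meta", attrs={"property": prop}) or soup.find(
        "meta", attrs={"name": prop}
    )
    print(prop, "->", tag["content"] if tag and tag.has_attr("content") else "not set")
```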

File this under “Things that make you go hmmmm.”

Why I’m asking you to become a member of Media Nation

At the beginning of 2021, I decided to shift my online activities — I was going to blog more and use Facebook and Twitter less. At the same time, I decided to start offering memberships to Media Nation for $5 a month, following the lead of Boston College historian Heather Cox Richardson, pundits such as Andrew Sullivan, reporters such as Patrice Peck and others.

Most of these other folks are using Substack, a newsletter platform. I figured I had sunk way too many years — 16 — into writing Media Nation as a blog, and I didn’t want to switch to a platform that’s reliant on venture capital and could eventually go the way of most such companies. So here I am, still blogging at WordPress.com, and asking readers to consider becoming members by supporting me on Patreon.

And yes, I have been blogging more as I try to stay on top of various media stories, especially involving local journalism, as well as politics, culture and the news of the day. Just this week I’ve written about Larry Flynt and the First Amendment, Duke Ellington’s legacy, a new partnership between The Boston Globe and the Portland Press Herald, and a Louisiana reporter who’s been sued for — believe it or not — filing a public-records request.

If you value this work, I hope you’ll consider supporting it for $5 a month. Members receive a newsletter every Friday morning with exclusive content.

And if you’ve already become a member, thank you.

Twitter reportedly bans Mass. political gadfly Shiva Ayyadurai

Shiva Ayyadurai, in white hat. Photo (cc) 2019 by Marc Nozell.

Massachusetts Republican gadfly Shiva Ayyadurai has been banned from Twitter, most likely for claiming that he’d lost his most recent race for the U.S. Senate only because Secretary of State Bill Galvin’s office destroyed a million electronic ballots. Adam Gaffin of Universal Hub has the details.

In 2018, I gave the City of Cambridge a GBH News New England Muzzle Award for ordering Ayyadurai to dismantle a wildly offensive sign on his company’s Cambridge property that criticized Democratic Sen. Elizabeth Warren. City officials told him that the sign, which read “Only a REAL INDIAN Can Defeat the Fake Indian,” violated the city’s building code.

Ayyadurai threatened to sue, which led the city to back off.

From the Department of Unintended Consequences

The Washington Post reports:

Right-wing groups on chat apps like Telegram are swelling with new members after Parler disappeared and a backlash against Facebook and Twitter, making it harder for law enforcement to track where the next attack could come from….

Trump supporters looking for communities of like-minded people will likely find Telegram to be more extreme than the Facebook groups and Twitter feeds they are used to, said Amarasingam. [Amarnath Amarasingam is described as a researcher who specializes in terrorism and extremism.]

“It’s not simply pro-Trump content, mildly complaining about election fraud. Instead, it’s openly anti-Semitic, violent, bomb making materials and so on. People coming to Telegram may be in for a surprise in that sense,” Amarasingam said.

Entirely predictable, needless to say.

Amazon’s move against Parler is worrisome in a way that Apple’s and Google’s are not

It’s one thing for Apple and Google to throw the right-wing Twitter competitor Parler out of their app stores. It’s another thing altogether for Amazon Web Services to deplatform Parler. Yet that’s what will happen by midnight today, according to BuzzFeed.

Parler deserves no sympathy, obviously. The service proudly takes even less responsibility for the garbage its members post than Twitter and Facebook do, and it was one of the places where planning for the insurrectionist riots took place. But Amazon’s actions raise some important free-speech concerns.

Think of the internet as a pyramid. Twitter and Facebook, as well as Google’s and Apple’s app stores, are at the top of that pyramid — they are commercial enterprises that may govern themselves as they choose. Donald Trump is far from the first person to be thrown off social networks, and Parler isn’t even remotely the first app to be punished.

But Amazon Web Services, or AWS, exists somewhere below the top of the pyramid. It is foundational; its servers are the floor upon which other things are built. AWS isn’t the bottom layer of the pyramid — it is, in its own way, a commercial enterprise. But it has a responsibility to respect the free-speech rights of its clients that Twitter and Facebook do not.

Yet AWS has an acceptable-use policy that reads in part:

You may not use, or encourage, promote, facilitate or instruct others to use, the Services or AWS Site for any illegal, harmful, fraudulent, infringing or offensive use, or to transmit, store, display, distribute or otherwise make available content that is illegal, harmful, fraudulent, infringing or offensive.

For AWS to cut off Parler would be like the phone company blocking all calls from a person or organization it deems dangerous. Yet there’s little doubt that Parler violated AWS’s acceptable-use policy. Look for Parler to re-establish itself on an overseas server. Is that what we want?

Meanwhile, Paul Moriarty, a member of the New Jersey State Assembly, wants Comcast to stop carrying Fox News and Newsmax, according to CNN’s “Reliable Sources” newsletter. And CNN’s Oliver Darcy is cheering him on, writing:

Moriarty has a point. We regularly discuss what the Big Tech companies have done to poison the public conversation by providing large platforms to bad-faith actors who lie, mislead, and promote conspiracy theories. But what about TV companies that provide platforms to networks such as Newsmax, One America News — and, yes, Fox News? [Darcy’s boldface]

Again, Comcast and other cable providers are not obligated to carry any particular service. Just recently we received emails from Verizon warning that it might drop WCVB-TV (Channel 5) over a fee dispute. Several years ago, Al Jazeera America was forced to throw in the towel following its unsuccessful efforts to get widespread distribution on cable.

But the power of giant telecom companies to decide what channels will be carried and what will not is immense, and something we ought to be concerned about.

I have no solutions. But I think it’s worth pointing out that AWS’s action against Parler is considerably more ominous than Google’s and Apple’s, and that for elected officials to call on Comcast to drop certain channels is more ominous still.

We have some thinking to do as a society.


Please consider becoming a paid member of Media Nation for just $5 a month. You’ll receive a weekly newsletter with exclusive content. Click here for details.

Twitter solves a business problem. But in the long run, it won’t matter all that much.

A few quick thoughts on Twitter’s decision to cancel Donald Trump’s account.

I was never among those who called for Trump to be thrown off the platform. I have mixed feelings about it even now. But this is not an abridgement of the First Amendment, and I suspect it will prove to be not that big a deal as social media fracture into various ideological camps.

First, the free-speech argument: Twitter is a private company that has always acted to remove content its executives believe is bad for business. Twitter not only isn’t the government; it’s also not a public utility like the phone company, or for that matter like the broader internet, both of which are built upon principles of free speech no matter how loathsome. As Boston Globe columnist Kimberly Atkins, a lawyer, put it:

The not-a-big-deal argument is a little harder to make. Trump, after all, had more than 88 million Twitter followers, and it was the main way he communicated with his supporters and the broader public. But it’s a big world. He can switch to Parler, a Twitter-like application friendly to right-wingers. Yes, it’s tiny now, but how long would it stay tiny with Trump as its star?

Consider, too, the news that Apple and Google are taking steps to throw Parler off their app stores. So what? Parler could just tell its users to access the platform via the mobile web instead of through apps. This isn’t as exotic as it might sound. Twitter and Facebook members don’t have to use the apps, for instance. They can simply use their phones’ web browsers, and in some ways the experience is better.

Boston Globe columnist Hiawatha Bray writes that “even after this week’s crackdown on his inflammatory and misleading Internet postings, Trump is likely to remain an online force.” Indeed.

The reason that Twitter chief executive Jack Dorsey waited so long to act — until Trump incited a coup attempt against his own government in the waning days of his presidency — is that Dorsey understands banning Trump will ultimately prove futile, and that it will endanger Twitter’s dominant role in social media by speeding up the emergence of ideologically sorted alternatives.

Dorsey solved his immediate problem. It’s likely that the worst is yet to come, but at least he’ll be able to tell his shareholders that he did the best that he could.

Please consider becoming a paid member of Media Nation for just $5 a month. You’ll receive a weekly newsletter with exclusive content. Click here for details.

We shouldn’t let Trump’s Twitter tantrum stop us from taking a new look at online speech protections

Photo (cc) 2019 by Trending Topics 2019

Previously published at WGBHNews.org.

It’s probably not a good idea for us to talk about messing around with free speech on the internet at a moment when the reckless authoritarian in the White House is threatening to dismantle safeguards that have been in place for nearly a quarter of a century.

On the other hand, maybe there’s no time like right now. President Donald Trump is not wrong in claiming there are problems with Section 230 of the Telecommunications Act of 1996. Of course, he’s wrong about the particulars — that is, he’s wrong about its purpose, and he’s wrong about what would happen if it were repealed. But that shouldn’t stop us from thinking about the harmful effects of 230 and what we might do to lessen them.

Simply put, Section 230 says that online publishers can’t be held legally responsible for most third-party content. In just the past week, Trump took to Twitter to claim falsely that MSNBC host Joe Scarborough had murdered a woman who worked in his office and to suggest that violent protesters should be shot in the street. At least in theory, Trump, but not Twitter, could be held liable for both of those tweets — the first for libeling Scarborough, the second for inciting violence.

Ironically, without 230, Twitter no doubt would have taken Trump’s tweets down immediately rather than merely slapping warning labels on them, the action that provoked his childish rage. It’s only because of 230 that Trump is able to lie freely to his 24 million (not 80 million, as is often reported) followers without Twitter executives having to worry about getting sued.

As someone who’s been around since the earliest days of online culture, I have some insight into why we needed Section 230, and what’s gone wrong in the intervening years.

Back in the 1990s, the challenge that 230 was meant to address had as much to do with news websites as it did with early online services such as Prodigy and AOL. Print publications such as newspapers are legally responsible for everything they publish, including letters to the editor and advertisements. After all, the landmark 1964 libel case of New York Times v. Sullivan involved an ad, not the paper’s journalism.

But, in the digital world, holding publications strictly liable for their content proved to be impractical. Even in the era of dial-up modems, online comments poured in too rapidly to be monitored. Publishers worried that if they deleted some of the worst comments on their sites, that would mean they would be seen as exercising editorial control and were thus legally responsible for all comments.

The far-from-perfect solution: take a hands-off approach and not delete anything, not even the worst of the worst. At least to some extent, Section 230 solved that dilemma. Not only did it immunize publishers for third-party content, but it also contained what is called a “Good Samaritan” provision — publishers were now free to remove some bad content without making themselves liable for other, equally bad content that they might have missed.

Section 230 created an uneasy balance. Users could comment freely, which seemed to many of us in those more optimistic times like a step forward in allowing news consumers to be part of the conversation. (That’s where Jay Rosen’s phrase “the people formerly known as the audience” comes from.) But early hopes faded to pessimism and cynicism once we saw how terrible most of those comments were. So we ignored them.

That balance was disrupted by the rise of the platforms, especially Facebook and Twitter. And that’s because they had an incentive to keep users glued to their sites for as long as possible. By using computer algorithms to feed users more of what keeps them engaged, the platforms are able to show more advertising to them. And the way you keep them engaged is by showing them content that makes them angry and agitated, regardless of its truthfulness. The technologist Jaron Lanier, in his 2018 book “Ten Arguments for Deleting Your Social Media Accounts Right Now,” calls this “continuous behavior modification on a titanic scale.”

Which brings us to the tricky question of whether government should do something to remove these perverse incentives.

Earlier this year, Heidi Legg, then at Harvard’s Shorenstein Center on Media, Politics and Public Policy, published an op-ed in The Boston Globe arguing that Section 230 should be modified so that the platforms are held to the same legal standards as other publishers. “We should not allow the continued free-wheeling and profiteering of this attention economy to erode democracy through hyper-polarization,” she wrote.

Legg told me she hoped her piece would spark a conversation about what Section 230 reform might look like. “I do not have a solution,” she said in a text exchange on (what else?) Twitter, “but I have ideas and I am urging the nation and Congress to get ahead of this.”

Well, I’ve been thinking about it, too. And one possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.
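To make the distinction concrete, here’s a toy sketch (not any platform’s actual code; the fields and scores are invented for illustration) contrasting a chronological feed with an engagement-ranked one.

```python
# Toy illustration only -- not any platform's actual ranking code.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float             # seconds since epoch
    predicted_engagement: float  # hypothetical model score, 0..1

def chronological_feed(posts):
    """How third-party content flowed when Section 230 was written: newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts):
    """What an algorithmic feed optimizes for instead: predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("a", "calm local news item", 1000.0, 0.10),
    Post("b", "inflammatory rumor", 900.0, 0.95),
]
print([p.text for p in chronological_feed(posts)])      # newest first
print([p.text for p in engagement_ranked_feed(posts)])  # most provocative first
```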

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers. Dorsey would quickly find that his tentative half-steps are insufficient — and Zuckerberg would have to abandon his smug refusal to do anything about Trump’s vile comments.

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Let me concede that I don’t know how practical my idea would be. Like Legg, I offer it out of a sense that we need to have a conversation about the harm that social media are doing to our democracy. I’m a staunch believer in the First Amendment, so I think it’s vital to address that harm in a way that doesn’t violate anyone’s free-speech rights. Ending special regulatory favors for certain types of toxic corporate behavior seems like one way of doing that with a relatively light touch.

And if that meant Trump could no longer use Twitter as a megaphone for hate speech, wild conspiracy theories and outright disinformation, well, so much the better.

Talk about this post on Facebook.

Does Twitter need Trump? Not as much as you might think.

[Chart: Number of monthly active Twitter users worldwide, Q1 2010 to Q1 2019, in millions. Source: Statista]

You might think that Twitter would have a financial incentive to cave in to President Trump’s incoherent, unconstitutional threats over the platform’s decision to label some of his false tweets as, you know, false. In fact, Trump’s presence on Twitter is not as big a deal to the company as it might appear.

First, we often hear that Trump has 80 million followers. But is that really the case? According to analytics from the Fake Followers Audit, 70.2% of his followers are fake, defined as “accounts that are unreachable and will not see the account’s tweets (either because they’re spam, bots, propaganda, etc. or because they’re no longer active on Twitter).”

That’s not especially unusual among high-profile tweeters. For instance, 43% of former President Barack Obama’s 118 million followers are fake. But it’s important to understand that Trump has about 24 million real followers, not 80 million. That’s a big difference.
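For what it’s worth, the arithmetic behind those estimates is simple; here it is as a quick sketch using the percentages cited above.

```python
# Rough arithmetic using the figures cited above.
trump_total = 80_000_000
trump_fake_share = 0.702
print(trump_total * (1 - trump_fake_share))   # about 23.8 million reachable followers

obama_total = 118_000_000
obama_fake_share = 0.43
print(obama_total * (1 - obama_fake_share))   # about 67.3 million reachable followers
```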

Even more important, Trump’s presence on Twitter has not had a huge effect on its total audience. According to Statista, the number of monthly active users worldwide hovered between a low of 302 million and a high of 336 million between the first quarter of 2015 and the first quarter of 2019. (Zephoria reports that Twitter hasn’t released similar numbers since then.)

The bottom line is that Twitter chief executive Jack Dorsey could probably afford to throw Trump off the platform for repeatedly violating its terms of service. Still, he probably wouldn’t want to risk the outrage that would ensue from MAGA Country if Trump lost his favorite outlet for smearing the memory of a dead woman with his horrendous lies about MSNBC host Joe Scarborough.

Talk about this post on Facebook.

Political ads on Facebook can be fixed. Is Mark Zuckerberg willing to try?

Photo via Wikimedia Commons

Previously published at WGBHNews.org.

If nothing else, Twitter CEO Jack Dorsey proved himself to be a master of timing when he announced last week that his social network will ban all political ads.

Anger was still raging over Mark Zuckerberg’s recent statement that Facebook would not attempt to fact-check political advertising, thus opening the door to a flood of falsehoods. Taking direct aim at Zuckerberg, Dorsey tweeted: “It’s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut if someone pays us to target and force people to see their political ad…well…they can say whatever they want!’”

Not surprisingly, Twitter’s ad ban won widespread praise.

“This is a good call,” tweeted U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y., who had only recently tormented Zuckerberg at a congressional hearing. “Technology — and social media especially — has a powerful responsibility in preserving the integrity of our elections. Not allowing for paid disinformation is one of the most basic, ethical decisions a company can make.”

Added Hillary Clinton: “This is the right thing to do for democracy in America and all over the world. What say you, @Facebook?”

Oh, but if only it were that simple. Advertising on social media is a cheap and effective way for underfunded candidates seeking less prominent offices to reach prospective voters. No, it’s not good for democracy if we are overwhelmed with lies. But, with some controls in place, Facebook and Twitter can be crucial for political candidates who can’t afford television ads. To get rid of all political advertising would be to favor incumbents over outsiders and longshots.

“Twitter’s ban on political ads disadvantages challengers and political newcomers,” wrote University of Utah communications researcher Shannon C. McGregor in The Guardian. “Digital ads are much cheaper than television ads, drawing in a wider scope of candidates, especially for down-ballot races.”

And let’s be clear: Facebook, not Twitter, is what really matters. Journalists pay a lot of attention to Twitter because other journalists use it — as do politicians, bots and sociopaths. Facebook, with more than 2 billion active users around the world, is vastly larger and much richer. For instance, the 2020 presidential candidates so far have spent an estimated $46 million on political ads on Facebook, compared to less than $3 million spent by all candidates on Twitter ads during the 2018 midterms.

But is political advertising on Facebook worth saving given the falsehoods, the attempts to deceive, that go way beyond anything you’re likely to see on TV?

In fact, there are some common-sense steps that might help fix Facebook ads.

Writing in The Boston Globe, technology journalist Josh Bernoff suggested that Facebook ban all targeting for political ads except for geography. In other words, candidates for statewide office ought to be able to target their ads so they’re not paying to reach Facebook users in other states. But they shouldn’t be able to target certain slices of the electorate, like liberals or conservatives, homeowners or renters, white people or African Americans (or “Jew haters,” as ProPublica discovered was possible in a nauseating exposé a couple of years ago).

Bernoff also suggested that politicians be required to provide documentation to back up the facts in their ads. It’s a good idea, though it may prove impractical.

“Facebook is incapable of vetting political ads effectively and consistently at the global scale. And political ads are essential to maintaining the company’s presence in countries around the world,” wrote Siva Vaidhyanathan, author of “Antisocial Media: How Facebook Disconnects Us and Undermines Democracy,” in The New York Times.

But we may not have to go that far. The reason ads spreading disinformation are so effective on Facebook is that they fly under the radar, seen by tiny slices of the electorate and thus evading broader scrutiny. In an op-ed piece in The Washington Post, Ellen L. Weintraub, chair of the Federal Election Commission, argued that the elimination of microtargeting could result in more truthful, less toxic advertising.

“Ads that are more widely available will contribute to the robust and wide-open debate that is central to our First Amendment values,” Weintraub wrote. “Political advertisers will have greater incentives to be truthful in ads when they can more easily and publicly be called to account for them.”

Calling for political ads to be banned on Facebook is futile. We live our lives on the internet these days, and Facebook has become (God help us) our most important distributor of news and information.

As Supreme Court Justice Louis Brandeis once wrote, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”

Nonprofit news update

Earlier this week The Salt Lake Tribune reported that the IRS had approved its application to become a nonprofit organization, making it the first daily newspaper to take that step. Unlike The Philadelphia Inquirer and the Tampa Bay Times, for-profit newspapers owned by nonprofit foundations, the Tribune will be fully nonprofit, making it eligible for tax-deductible donations.

Nonprofit news isn’t exactly a novelty. Public media outlets like PBS, NPR and, yes, WGBH are nonprofits. So are a number of pioneering community websites such as the New Haven Independent and Voice of San Diego. And if the Tribune succeeds, it could pave the way for other legacy newspapers.

Last May I wrote about what nonprofit status in Salt Lake could mean for the struggling newspaper business. This week’s announcement is a huge step forward.

Talk about this post on Facebook.