By Dan Kennedy • The press, politics, technology, culture and other passions

Tag: Twitter

Twitter was right to ban MTG — but let’s not kid ourselves that it’s going to matter

Marjorie Taylor Greene. Photo (cc) 2021 by Gage Skidmore.

If I were in charge of Twitter, I would have banned Marjorie Taylor Greene, too. But let’s not kid ourselves. This was a business decision, aimed at protecting Twitter’s brand and keeping its customers satisfied. Greene’s reach will hardly be affected (her official congressional Twitter account is still online), and her fans will simply write off her punishment as further evidence that Twitter is part of the liberal elite’s global conspiracy or whatever.

Meanwhile, Joe Rogan and other right-wingers are moving to GETTR, the latest Trump-friendly Twitter alternative. And our cultural disintegration continues apace.

Researchers dig up embarrassing data about Facebook — and lose access to their accounts

Photo (cc) 2011 by thierry ehrmann

Previously published at GBH News.

For researchers, Facebook is something of a black box. It’s hard to know what its 2.8 billion active users across the globe are seeing at any given time because the social media giant keeps most of its data to itself. If some users are seeing ads aimed at “Jew haters,” or Russian-generated memes comparing Hillary Clinton to Satan, well, so be it. Mark Zuckerberg has his strategy down cold: apologize when exposed, then move on to the next appalling scheme.

Some data scientists, though, have managed to pierce the darkness. Among them are Laura Edelson and Damon McCoy of New York University’s Center for Cybersecurity. With a tool called Ad Observer, which volunteers add to their browsers, they were able to track ads that Facebook users were being exposed to and draw some conclusions. For instance, they learned that users are more likely to engage with extreme falsehoods than with truthful material, and that more than 100,000 political ads are missing from an archive Facebook set up for researchers.

As you would expect, Facebook executives took these findings seriously. So what did they do? Did they change the algorithm to make it more likely that users would see reliable information in their news feed? Did they restore the missing ads and take steps to make sure such omissions wouldn’t happen again?

They did not. Instead, they cut off access to Edelson’s and McCoy’s accounts, making it harder for them to dig up such embarrassing facts in the future.

“There is still a lot of important research we want to do,” they wrote in a recent New York Times op-ed. “When Facebook shut down our accounts, we had just begun studies intended to determine whether the platform is contributing to vaccine hesitancy and sowing distrust in elections. We were also trying to figure out what role the platform may have played leading up to the Capitol assault on Jan. 6.”

In other words, they want to find out how responsible Zuckerberg, Sheryl Sandberg and the rest are for spreading a deadly illness and encouraging an armed insurrection. No wonder Facebook looked at what the researchers were doing and told them, gee, you know, we’d love to help, but you’re violating our privacy rules.

But that’s not even a real concern. Writing at the Columbia Journalism Review, Mathew Ingram points out that the privacy rules Facebook agreed to following the Cambridge Analytica scandal apply to Facebook itself, not to users who voluntarily agree to provide information to researchers.

Ingram quotes Princeton professor Jonathan Mayer, an adviser to Vice President Kamala Harris when she was a senator, who tweeted: “Facebook’s legal argument is bogus. The order ‘restricts how *Facebook* shares user information.’ It doesn’t preclude *users* from volunteering information about their experiences on the platform, including through a browser extension.”

As Ingram describes it, and as Edelson and McCoy themselves have said, Facebook’s actions didn’t stop their work altogether, but they did slow it down and make it more difficult. Needless to say, the company should be doing everything it can to help with such research. Then again, Zuckerberg has never shown much regard for such mundane matters as public health and the future of democracy, especially when there’s money to be made.

By contrast, Facebook’s social media competitor Twitter has actually been much more open about making its data available to researchers. My Northeastern colleague John Wihbey, who co-authored an important study several years ago about how journalists use Twitter, says the difference explains why more studies have been published about Twitter than about Facebook. “This is unfortunate,” he says, “as it is a smaller network and less representative of the general public.”

It’s like the old saw about looking for your car keys under a street light because that’s where the light is. Trouble is, with fewer than 400 million active users, Twitter is little more than a rounding error in Facebook’s universe.

Earlier this year, MIT Technology Review published a remarkable story documenting how Facebook shied away from cracking down on extremist content, focusing instead on placating Donald Trump and other figures on the political right before the 2020 election. Needless to say, the NYU researchers represent an especially potent threat to the Zuckerborg, since they plan to focus on the role that Facebook played in amplifying the disinformation that led to the insurrection, whose aftermath continues to befoul our body politic.

When the history of this ugly era is written, the two media giants that will stand out for their malignity are Fox News, for knowingly poisoning tens of millions of people with toxic falsehoods, and Facebook, for allowing its platform to be used to amplify those falsehoods. Eventually, the truth will be told — no matter what steps Zuckerberg takes to slow it down. There should be hell to pay.

Coming to terms with the false promise of Twitter

Photo (cc) 2014 by =Nahemoth=

Roxane Gay brilliantly captures my own love/hate relationship with Twitter. In a New York Times essay published on Sunday, she writes:

After a while, the lines blur, and it’s not at all clear what friend or foe look like, or how we as humans should interact in this place. After being on the receiving end of enough aggression, everything starts to feel like an attack. Your skin thins until you have no defenses left. It becomes harder and harder to distinguish good-faith criticism from pettiness or cruelty. It becomes harder to disinvest from pointless arguments that have nothing at all to do with you. An experience that was once charming and fun becomes stressful and largely unpleasant. I don’t think I’m alone in feeling this way. We have all become hammers in search of nails.

This is perfect. It’s not that people are terrible on Twitter, although they are. It’s that it’s nearly impossible to avoid becoming the worst versions of ourselves.

Twitter may not be as harmful to the culture as Facebook, but for some reason I’ve found interactions on Facebook — as well as my own behavior — to be more congenial than on Twitter. Of course, on Facebook you have more control over whom you choose to interact with, and there’s a lot more sharing of family photos and other cheerful content. Twitter, by contrast, can feel like a never-ending exercise in hyper-aggression and performative defensiveness.

From time to time I’ve tried to cut back and use Twitter only for professional reasons — promoting my work and that of others, tweeting less and reading more of what others have to say. It works to an extent, but I always slide back. Twitter seems to reward snark, but what, really, is the reward? More likes and retweets? Who cares?

I can’t leave — Twitter is too important to my work. But Gay’s fine piece is a reminder that social media have fallen far short of what we were hoping for 12 to 15 years ago, and that we ourselves are largely to blame.

A small example of how racially biased algorithms distort social media

You may have heard that the algorithms used by Facebook and other social media platforms are racially biased. I ran into a small but interesting example of that earlier today.

My previous post is about a webinar on news co-ops that I attended last week. I used a photo of Kevon Paynter, co-founder of Bloc by Block News, as the lead art and a photo of Jasper Wang, co-founder of The Defector, well down in the piece.

But when I posted links on Facebook, Twitter and LinkedIn, all three of them automatically grabbed the photo of Wang as the image that would go with the link. For example, here’s how it appeared on Twitter.

I don’t know what happened. Paynter was more central to what I was writing, which is why I led with his photo. Paynter is Black; Wang is of Asian descent. There’s more contrast in the image of Wang, which may be why the algorithms identified it as a superior picture. But in so doing they ignored my choice of Paynter as the lead.
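For what it’s worth, you can check which image a platform is likely to grab by looking at a page’s Open Graph metadata: if the page declares a preview image in an og:image tag, the platforms generally use it; if not, their own image-selection heuristics take over. Here’s a minimal sketch in Python, using the third-party requests and BeautifulSoup libraries, with a placeholder URL:

```python
# Minimal sketch: see which preview image a page declares via Open Graph.
# If there is no og:image tag, each platform's own heuristics pick an image.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/my-post/"  # placeholder URL
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

og_image = soup.find("meta", property="og:image")
if og_image:
    print("Declared preview image:", og_image["content"])
else:
    print("No og:image tag; the platform's algorithm will choose.")
```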

File this under “Things that make you go hmmmm.”

Why I’m asking you to become a member of Media Nation

At the beginning of 2021, I decided to shift my online activities — I was going to blog more and use Facebook and Twitter less. At the same time, I decided to start offering memberships to Media Nation for $5 a month, following the lead of Boston College historian Heather Cox Richardson, pundits such as Andrew Sullivan, reporters such as Patrice Peck and others.

Most of these other folks are using Substack, a newsletter platform. I figured I had sunk way too many years — 16 — into writing Media Nation as a blog, and I didn’t want to switch to a platform that’s reliant on venture capital and could eventually go the way of most such companies. So here I am, still blogging at WordPress.com, and asking readers to consider becoming members by supporting me on Patreon.

And yes, I have been blogging more as I try to stay on top of various media stories, especially involving local journalism, as well as politics, culture and the news of the day. Just this week I’ve written about Larry Flynt and the First Amendment, Duke Ellington’s legacy, a new partnership between The Boston Globe and the Portland Press Herald, and a Louisiana reporter who’s been sued for — believe it or not — filing a public-records request.

If you value this work, I hope you’ll consider supporting it for $5 a month. Members receive a newsletter every Friday morning with exclusive content.

And if you’ve already become a member, thank you.

Twitter reportedly bans Mass. political gadfly Shiva Ayyadurai

Shiva Ayyadurai, in white hat. Photo (cc) 2019 by Marc Nozell.

Massachusetts Republican gadfly Shiva Ayyadurai has been banned from Twitter, most likely for claiming that he’d lost his most recent race for the U.S. Senate only because Secretary of State Bill Galvin’s office destroyed a million electronic ballots. Adam Gaffin of Universal Hub has the details.

In 2018, I gave the City of Cambridge a GBH News New England Muzzle Award for ordering Ayyadurai to dismantle a wildly offensive sign on his company’s Cambridge property that criticized Democratic Sen. Elizabeth Warren. City officials told him that the sign, which read “Only a REAL INDIAN Can Defeat the Fake Indian,” violated the city’s building code.

Ayyadurai threatened to sue, which led the city to back off.

From the Department of Unintended Consequences

The Washington Post reports:

Right-wing groups on chat apps like Telegram are swelling with new members after Parler disappeared and a backlash against Facebook and Twitter, making it harder for law enforcement to track where the next attack could come from….

Trump supporters looking for communities of like-minded people will likely find Telegram to be more extreme than the Facebook groups and Twitter feeds they are used to, said Amarasingam. [Amarnath Amarasingam is described as a researcher who specializes in terrorism and extremism.]

“It’s not simply pro-Trump content, mildly complaining about election fraud. Instead, it’s openly anti-Semitic, violent, bomb making materials and so on. People coming to Telegram may be in for a surprise in that sense,” Amarasingam said.

Entirely predictable, needless to say.

Amazon’s move against Parler is worrisome in a way that Apple’s and Google’s are not

It’s one thing for Apple and Google to throw the right-wing Twitter competitor Parler out of their app stores. It’s another thing altogether for Amazon Web Services to deplatform Parler. Yet that’s what will happen by midnight today, according to BuzzFeed.

Parler deserves no sympathy, obviously. The service proudly takes even less responsibility for the garbage its members post than Twitter and Facebook do, and it was one of the places where planning for the insurrectionist riots took place. But Amazon’s actions raise some important free-speech concerns.

Think of the internet as a pyramid. Twitter and Facebook, as well as Google and Apple’s app stores, are at the top of that pyramid — they are commercial enterprises that may govern themselves as they choose. Donald Trump is far from the first person to be thrown off social networks, and Parler isn’t even remotely the first app to be punished.

But Amazon Web Services, or AWS, exists somewhere below the top of the pyramid. It is foundational; its servers are the floor upon which other things are built. AWS isn’t the bottom layer of the pyramid — it is, in its own way, a commercial enterprise. But it has a responsibility to respect the free-speech rights of its clients that Twitter and Facebook do not.

Yet AWS has an acceptable-use policy that reads in part:

You may not use, or encourage, promote, facilitate or instruct others to use, the Services or AWS Site for any illegal, harmful, fraudulent, infringing or offensive use, or to transmit, store, display, distribute or otherwise make available content that is illegal, harmful, fraudulent, infringing or offensive.

For AWS to cut off Parler would be like the phone company blocking all calls from a person or organization it deems dangerous. Yet there’s little doubt that Parler violated AWS’s acceptable-use policy. Look for Parler to re-establish itself on an overseas server. Is that what we want?

Meanwhile, Paul Moriarty, a member of the New Jersey State Assembly, wants Comcast to stop carrying Fox News and Newsmax, according to CNN’s “Reliable Sources” newsletter. And CNN’s Oliver Darcy is cheering him on, writing:

Moriarty has a point. We regularly discuss what the Big Tech companies have done to poison the public conversation by providing large platforms to bad-faith actors who lie, mislead, and promote conspiracy theories. But what about TV companies that provide platforms to networks such as Newsmax, One America News — and, yes, Fox News? [Darcy’s boldface]

Again, Comcast and other cable providers are not obligated to carry any particular service. Just recently we received emails from Verizon warning that it might drop WCVB-TV (Channel 5) over a fee dispute. Several years ago, Al Jazeera America was forced to throw in the towel following its unsuccessful efforts to get widespread distribution on cable.

But the power of giant telecom companies to decide what channels will be carried and what will not is immense, and something we ought to be concerned about.

I have no solutions. But I think it’s worth pointing out that AWS’s action against Parler is considerably more ominous than Google’s and Apple’s, and that for elected officials to call on Comcast to drop certain channels is more ominous still.

We have some thinking to do as a society.


Twitter solves a business problem. But in the long run, it won’t matter all that much.

A few quick thoughts on Twitter’s decision to cancel Donald Trump’s account.

I was never among those who called for Trump to be thrown off the platform. I have mixed feelings about it even now. But this is not an abridgement of the First Amendment, and I suspect it will prove to be not that big a deal as social media fracture into various ideological camps.

First, the free-speech argument: Twitter is a private company that has always acted to remove content its executives believe is bad for business. Twitter not only isn’t the government; it’s also not a public utility like the phone company, or for that matter like the broader internet, both of which are built upon principles of free speech no matter how loathsome. Boston Globe columnist Kimberly Atkins, a lawyer, has made the same point.

The not-a-big-deal argument is a little harder to make. Trump, after all, had more than 88 million Twitter followers, and it was the main way he communicated with his supporters and the broader public. But it’s a big world. He can switch to Parler, a Twitter-like application friendly to right-wingers. Yes, it’s tiny now, but how long would it stay tiny with Trump as its star?

Consider, too, the news that Apple and Google are taking steps to throw Parler off their app stores. So what? Parler could just tell its users to access the platform via the mobile web instead of through apps. This isn’t as exotic as it might sound. Twitter and Facebook members don’t have to use the apps, for instance. They can simply use their phone’s web browsers, and in some ways the experience is better.

Boston Globe columnist Hiawatha Bray writes that “even after this week’s crackdown on his inflammatory and misleading Internet postings, Trump is likely to remain an online force.” Indeed.

The reason that Twitter chief executive Jack Dorsey waited so long to act — until Trump called for a coup against his own government in the waning days of his presidency — is that Dorsey understands banning Trump will ultimately prove futile, and that it will endanger Twitter’s dominant role in social media by speeding up the emergence of ideologically sorted alternatives.

Dorsey solved his immediate problem. It’s likely that the worst is yet to come, but at least he’ll be able to tell his shareholders that he did the best that he could.


We shouldn’t let Trump’s Twitter tantrum stop us from taking a new look at online speech protections

Photo (cc) 2019 by Trending Topics 2019

Previously published at WGBHNews.org.

It’s probably not a good idea for us to talk about messing around with free speech on the internet at a moment when the reckless authoritarian in the White House is threatening to dismantle safeguards that have been in place for nearly a quarter of a century.

On the other hand, maybe there’s no time like right now. President Donald Trump is not wrong in claiming there are problems with Section 230 of the Telecommunications Act of 1996. Of course, he’s wrong about the particulars — that is, he’s wrong about its purpose, and he’s wrong about what would happen if it were repealed. But that shouldn’t stop us from thinking about the harmful effects of 230 and what we might do to lessen them.

Simply put, Section 230 says that online publishers can’t be held legally responsible for most third-party content. In just the past week, Trump took to Twitter to claim falsely that MSNBC host Joe Scarborough had murdered a woman who worked in his office and to suggest that violent protesters should be shot in the street. At least in theory, Trump, but not Twitter, could be held liable for both of those tweets — the first for libeling Scarborough, the second for inciting violence.

Ironically, without 230, Twitter no doubt would have taken Trump’s tweets down immediately rather than merely slapping warning labels on them, the action that provoked his childish rage. It’s only because of 230 that Trump is able to lie freely to his 24 million (not 80 million, as is often reported) followers without Twitter executives having to worry about getting sued.

As someone who’s been around since the earliest days of online culture, I have some insight into why we needed Section 230, and what’s gone wrong in the intervening years.

Back in the 1990s, the challenge that 230 was meant to address had as much to do with news websites as it did with early online services such as Prodigy and AOL. Print publications such as newspapers are legally responsible for everything they publish, including letters to the editor and advertisements. After all, the landmark 1964 libel case of New York Times v. Sullivan involved an ad, not the paper’s journalism.

But, in the digital world, holding publications strictly liable for their content proved to be impractical. Even in the era of dial-up modems, online comments poured in too rapidly to be monitored. Publishers worried that if they deleted some of the worst comments on their sites, they would be seen as exercising editorial control and would thus be held legally responsible for all comments.

The far-from-perfect solution: take a hands-off approach and not delete anything, not even the worst of the worst. At least to some extent, Section 230 solved that dilemma. Not only did it immunize publishers for third-party content, but it also contained what is called a “Good Samaritan” provision — publishers were now free to remove some bad content without making themselves liable for other, equally bad content that they might have missed.

Section 230 created an uneasy balance. Users could comment freely, which seemed to many of us in those more optimistic times like a step forward in allowing news consumers to be part of the conversation. (That’s where Jay Rosen’s phrase “the people formerly known as the audience” comes from.) But early hopes faded to pessimism and cynicism once we saw how terrible most of those comments were. So we ignored them.

That balance was disrupted by the rise of the platforms, especially Facebook and Twitter. And that’s because they had an incentive to keep users glued to their sites for as long as possible. By using computer algorithms to feed users more of what keeps them engaged, the platforms are able to show more advertising to them. And the way you keep them engaged is by showing them content that makes them angry and agitated, regardless of its truthfulness. The technologist Jaron Lanier, in his 2018 book “Ten Arguments for Deleting Your Social Media Accounts Right Now,” calls this “continuous behavior modification on a titanic scale.”

Which brings us to the tricky question of whether government should do something to remove these perverse incentives.

Earlier this year, Heidi Legg, then at Harvard’s Shorenstein Center on Media, Politics and Public Policy, published an op-ed in The Boston Globe arguing that Section 230 should be modified so that the platforms are held to the same legal standards as other publishers. “We should not allow the continued free-wheeling and profiteering of this attention economy to erode democracy through hyper-polarization,” she wrote.

Legg told me she hoped her piece would spark a conversation about what Section 230 reform might look like. “I do not have a solution,” she said in a text exchange on (what else?) Twitter, “but I have ideas and I am urging the nation and Congress to get ahead of this.”

Well, I’ve been thinking about it, too. And one possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.
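To make that distinction concrete, here’s a toy sketch in Python. The field names and the engagement score are invented for illustration; no platform’s actual ranking code looks like this, but the underlying contrast, sorting by time versus sorting by predicted engagement, is the one the proposal turns on.

```python
# Toy contrast between a chronological feed and an engagement-ranked feed.
# All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int               # seconds since some epoch
    predicted_engagement: float  # a model's guess at clicks, comments, shares

posts = [
    Post("alice", "Local news roundup", timestamp=100, predicted_engagement=0.2),
    Post("bob", "Outrageous (and false) claim!", timestamp=50, predicted_engagement=0.9),
    Post("carol", "Family photos", timestamp=150, predicted_engagement=0.4),
]

# How third-party content flowed when Section 230 was enacted: newest first.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# What engagement optimization does: whatever the model predicts will keep
# you scrolling rises to the top, truthful or not.
ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['carol', 'alice', 'bob']
print([p.author for p in ranked])         # ['bob', 'carol', 'alice']
```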

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers. Dorsey would quickly find that his tentative half-steps are insufficient — and Zuckerberg would have to abandon his smug refusal to do anything about Trump’s vile comments.

But wouldn’t this amount to heavy-handed government regulation? Not at all. In fact, loosening Section 230 protections would push us in the opposite direction, toward deregulation. After all, holding publishers responsible for libel, invasions of privacy, threats of violence and the like is the default in our legal system. Section 230 was a regulatory gift, and it turns out that we were too generous.

Let me concede that I don’t know how practical my idea would be. Like Legg, I offer it out of a sense that we need to have a conversation about the harm that social media are doing to our democracy. I’m a staunch believer in the First Amendment, so I think it’s vital to address that harm in a way that doesn’t violate anyone’s free-speech rights. Ending special regulatory favors for certain types of toxic corporate behavior seems like one way of doing that with a relatively light touch.

And if that meant Trump could no longer use Twitter as a megaphone for hate speech, wild conspiracy theories and outright disinformation, well, so much the better.

Talk about this post on Facebook.

