By Dan Kennedy • The press, politics, technology, culture and other passions

Category: Technology

Kara Swisher can’t make sense out of what Elon Musk is doing, either

Elon Musk. Photo (cc) 2019 by Daniel Oberhaus.

If you are trying to make sense out of what Elon Musk is doing with (or, rather, to) Twitter, I recommend this podcast in which the tech journalist Kara Swisher talks about her interactions with the billionaire over the years.

Swisher is as appalled as any of us, but she’s more sad than angry — she says she genuinely believed Musk might be the right person to fix the money-losing platform. She doesn’t attribute any nefarious motives to his brief reign, which has been marked by chaos and performative cruelty toward Twitter’s employees. But she can’t make sense of it, either.

Toward the end, her producer, Nayeema Raza, asks Swisher what she’d like to ask Musk if they were back on speaking terms — which they’re currently not. Swisher’s four-word answer: “What are you doing?”

A bit about Mastodon

Photo (cc) 2007 by Benjamin Golub

I’ve opened an account on Mastodon in the hopes that it will prove to be a good alternative to Twitter, now in the midst of an astonishing implosion.

What I’m hoping for is something like Twitter pre-Elon Musk, only without the trolls and bots, the personal abuse and the piling-on. I don’t think any of us believed Twitter was a wonderful place before Musk lit it on fire. So far, Mastodon sort of fits the bill, but it’s also something different. The culture is more polite — maybe excessively so, though that might just be a first impression.

In any case, there doesn’t seem to be any going back. I wouldn’t be surprised if Twitter is essentially gone in a few weeks. You can follow me on Mastodon at @dankennedy_nu@journa.host. And for a really good explanation of Mastodon and how its decentralized governance works, I highly recommend this Lawfare podcast.

The shame of Musk’s takeover is that Twitter was starting to get (a little) better

Elon Musk. Photo (cc) 2019 by Daniel Oberhaus.

The shame of it is that Twitter was starting to get a little better. Some months back I decided to spend $3 a month for Twitter Blue. You had up to a minute to pull back a tweet if you saw a typo or if a picture didn’t display properly. More recently, they added an actual edit button, good for 30 minutes. Best of all is something called “Top Articles,” which shows stories that are most widely shared by your network and their networks. I almost always find a couple of stories worth reading — including the one from The Verge that I’ve shared below.

Anyway, here we are. Billionaire Elon Musk is now the sole owner of a social media platform that I check in with multiple times during the day and post to way too much. Twitter is much smaller than Facebook and YouTube, and smaller than TikTok and Instagram, too. In fact, it’s smaller than just about everything else. But it punches above its weight because it’s the preferred outlet for media and political people. It’s also a cesspool of sociopathy. We’re all worried that Musk will make it worse, but let’s be honest — it’s already pretty bad.

The smartest take I’ve seen so far is by Nilay Patel in The Verge. Headlined “Welcome to hell, Elon,” the piece argues that Musk isn’t going to be able to change Twitter as much as he might like to because to do so will drive advertisers away — something that’s already playing out in General Motors’ decision to suspend its ads until its executives can get a better handle on what the Chief Twit has in mind. Patel also points out that Musk is going to receive a lot of, er, advice about whom to ban on Twitter from countries where his electric car company, Tesla, does business, including Germany, China and India. Those are three very different cultures, but all of them have more restrictive laws regarding free speech than the United States. Patel writes:

The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works. Content moderation is what Twitter makes — it is the thing that defines the user experience. It’s what YouTube makes, it’s what Instagram makes, it’s what TikTok makes. They all try to incentivize good stuff, disincentivize bad stuff, and delete the really bad stuff…. The longer you fight it or pretend that you can sell something else, the more Twitter will drag you into the deepest possible muck of defending indefensible speech.

Indeed, Twitter has already reinstated the noted antisemite formerly known as Kanye West, although Musk, weirdly enough, says he had nothing to do with it.

My approach to tweeting in Elon Musk’s private garden will be to do what I’ve always done and see what happens. I use it too much to walk away, but I don’t like it enough to wring my hands.

A quarter-century after its passage, Section 230 is up for grabs

A quarter-century after Congress decided to hold publishers harmless for third-party content posted on their websites, we are headed for a legal and constitutional showdown over Section 230, part of the Communications Decency Act of 1996.

Before the law was passed, publishers worried that if they removed some harmful content they might be held liable for failing to take down other content, which gave them a legal incentive to leave libel, obscenity, hate speech and misinformation in place. Section 230 solved that by including a so-called Good Samaritan provision that allowed publishers to pick and choose without incurring liability.

Back in those early days, of course, we weren’t dealing with behemoths like Facebook, YouTube and Twitter, which use algorithms to boost content that keeps their users engaged — which, in turn, usually means speech that makes them angry or upset. In the mid-1990s, the publishers that were seeking protection were generally newspapers that had opened up online comments and nascent online services like Prodigy and AOL. Publishers are fully liable for any content over which they have direct control, including news stories, advertisements and letters to the editor. Congress understood that the flood of content being posted online raised different issues.

But after Twitter booted Donald Trump off its service and Facebook suspended him for inciting violence during and after the attempted insurrection of Jan. 6, 2021, Trump-aligned Republicans began agitating against what they called censorship by the tech giants. The idea that private companies are even legally capable of engaging in censorship is something that can be disputed, but it’s gained some traction in legal circles, as we shall see.

Meanwhile, Democrats and liberals argued that the platforms weren’t acting aggressively enough to remove dangerous and harmful posts, especially those promoting disinformation around COVID-19 such as anti-masking and anti-vaccine propaganda.

A lot of this comes down to whether the platforms are common carriers or true publishers. Common carriers are legally forbidden from discriminating against any type of user or traffic. Providers of telephone service would be one example. Another example would be the broader internet of which the platforms are a part. Alex Jones was thoroughly deplatformed in recent years — you can’t find him on Facebook, Twitter or anywhere else. But you can find his infamous InfoWars site on the web, and, according to SimilarWeb, it received some 9.4 million visits in July of this year. You can’t kick Jones off the internet; at most, you can pressure his hosting service to drop him. But even if they did, he’d just move on to the next service, which, by the way, needn’t be based in the U.S.

True publishers, by the way, enjoy near-absolute leeway over what they choose to publish or not publish. A landmark case in this regard is Miami Herald v. Tornillo (1974), in which the Supreme Court ruled that a Florida law requiring newspapers to publish responses from political figures who’d been criticized was unconstitutional. Should platforms be treated as publishers? Certainly it seems ludicrous to hold them fully responsible for the millions of pieces of content that their users post on their sites. Yet the use of algorithms to promote some content in order to sell more advertising and earn more profits involves editorial discretion, even if those editors are robots. In that regard, they start to look more like publishers.

Maybe it’s time to move past the old categories altogether. In a recent appearance on WBUR Radio’s “On Point,” University of Minnesota law professor Alan Rozenshtein said that platforms have some qualities of common carriers and some qualities of publishers. What we really need, he said, is a new paradigm that recognizes we’re dealing with something unlike anything we’ve seen before.

Which brings me to two legal cases, both of which are hurtling toward a collision.

Recently the U.S. Court of Appeals for the 5th Circuit upheld a Texas law that, among other things, forbids platforms from removing any third-party speech that’s based on viewpoint. Many legal observers had believed the law would be decisively overturned since it interferes with the ability of private companies to conduct their business as they see fit, and to exercise their own First Amendment right to delete content they regard as harmful. But the court didn’t see it that way, with Judge Andrew Oldham writing: “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.” This is a view of the platforms as common carriers.

As Rozenshtein said, the case is almost certainly headed for the Supreme Court because it clashes with an opinion by the 11th Circuit, which overturned a similar law in Florida, and because it’s unimaginable that any part of the internet can be regulated on a state-by-state basis. Such regulations need to be hashed out by Congress and apply to all 50 states, Rozenshtein said.

Meanwhile, the Supreme Court has agreed to hear a case coming from the opposite direction. The case, brought by the family of a 23-year-old student who was killed in an ISIS attack in Paris in 2015, argues that YouTube, owned by Google, should be held liable for using algorithms to boost terrorist videos, thus helping to incite the attack. “Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” according to the lawsuit.

Thus we may be heading toward a constitutionally untenable situation whereby tech companies could be held liable for content that the Texas law has forbidden them to remove.

The ISIS case is especially interesting because it’s the use of algorithms to boost speech that is at issue — again, something that was, at most, in its embryonic stages at the time that Section 230 was enacted. Eric Goldman, a law professor at Santa Clara University, put it this way in an interview with The Washington Post: “The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions. The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”

I’ve suggested that one way to reform Section 230 might be to remove protections for any algorithmically boosted speech, which might actually be where we’re heading.

All of this comes at a time when the Supreme Court’s turn to the right has called its legitimacy into question. Two of the justices, Clarence Thomas and Neil Gorsuch, have even suggested that the libel protections afforded the press under the landmark Times v. Sullivan decision be overturned or scaled back. After 26 years, it may well be time for some changes to Section 230. But can we trust the Supremes to get it right? I guess we’ll just have to wait and see.

Jonathan Dotan on deep fakes, blockchain technology and the promise of Web3

Jonathan Dotan

The new “What Works” podcast features Jonathan Dotan, founding director of The Starling Lab for Data Integrity at Stanford University. The lab focuses on tools to help historians, legal experts and journalists protect images, text and other data from bad actors who want to manipulate that data to create deep fakes or expunge it altogether.

He has founded and led a number of digital startups, he worked at the Motion Picture Association of America, and he was a writer and producer for the HBO series “Silicon Valley.” While he was working on “Silicon Valley,” a character invented a new technology that got him thinking: What if everyday users could keep hold of their own data without having to store it in a cloud, where it is open to hackers or the government or other bad actors? That, at least in part, is what blockchain technology is all about, and it’s a subject about which Dotan has become a leading expert.
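To give a rough sense of the underlying idea (this is my own back-of-the-envelope sketch, not the Starling Lab’s actual tooling), data-integrity systems typically identify a piece of content by a cryptographic fingerprint, so that any later tampering is detectable:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 content address for a piece of data."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: an image's bytes are fingerprinted at the moment of
# capture, and the hash is recorded somewhere tamper-evident (a public
# ledger, several independent archives, and so on).
captured = b"...raw image bytes..."
recorded_hash = fingerprint(captured)

# Anyone holding a copy can later check that it matches what was captured;
# changing even a single byte produces a completely different hash.
assert fingerprint(captured) == recorded_hash
assert fingerprint(b"...doctored image bytes...") != recorded_hash
```

Blockchain-based storage builds on the same principle: because the record of fingerprints is replicated across many independent machines rather than held in one company’s cloud, no single party, whether a hacker, a hosting provider or a government, can quietly rewrite it.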

Dotan also shares a link to a valuable resource for anyone who wants to gain a deeper understanding of Web3.

I’ve got a rare rave for Gannett, which is rethinking the way its papers cover police and public safety. And Ellen Clegg unpacks a recent survey about violent attacks against broadcast reporters.

You can listen to our conversation here and subscribe through your favorite podcast app.

Tech thinker Jody Brannon on the digital future and the dangers of monopoly

Jody Brannon

The new “What Works” podcast is up, featuring Jody Brannon, director of the Center for Journalism & Liberty at the Open Markets Institute. Brannon started her career in print in her native Seattle. Never one to shy from a challenge (she’s an avid skier and beamed in from the snowy mountains of Idaho), she transitioned to digital relatively early on in the revolution. She has had leadership or consulting roles at washingtonpost.com, usatoday.com and msn.com, as well as elsewhere in the tech world.

She served on the board of the Online News Association for 10 years and holds a Ph.D. in mass communication from the University of Maryland. The Center for Journalism & Liberty is part of the Open Markets Institute, which has a pretty bold mission statement: to shine a light on monopoly power and its dangers to democracy. The center also takes part in grassroots coalitions, such as Freedom from Facebook and Google and 4Competition.

My Quick Take is on an arcane subject — the future of legal ads. Those notices from city and county government may seem pretty dull, but newspapers have depended on them as a vital source of revenue since the invention of the printing press. Now they’re under attack in Florida, and the threat could spread.

Ellen weighs in on a mass exodus at the venerable Texas Observer magazine, once a progressive voice to be reckoned with and home to the late great columnist Molly Ivins.

You can listen to our conversation here and subscribe through your favorite podcast app.

A bogus libel suit raises some interesting questions about the limits of Section 230

Local internet good guy Ron Newman has prevailed in a libel and copyright-infringement suit brought by a plaintiff who claimed Newman had effectively published libelous claims about him by moving the Davis Square Community forum from one hosting service to another.

Adam Gaffin of Universal Hub has all the details, which I’m not going to repeat here. The copyright claim is so ridiculous that I’m going to pass over it entirely. What I do find interesting in the suit, filed by Jonathan Monsarrat, is his allegation that Newman was not protected by Section 230 of the Communications Decency Act because, in switching platforms from LiveJournal to Dreamwidth, he had to copy all the content into the new forum.

Section 230 holds online publishers harmless for any content posted online by third parties, which protects everyone from a small community newspaper whose website has a comments section to tech giants like Facebook and Twitter. The question is whether Newman, by copying content from one platform to another, thereby became the publisher of that content, which could open him to a libel claim. The U.S. Court of Appeals for the First Circuit said no, and put it this way:

Newman copied the allegedly defamatory posts from LiveJournal to Dreamwidth verbatim. He did not encourage or compel the original authors to produce the libelous information. And, in the manner and form of republishing the posts, he neither offered nor implied any view of his own about the posts. In short, Newman did nothing to contribute to the posts’ unlawfulness beyond displaying them on the new Dreamwidth website.

There’s no question that the court ruled correctly, and I hope that Monsarrat, who has been using the legal system to harass Newman for years, brings his ill-considered crusade to an end.

Nevertheless, the idea that a publisher could lose Section 230 protections might be more broadly relevant. Several years ago I wrote for GBH News that Congress ought to consider ending such protections for content that is promoted by algorithms. If Facebook wants to take a hands-off approach to what its users publish and let everything scroll by in reverse chronological order, then 230 would apply. But Facebook’s practice of using algorithms to drive engagement, putting divisive and anger-inducing content in front of its users in order to keep them logged in and looking at advertising, ought not to be rewarded with legal protections.
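To put that distinction in concrete terms, here is a minimal sketch (the post fields and scoring weights are invented for illustration, not anything Facebook has disclosed) of the difference between a feed a platform merely displays and one it actively ranks:

```python
from datetime import datetime, timezone

# Hypothetical posts; the fields and numbers below are purely illustrative.
posts = [
    {"text": "Family photos", "posted": datetime(2021, 10, 25, 9, 0, tzinfo=timezone.utc),
     "likes": 12, "shares": 1, "angry_reactions": 0},
    {"text": "Outrage-bait conspiracy post", "posted": datetime(2021, 10, 24, 18, 0, tzinfo=timezone.utc),
     "likes": 40, "shares": 300, "angry_reactions": 500},
]

# A "hands-off" feed: newest first, with no judgment by the platform
# about which posts deserve more attention.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# An engagement-ranked feed: the platform scores and promotes content,
# which is the editorial-style intervention at issue here.
def engagement_score(post):
    return post["likes"] + 5 * post["shares"] + 10 * post["angry_reactions"]

ranked = sorted(posts, key=engagement_score, reverse=True)
```

Under the reform I have in mind, the first feed would keep its Section 230 shield, while the second, because the platform is exercising judgment about what to amplify, would not.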

The futility of Monsarrat’s argument aside, his case raises the question of how much publishers may intervene in third-party content before they lose Section 230 protections. Maybe legislation isn’t necessary. Maybe the courts could decide that Facebook and other platforms that use algorithms become legally responsible publishers of content when they promote it and make it more likely to be seen than it would otherwise.

And congratulations to Ron Newman, a friend to many of us in the local online community. I got to know Ron way back in 1996, when he stepped forward and volunteered to add links to the online version of a story I wrote for The Boston Phoenix on the Church of Scientology and its critics. Ron harks back to the early, idealistic days of the internet. The digital realm would be a better place if there were more people like him.

Linking reconsidered

Photo (cc) 2013 by liebeslakritze

Although I started blogging in 2002, the first regular column that I ever wrote for a digital publication was for The Guardian. From 2007 until 2011, I produced a weekly commentary about media, politics and culture that was not much different from what I write now for GBH News. What was new was that, for the first time, I could embed links in my column, just as if I was blogging. I did — liberally. (Only later did my editor tell me that the software he used stripped out all the links I had put in, which meant that he had to restore them all by hand. And this was at one of the most digitally focused newspapers on the planet.)

Links have become a standard part of digital journalism. So I was surprised recently when Ed Lyons, a local political commentator who’s an old-fashioned moderate Republican, posted a Twitter thread denouncing links. It began: “I hereby declare I am *done* with hyperlinks in political writing. Pull up a chair and let me rant about how we got to this ridiculous place. What started off as citation has unintentionally turned into some sort of pundit performance art.”

The whole thread is worth reading. And it got me thinking about the value of linking. Back when everything was in print, you couldn’t link, of course, so opinion columns — constrained by space limitations — tended to include a lot of unattributed facts. The idea was that you didn’t need to credit commonly known background material, such as “North Dakota is north of South Dakota.” Sometimes, though, it was hard to know what was background and what wasn’t, and you didn’t want to do anything that might be perceived as unethical. When linking came along, you could attribute everything just by linking to it. And many of us did.

In his thread, Lyons also wrote that “it is my opinion that nobody visits any of these links. I think readers see the link and say oh well that must be true.” I agree. In fact, I tell my students that no one clicks on links, which means that they should always write clearly and include all the information they want the reader to know. The link should be used as a supplement, not as a substitute. To the extent possible, they should also give full credit to the publication and the writer when they’re quoting something as well as providing a link.

I agree with Lyons that links ought to add value and not just be put in gratuitously. And they certainly shouldn’t be snuck in as a way of whispering something that you wouldn’t want to say out loud. The classic example of that would be a notorious column a couple of years ago by Bret Stephens of The New York Times, who wrote that the intelligence of Ashkenazi Jews might be genetically superior — and backed it up with a link to a study co-authored by a so-called scientist who had been identified by the Southern Poverty Law Center as a white nationalist and a eugenicist. Stephens’ assertion was bad enough; his citation was worse, even if few people read it.

One of the most successful self-published writers currently is the historian Heather Cox Richardson. I’ve noticed that she leaves links out of her Substack essays entirely, posting them at the bottom instead. Here’s an example. I’m not going to do that, but it seems to be a decent compromise — showing her work while not letting a bunch of links clutter up her text.

In any event, I don’t expect you to follow the links I include in my writing. They’re there if you want to know more, or if you want to see if I’m fairly characterizing what I’m describing. At the very least, Lyons has reminded me of the value of including links only when they really matter.

This essay was part of last week’s Media Nation Member Newsletter. To become a member for just $5 a month, please click here.

A tidal wave of documents exposes the depths of Facebook’s depravity

Photo (cc) 2008 by Craig ONeal

Previously published at GBH News.

How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.

Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”

And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.

No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.

If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.

In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:

“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.

“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”

LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.

In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.

But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.

That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more, and more extreme, versions of it.

And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.

Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?

Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.

In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.

Zuckerberg has shown no inclination to change. It’s long past time to force his hand.

Why Section 230 should be curbed for algorithmically driven platforms

Facebook whistleblower Frances Haugen testifies on Capitol Hill Tuesday.

Facebook is in the midst of what we can only hope will prove to be an existential crisis. So I was struck this morning when Boston Globe technology columnist Hiawatha Bray suggested a step that I proposed more than a year ago — eliminating Section 230 protections from social media platforms that use algorithms. Bray writes:

Maybe we should eliminate Section 230 protections for algorithmically powered social networks. For Internet sites that let readers find their own way around, the law would remain the same. But a Facebook or Twitter or YouTube or TikTok could be sued by private citizens — not the government — for postings that defame somebody or which threaten violence.

Here’s what I wrote for GBH News in June 2020:

One possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.

If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers.

I hope it’s an idea whose time has come.

