The new “What Works” podcast features Jonathan Dotan, founding director of The Starling Lab for Data Integrity at Stanford University. The lab focuses on tools to help historians, legal experts and journalists protect images, text and other data from bad actors who want to manipulate that data to create deep fakes or expunge it altogether.
He has founded and led a number of digital startups, he worked at the Motion Picture Association of America, and he was a writer and producer for the HBO series “Silicon Valley.” While he was working on “Silicon Valley,” a character invented a new technology that got him thinking: What if everyday users could keep hold of their own data without having to store it in a cloud, where it is open to hackers or the government or other bad actors? That, at least in part, is what blockchain technology is all about, and it’s a subject about which Dotan has become a leading expert.
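To give a flavor of what “data integrity” work involves: the foundation of tools like the lab’s is the cryptographic hash, a fingerprint that changes completely if even one byte of a file is altered. Here’s a minimal sketch with invented data; the lab’s real pipeline, as I understand it, layers digital signatures and distributed storage on top of this basic idea.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that identifies this exact sequence of bytes."""
    return hashlib.sha256(data).hexdigest()

# At capture time, a photographer (or newsroom) records the file's fingerprint.
original = b"...image bytes as captured..."
registered = fingerprint(original)

# Later, anyone can verify a copy against the registered fingerprint.
# Even a one-byte manipulation produces a completely different digest.
doctored = b"...image bytes as manipulated..."
print(fingerprint(original) == registered)   # True: intact
print(fingerprint(doctored) == registered)   # False: tampering detected
```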
Dotan also shares a link to a valuable resource for anyone who wants to gain a deeper understanding of Web3.
I’ve got a rare rave for Gannett, which is rethinking the way its papers cover police and public safety. And Ellen Clegg unpacks a recent survey about violent attacks against broadcast reporters.
The new “What Works” podcast is up, featuring Jody Brannon, director of the Center for Journalism & Liberty at the Open Markets Institute. Brannon started her career in print in her native Seattle. Never one to shy from a challenge (she’s an avid skier and beamed in from the snowy mountains of Idaho), she transitioned to digital relatively early on in the revolution. She has had leadership or consulting roles at washingtonpost.com, usatoday.com and msn.com, as well as in the tech universe.
She served on the board of the Online News Association for 10 years and holds a Ph.D. in mass communication from the University of Maryland. The Center for Journalism & Liberty is part of the Open Markets Institute, which has a pretty bold mission statement: to shine a light on monopoly power and its dangers to democracy. The center also works to engage in grassroots coalitions, such as Freedom from Facebook and Google and 4Competition.
My Quick Take is on an arcane subject — the future of legal ads. Those notices from city and county government may seem pretty dull, but newspapers have depended on them as a vital source of revenue since the invention of the printing press. Now they’re under attack in Florida, and the threat could spread.
Ellen weighs in on a mass exodus at the venerable Texas Observer magazine, once a progressive voice to be reckoned with and home to the late great columnist Molly Ivins.
Local internet good guy Ron Newman has prevailed in a libel and copyright-infringement suit brought by a plaintiff who claimed Newman had effectively published libelous claims about him by moving the Davis Square Community forum from one hosting service to another.
Adam Gaffin of Universal Hub has all the details, which I’m not going to repeat here. The copyright claim is so ridiculous that I’m going to pass over it entirely. What I do find interesting in the suit, filed by Jonathan Monsarrat, is his allegation that Newman was not protected by Section 230 of the Communications Decency Act because, in switching platforms from LiveJournal to Dreamwidth, he had to copy all the content into the new forum.
Section 230 holds online publishers harmless for any content posted online by third parties, which protects everyone from a small community newspaper whose website has a comments section to tech giants like Facebook and Twitter. The question is whether Newman, by copying content from one platform to another, thereby became the publisher of that content, which could open him to a libel claim. The U.S. Court of Appeals for the First Circuit said no, and put it this way:
Newman copied the allegedly defamatory posts from LiveJournal to Dreamwidth verbatim. He did not encourage or compel the original authors to produce the libelous information. And, in the manner and form of republishing the posts, he neither offered nor implied any view of his own about the posts. In short, Newman did nothing to contribute to the posts’ unlawfulness beyond displaying them on the new Dreamwidth website.
There’s no question that the court ruled correctly, and I hope that Monsarrat, who has been using the legal system to harass Newman for years, brings his ill-considered crusade to an end.
Nevertheless, the idea that a publisher could lose Section 230 protections might be more broadly relevant. Several years ago I wrote for GBH News that Congress ought to consider ending such protections for content that is promoted by algorithms. If Facebook wants to take a hands-off approach to what its users publish and let everything scroll by in reverse chronological order, then 230 would apply. But Facebook’s practice of using algorithms to drive engagement, putting divisive and anger-inducing content in front of its users in order to keep them logged in and looking at advertising, ought not to be rewarded with legal protections.
The futility of Monsarrat’s argument aside, his case raises the question of how much publishers may intervene in third-party content before they lose Section 230 protections. Maybe legislation isn’t necessary. Maybe the courts could decide that Facebook and other platforms that use algorithms become legally responsible publishers of content when they promote it and make it more likely to be seen than it would otherwise.
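For readers who want to see the distinction in miniature, here’s a toy sketch in Python. The post fields, names and scores are all invented for illustration; no platform’s actual ranking code looks this simple.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # model score: likelihood of clicks/comments

posts = [
    Post("local_news", timestamp=1000, predicted_engagement=0.2),
    Post("family",     timestamp=900,  predicted_engagement=0.1),
    Post("rage_bait",  timestamp=100,  predicted_engagement=0.9),
]

# A "hands-off" feed: newest first, no editorial judgment by the platform.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# An engagement-optimized feed: the platform actively promotes whatever it
# predicts will keep users scrolling -- the intervention at issue here.
algorithmic = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['local_news', 'family', 'rage_bait']
print([p.author for p in algorithmic])    # ['rage_bait', 'local_news', 'family']
```

The content is identical in both cases; the only difference is the platform’s decision to rank by predicted engagement, and that decision is the intervention the legal argument turns on.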
And congratulations to Ron Newman, a friend to many of us in the local online community. I got to know Ron way back in 1996, when he stepped forward and volunteered to add links to the online version of a story I wrote for The Boston Phoenix on the Church of Scientology and its critics. Ron harks back to the early, idealistic days of the internet. The digital realm would be a better place if there were more people like him.
Although I started blogging in 2002, the first regular column that I ever wrote for a digital publication was for The Guardian. From 2007 until 2011, I produced a weekly commentary about media, politics and culture that was not much different from what I write now for GBH News. What was new was that, for the first time, I could embed links in my column, just as if I was blogging. I did — liberally. (Only later did my editor tell me that the software he used stripped out all the links I had put in, which meant that he had to restore them all by hand. And this was at one of the most digitally focused newspapers on the planet.)
Links have become a standard part of digital journalism. So I was surprised recently when Ed Lyons, a local political commentator who’s an old-fashioned moderate Republican, posted a Twitter thread denouncing links. It began: “I hereby declare I am *done* with hyperlinks in political writing. Pull up a chair and let me rant about how we got to this ridiculous place. What started off as citation has unintentionally turned into some sort of pundit performance art.”
The whole thread is worth reading. And it got me thinking about the value of linking. Back when everything was in print, you couldn’t link, of course, so opinion columns — constrained by space limitations — tended to include a lot of unattributed facts. The idea was that you didn’t need to credit commonly known background material, such as “North Dakota is north of South Dakota.” Sometimes, though, it was hard to know what was background and what wasn’t, and you didn’t want to do anything that might be perceived as unethical. When linking came along, you could attribute everything just by linking to it. And many of us did.
In his thread, Lyons also wrote that “it is my opinion that nobody visits any of these links. I think readers see the link and say oh well that must be true.” I agree. In fact, I tell my students that no one clicks on links, which means that they should always write clearly and include all the information they want the reader to know. The link should be used as a supplement, not as a substitute. To the extent possible, they should also give full credit to the publication and the writer when they’re quoting something as well as providing a link.
I agree with Lyons that links ought to add value and not just be put in gratuitously. And they certainly shouldn’t be snuck in as a way of whispering something that you wouldn’t want to say out loud. The classic example of that would be a notorious column a couple of years ago by Bret Stephens of The New York Times, who wrote that the intelligence of Ashkenazi Jews might be genetically superior — and backed it up with a link to a study co-authored by a so-called scientist who had been identified by the Southern Poverty Law Center as a white nationalist and a eugenicist. Stephens’ assertion was bad enough; his citation was worse, even if few people read it.
One of the most successful self-published writers working today is the historian Heather Cox Richardson. I’ve noticed that she leaves links out of the body of her Substack essays, posting them at the bottom instead. Here’s an example. I’m not going to do that, but it seems to be a decent compromise — showing her work while not letting a bunch of links clutter up her text.
In any event, I don’t expect you to follow the links I include in my writing. They’re there if you want to know more, or if you want to see if I’m fairly characterizing what I’m describing. At the very least, Lyons has reminded me of the value of including links only when they really matter.
This essay was part of last week’s Media Nation Member Newsletter. To become a member for just $5 a month, please click here.
How bad is it for Facebook right now? The company is reportedly planning to change its name, possibly as soon as this week — thus entering the corporate equivalent of the Witness Protection Program.
Surely, though, Mark Zuckerberg can’t really think anyone is going to be fooled. As the tech publisher Scott Turman told Quartz, “If the general public has a negative and visceral reaction to a brand then it may be time to change the subject. Rebranding is one way to do that, but a fresh coat of lipstick on a pig will not fundamentally change the facts about a pig.”
And the facts are devastating, starting with “The Facebook Files” in The Wall Street Journal at the beginning of the month; accelerating as the Journal’s once-anonymous source, former Facebook executive Frances Haugen, went public, testified before Congress and was interviewed on “60 Minutes”; and then exploding over the weekend as a consortium of news organizations began publishing highlights from a trove of documents Haugen gave the Securities and Exchange Commission.
No one can possibly keep up with everything we’ve learned about Facebook — and, let’s face it, not all that much of it is new except for the revelations that Facebook executives were well aware of what their critics have been saying for years. How did they know? Their own employees told them, and begged them to do something about it to no avail.
If it’s possible to summarize, the meta-critique is that, no matter what the issue, Facebook’s algorithms boost content that enrages, polarizes and even depresses its users — and that Zuckerberg and company simply won’t take the steps that are needed to lower the volume, since that might result in lower profits as well. This is the case across the board, from self-esteem among teenage girls to the Jan. 6 insurrection, from COVID disinformation to factional violence in other countries.
In contrast to past crises, when Facebook executives would issue fulsome apologies and then keep right on doing what they were doing, the company has taken a pugnacious tone this time around, accusing the media of bad faith and claiming it has zillions of documents that contradict the damning evidence in the files Haugen has provided. For my money, though, the quote that will live in infamy is one that doesn’t quite fit the context — it was allegedly spoken by Facebook communications official Tucker Bounds in 2017, and it wasn’t for public consumption. Nevertheless, it is perfect:
“It will be a flash in the pan,” Bounds reportedly said. “Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”
Is Facebook still fine? Probably not. At the moment, at least, it is difficult to imagine that Facebook won’t be forced to undergo some fundamental changes, either through public pressure or by force of law. A number of news organizations have published overviews to help you make sense of the new documents. One of the better ones was written by Adrienne LaFrance, the executive editor of The Atlantic, who was especially appalled by new evidence of Facebook’s own employees pleading with their superiors to stop amplifying the extremism that led to Jan. 6.
“The documents are astonishing for two reasons: First, because their sheer volume is unbelievable,” she said. “And second, because these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.”
LaFrance offers some possible solutions, most of which revolve around changing the algorithm to optimize safety over growth — that is, not censoring speech, but taking steps to stop the worst of it from going viral. Keep in mind that one of the key findings from the past week involved a test account set up for a fictional conservative mother in North Carolina. Within days, her news feed was loaded with disinformation, including QAnon conspiracy theories, served up because the algorithm had figured out that such content would keep her engaged. As usual, Facebook’s own researchers sounded the alarm while those in charge did nothing.
In assessing what we’ve learned about Facebook, it’s important to differentiate between pure free-speech issues and those that involve amplifying bad speech for profit. Of course, as a private company, Facebook needn’t worry about the First Amendment — it can remove anything it likes for any reason it chooses.
But since Facebook is the closest thing we have to a public square these days, I’m uncomfortable with calls that certain types of harmful content be banned or removed. I’d rather focus on the algorithm. If someone posts, say, vaccine disinformation on the broader internet, people will see it (or not) solely on the basis of whether they visit the website or discussion board where it resides.
That doesn’t trouble me any more than I’m bothered by people handing out pamphlets about the coming apocalypse outside the subway station. Within reason, Facebook ought to be able to do the same. What it shouldn’t be able to do is make it easy for you to like and share such disinformation and keep you engaged by showing you more, and more extreme, versions of it.
And that’s where we might be able to do something useful about Facebook rather than just wring our hands. Reforming Section 230, which provides Facebook and other internet publishers with legal immunity for any content posted by their users, would be a good place to start. If 230 protections were removed for services that use algorithms to boost harmful content, then Facebook would change its practices overnight.
Meanwhile, we wait with bated breath for word on what the new name for Facebook will be. Friendster? Zucky McZuckface? The Social Network That Must Not Be Named?
Zuckerberg has created a two-headed beast. For most of us, Facebook is a fun, safe environment to share news and photos of our family and friends. For a few, it’s a dangerous place that leads them down dark passages from which they may never return.
In that sense, Facebook is like life itself, and it won’t ever be completely safe. But for years now, the public, elected officials and even Facebook’s own employees have called for changes that would make the platform less of a menace to its users as well as to the culture as a whole.
Zuckerberg has shown no inclination to change. It’s long past time to force his hand.
Facebook is in the midst of what we can only hope will prove to be an existential crisis. So I was struck this morning when Boston Globe technology columnist Hiawatha Bray suggested a step that I proposed more than a year ago — eliminating Section 230 protections from social media platforms that use algorithms. Bray writes:
Maybe we should eliminate Section 230 protections for algorithmically powered social networks. For Internet sites that let readers find their own way around, the law would remain the same. But a Facebook or Twitter or YouTube or TikTok could be sued by private citizens — not the government — for postings that defame somebody or which threaten violence.
One possible approach might be to remove Section 230 protections from any online publisher that uses algorithms in order to drive up engagement. When 230 was enacted, third-party content flowed chronologically. By removing protections from algorithmic content, the law would recognize that digital media have fundamentally changed.
If Jack Dorsey of Twitter and Mark Zuckerberg of Facebook want to continue profiting from the divisiveness they’ve helped foster, then maybe they should have to pay for it by assuming the same legal liability for third-party content as print publishers.
Could this be the beginning of the end for Facebook?
Even the Cambridge Analytica scandal didn’t bring the sort of white-hot scrutiny the social media giant has been subjected to over the past few weeks — starting with The Wall Street Journal’s “Facebook Files” series, which proved that company officials were well aware their product had gone septic, and culminating in Sunday’s “60 Minutes” interview with the Journal’s source, Frances Haugen.
As we’ve seen over and over, though, these crises have a tendency to blow over. You could say that “this time it feels different,” but I’m not sure it does. Mark Zuckerberg and company have shown an amazing ability to pick themselves up and keep going, mainly because their 2.8 billion engaged monthly users show an amazing ability not to care.
On Monday, New York Times technology columnist Kevin Roose wondered whether the game really is up and argued that Facebook is now on the decline. He wrote:
What I’m talking about is a kind of slow, steady decline that anyone who has ever seen a dying company up close can recognize. It’s a cloud of existential dread that hangs over an organization whose best days are behind it, influencing every managerial priority and product decision and leading to increasingly desperate attempts to find a way out. This kind of decline is not necessarily visible from the outside, but insiders see a hundred small, disquieting signs of it every day — user-hostile growth hacks, frenetic pivots, executive paranoia, the gradual attrition of talented colleagues.
The trouble is, as Roose concedes, it could take Facebook an awfully long time to die, and it may prove to be even more of a threat to our culture during its waning years than it was on the way up.
I suspect what keeps Facebook from imploding is that, for most people, it works as intended. Very few of us are spurning vaccines or killing innocent people in Myanmar because of what we’ve seen on Facebook. Instead, we’re sharing personal updates, family photos and, yes, some news stories we’ve run across. For the most part, I like Facebook, even as I recognize what a toxic effect it’s having.
The very real damage that Facebook is doing seems far removed from the experience most of its customers have. And that is what’s going to make it incredibly difficult to do anything about it.
What could shock us about Facebook at this point? That Mark Zuckerberg and Sheryl Sandberg are getting ready to shut it down and donate all of their wealth because of their anguish over how toxic the platform has become?
No, we all know there is no bottom to Facebook. So Jeff Horwitz’s investigative report in The Wall Street Journal on Monday — revealing the extent to which celebrities and politicians are allowed to break rules the rest of us must follow — was more confirmatory than revelatory.
That’s not to say it lacks value. Seeing it all laid out in internal company documents is pretty stunning, even if the information isn’t especially surprising.
The story involves a program called XCheck, under which VIP users are given special treatment. Incredibly, there are 5.8 million people who fall into this category, so I guess you could say they’re not all that special. Horwitz explains: “Some users are ‘whitelisted’ — rendered immune from enforcement actions — while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.”
And here’s the killer paragraph, quoting a 2019 internal review:
“We are not actually doing what we say we do publicly,” said the confidential review. It called the company’s actions “a breach of trust” and added: “Unlike the rest of our community, these people can violate our standards without any consequences.”
Among other things, the story reveals that Facebook has lied to the Oversight Board it set up to review its content-moderation decisions — news that should prompt the entire board to resign.
Perhaps the worst abuse documented by Horwitz involves the Brazilian soccer star Neymar:
After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself — and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.
Facebook’s standard procedure for handling the posting of “nonconsensual intimate imagery” is simple: Delete it. But Neymar was protected by XCheck.
For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as “revenge porn,” exposing the woman to what an employee referred to in the review as abuse from other users.
“This included the video being reposted more than 6,000 times, bullying and harassment about her character,” the review found.
As good a story as this is, there’s a weird instance of both-sides-ism near the top. Horwitz writes: “Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up ‘pedophile rings,’ and that then-President Donald Trump had called all refugees seeking asylum ‘animals,’ according to the documents.”
The pedophile claim, of course, is better known as Pizzagate, the ur-conspiracy theory promulgated by QAnon, which led to an infamous shooting incident at the Comet Ping Pong pizza restaurant in Washington in 2016. Trump, on the other hand, had this to say in 2018, according to USA Today: “We have people coming into the country or trying to come in, we’re stopping a lot of them, but we’re taking people out of the country. You wouldn’t believe how bad these people are. These aren’t people. These are animals.”
Apparently the claim about Trump was rated as false because he appeared to be referring specifically to gang members, not to “all” refugees. But that “all” is doing a lot of work.
The Journal series continues today with a look at how Instagram is having a damaging effect on the self-esteem of teenage girls — and at how Facebook, which owns the service, knows about it and isn’t doing anything.
This is an important story — not just because some crucial 9/11 coverage has been lost or even because the demise of Adobe Flash means that parts of the internet are now broken. Rather, it illustrates that the internet is, in many ways, an ephemeral medium, meaning that we simply can’t preserve and archive our history the way we could during the print era.
Clare Duffy and Kerry Flynn report for CNN.com that The Washington Post, ABC News and CNN itself are among the news organizations whose interactive presentations in the aftermath of 9/11 no longer work properly.
As they recount, Flash was a real advance in the early days of the web, an important step forward for video and interactive graphics. But the late Steve Jobs, citing Flash’s security flaws, decreed that Apple’s iPhone and iPad would not run Flash. At that point the platform began to crumble, and Adobe pulled support for it at the end of 2020.
Duffy and Flynn write that some efforts are under way to use Flash emulators in order to bring some old content back to life. Adobe, which is worth $314 billion, ought to spend a few nickels to help with that effort.
More broadly, though, the problem with Flash illustrates how the internet decays over time. Link rot is an ongoing frustration — you link to something, go back a year or five later, and find that the content has moved or been taken down. Publications go out of business, taking their websites with them. Or they change content-management systems, resulting in new URLs for everything.
We’re all grateful for the work that the Internet Archive does in preserving as much as it can. Here, for instance, is the home page of The New York Times on the evening of Sept. 11, 2001.
But what’s available online isn’t nearly as complete as what’s in print. For the moment, at least, we can still go to the library and look at microfilm of print editions for publications that pay little attention to preserving their digital past. It won’t be too many years, though, before digital is all we’ve got.
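In the meantime, anyone maintaining an archive of their own work can at least detect rot and fall back on the Wayback Machine. Here’s a rough sketch in Python using the Internet Archive’s public availability API; the URLs are placeholders, and a real checker would need politeness delays and better error handling.

```python
import json
import urllib.error
import urllib.parse
import urllib.request
from typing import Optional

def is_live(url: str) -> bool:
    """Rough liveness check: does the URL still answer without an error?"""
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except (urllib.error.URLError, ValueError):
        return False

def wayback_snapshot(url: str) -> Optional[str]:
    """Ask the Internet Archive's availability API for the closest snapshot."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# Placeholder links, for illustration only.
for link in ["https://www.example.com/", "https://www.example.com/dead-page"]:
    if not is_live(link):
        print(link, "->", wayback_snapshot(link) or "no snapshot found")
```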
You would think such a commonplace observation hardly needs to be said out loud. In recent years, though, Apple has tried to market itself as the great exception.
“Privacy is built in from the beginning,” reads Apple’s privacy policy. “Our products and features include innovative privacy technologies and techniques designed to minimize how much of your data we — or anyone else — can access. And powerful security features help prevent anyone except you from being able to access your information. We are constantly working on new ways to keep your personal information safe.”
All that has now blown up in Apple’s face. Last Friday, the company backed off from a controversial initiative that would have allowed its iOS devices — that is, iPhones and iPads — to be scanned for the presence of child sexual abuse material, or CSAM. The policy, announced in early August, proved wildly unpopular with privacy advocates, who warned that it could open a backdoor to repressive governments seeking to spy on dissidents. Apple cooperates with China, for instance, arguing that it is bound by the laws of the countries in which it operates.
What made Apple’s efforts especially vulnerable to criticism was that it involved placing spyware directly on users’ devices. Although surveillance wouldn’t actually kick in unless users backed up their devices to Apple’s iCloud service, it raised alarms that the company was planning to engage in phone-level snooping.
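For the technically curious, here is a deliberately crude sketch of what on-device matching looks like. Apple’s actual design reportedly relied on a perceptual “NeuralHash” plus cryptographic blinding and a match threshold; this toy version swaps in exact SHA-256 matching against a made-up blocklist simply to show where the scanning happens: on the phone, before upload.

```python
import hashlib

# Hypothetical blocklist of known-bad content digests, shipped to the device.
# (In Apple's reported design the device could not even read the list; exact
# SHA-256 matching here is a deliberate simplification.)
BLOCKLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def scan_before_upload(file_bytes: bytes) -> bool:
    """On-device check that runs only when a file is queued for cloud backup."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in BLOCKLIST  # True -> flag for review; False -> upload

queued = b"test"  # SHA-256 of b"test" happens to be the digest listed above
if scan_before_upload(queued):
    print("flagged: matched a known-bad hash")
```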
“Apple has put in place elaborate measures to stop abuse from happening,” wrote Tatum Hunter and Reed Albergotti in The Washington Post. “But part of the problem is the unknown. iPhone users don’t know exactly where this is all headed, and while they might trust Apple, there is a nagging suspicion among privacy advocates and security researchers that something could go wrong.”
The initiative has proved to be a public-relations disaster for Apple. Albergotti, who apparently had enough of the company’s attempts at spin, wrote a remarkable sentence in his Friday story reporting the abrupt reversal: “Apple spokesman Fred Sainz said he would not provide a statement on Friday’s announcement because The Washington Post would not agree to use it without naming the spokesperson.”
That, in turn, brought an attaboy tweet from Albergotti’s Post colleague Cristiano Lima, complete with flames and applauding hands, which promptly went viral.
“We in the press ought to do this far, far more often,” tweeted Troy Wolverton, managing editor of the Silicon Valley Business Journal, in a characteristically supportive response.
Even though the media rely on unnamed sources far too often, my own view is that there would have been nothing wrong with Albergotti’s going along with Sainz’s request. Sainz was essentially offering an on-the-record quote from Apple.
(Still, it’s hard not to experience a zing of delight at Albergotti’s insouciance. Now let’s see the Post do the same with politicians and government officials.)
Apple has gotten a lot of mileage out of its embrace of privacy. Tim Cook, the company’s chief executive, delivered a speech earlier this year in which he attempted to position Apple as the ethical alternative to Google, Facebook and Amazon, whose business models depend on hoovering up vast amounts of data from their customers in order to sell them more stuff.
“If we accept as normal and unavoidable that everything in our lives can be aggregated and sold, we lose so much more than data, we lose the freedom to be human,” Cook said. “And yet, this is a hopeful new season, a time of thoughtfulness and reform.”
The current controversy comes just months after Apple unveiled new features in its iOS operating software that made it more difficult for users to be tracked in a variety of ways, offering greater security for their email and more protection from advertisers.
Yet it always seemed that there was something performative about Apple’s embrace of privacy. For instance, although Apple allows users to maintain tight control over their iPhones and iMessages, the company continues to hold the encryption keys to iCloud — which, in turn, leaves the company subject to a court order to turn over user data.
“The dirty little secret with nearly all of Apple’s privacy promises is that there’s been a backdoor all along,” wrote privacy advocates Albert Fox Cahn and Evan Selinger in a recent commentary for Wired. “Whether it’s iPhone data from Apple’s latest devices or the iMessage data that the company constantly championed as being ‘end-to-end encrypted,’ all of this data is vulnerable when using iCloud.”
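The distinction is easy to see in miniature. Here’s an illustrative sketch using Python’s third-party cryptography package; it isn’t Apple’s protocol, just a demonstration of why it matters who holds the key.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# End-to-end model: the key never leaves the user's device.
user_key = Fernet.generate_key()
cloud_storage = {"backup1.bin": Fernet(user_key).encrypt(b"private photo bytes")}
# The provider stores only ciphertext. Without user_key, it cannot comply
# with a demand to produce the plaintext -- there is nothing for it to read.

# Provider-held-key model (closer to iCloud backups as described above):
provider_key = Fernet.generate_key()  # provider generates AND retains this
cloud_storage["backup2.bin"] = Fernet(provider_key).encrypt(b"private photo bytes")
# A court order served on the provider can now yield the plaintext:
print(Fernet(provider_key).decrypt(cloud_storage["backup2.bin"]))
```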
Of course, you might argue that there ought to be reasonable limits to privacy. Just as the First Amendment does not protect obscenity, libel or serious breaches of national security, privacy laws — or, in this case, a powerful company’s policies — shouldn’t protect child pornography or certain other activities such as terrorist threats. Fair enough.
But as the aforementioned Selinger, a professor of philosophy at the Rochester Institute of Technology and an affiliate scholar at Northeastern University, argued over the weekend in a Boston Globe Ideas piece, slippery-slope arguments, though often bogus, are sometimes valid.
“Governments worldwide have a strong incentive to ask, if not demand, that Apple extend its monitoring to search for evidence of interest in politically controversial material and participation in politically contentious activities,” Selinger wrote, adding: “The strong incentives to push for intensified surveillance combined with the low costs for repurposing Apple’s technology make this situation a real slippery slope.”
Five years ago, the FBI sought a court order that would have forced Apple to provide the encryption keys so it could access the data on an iPhone used by one of the shooters in a deadly terrorist attack in San Bernardino, California. Apple refused, which set off a public controversy, including a debate between former CIA director John Deutch and Harvard Law School professor Jonathan Zittrain that I covered for GBH News.
The controversy proved to be for naught. In the end, the FBI was able to break into the phone without Apple’s help. Which suggests a solution, however imperfect, to the current controversy.
Apple should withdraw its plan to install spyware directly on users’ iPhones and iPads. And it should remind users that anything stored in iCloud might be revealed in response to a legitimate court order. More than anything, Apple needs to stop making unrealistic promises and remind its users: