
Fighting for our online freedom of speech

As I’m sure you already know, Wikipedia’s English-language site is the most prominent to go dark today in protest of two bills being considered by Congress to crack down on copyright infringement.

The bills, the Stop Online Piracy Act (SOPA) in the House and the Protect IP Act (PIPA) in the Senate, are being pushed by major media corporations. Copyright infringement is a real problem, of course, but these bills would place the interests of copyright-holders above all other considerations. Save the Internet puts it this way:

If they are passed, corporations (with the help of the courts) will become the arbiters of what is and isn’t lawful online activity, with millions of Internet users swept in their nets as collateral damage.

Earlier item here. Note that the Big Brother poster I used to illustrate the item is missing. I wonder if that has anything to do with the protest.

And be sure to have a look at Google.

Talking back to the news with NewsTrust

Who doesn’t like to talk back to the news? That, in its essence, is the idea behind NewsTrust, a site I’ve been involved with almost from its inception in 2005. The basic idea is to rate news stories on journalistic criteria such as sourcing, fairness and depth. You can rate news organizations, and other reviewers get to rate you as well.
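
For the curious, here is a minimal sketch of how a rating system along NewsTrust's lines might combine scores. The three criteria are named above; the weighted average, and the idea of counting better-rated reviewers more heavily, are my own illustrative assumptions, not NewsTrust's actual formula.

```python
# Hypothetical NewsTrust-style scoring: average each reviewer's
# per-criterion scores, then weight reviewers by the rating they have
# earned from other members. The criteria come from the post; the
# weighting scheme is an illustrative assumption, not the real formula.

CRITERIA = ("sourcing", "fairness", "depth")

def story_score(reviews):
    """reviews: list of (reviewer_weight, {criterion: 1-5 score}) pairs."""
    total = weight_sum = 0.0
    for weight, scores in reviews:
        per_review = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
        total += weight * per_review
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

reviews = [
    (1.0, {"sourcing": 4, "fairness": 3, "depth": 4}),  # newer member
    (2.0, {"sourcing": 2, "fairness": 1, "depth": 2}),  # trusted reviewer
]
print(round(story_score(reviews), 1))  # 2.3: the trusted reviewer counts double
```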

Last week Mike LaBonte, a volunteer editor for NewsTrust who lives in Greater Boston, visited my Reinventing the News class to lead a hands-on demonstration. Dividing the class into four groups, we reviewed a story in the Washington Post on a day in the life of an Iowa tea-party protester.

It was a difficult story to rate, and my students were of two minds. On the one hand, the story was woefully incomplete, and the reporter allowed the protester to make all kinds of ridiculous assertions about President Obama and health-care reform. On the other hand, the story had value if viewed not in isolation but, rather, as part of the Post’s ongoing coverage. As a result, student reviews ranged from a high of 3.5 (out of 5) all the way down to a 1.7.

We followed that up with a class assignment: each student was asked to find, post and rate at least three stories, and to write about the experience, as well as the positives and negatives of NewsTrust, on her or his blog. Here is our class wiki, which links to everything.

Unlike in previous semesters, we did not participate in a news hunt on any particular topic. Thus you’ll find stories ranging from the death of Polish President Lech Kaczynski and the pending retirement of Supreme Court Justice John Paul Stevens to lighter fare such as why yoga appeals mainly to women.

Students have differing views about the value of NewsTrust as well. One positive aspect, it would seem, is that perusing NewsTrust restores some of the serendipity that existed back when everyone read a print newspaper every day.

Yet Mark DiSalvo observes that Google News and the people he follows on Twitter already put news stories in front of him that he might not otherwise know about, and with less technological hassle. “Google News has better customization tools, and the people I follow on Twitter are already people whose taste I trust,” he writes.

Hannah Martin writes that NewsTrust makes her think about the news in a more critical and discerning way. “What I liked about the reviewing experience was it forced me to really analyze my news on its journalistic value, which, as bad as it sounds, is often something that slips my mind,” she says. “I browse the headlines of nyt.com, read what looks important, and accept it as fact, rarely stopping to count sources or assess context. The process of reviewing though, forced me to think through all the elements of each piece, and consider what, as a journalist, should ultimately be there.”

My own view is that NewsTrust is potentially valuable as a crowdsourced front page — an alternative to letting the New York Times or the Washington Post tell us what the most important news of the day is. The problem is that the software is time-consuming to use and not particularly intuitive, even though it has been improved over the past year.

And though NewsTrust claimed more than 15,000 registered users by the end of 2009, most of the stories you’ll find seem to have been posted and rated by just a small handful of regulars. This is not surprising. Studies have shown that two much-bigger crowdsourced sites, Wikipedia and Digg, are the handiwork of small numbers of unusually active users.
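
That lopsidedness is easy to reproduce on paper. Here is a quick back-of-the-envelope simulation, which assumes user activity follows a Zipf-like distribution (user number k contributes in proportion to 1/k). The distribution is a purely illustrative assumption, not measured NewsTrust data, though the 15,000-user figure comes from above.

```python
# If activity is Zipf-distributed, a tiny core of regulars does most of
# the work even in a large registered-user base. Illustrative only.

N_USERS = 15_000
activity = [1 / rank for rank in range(1, N_USERS + 1)]
total = sum(activity)

top_100_share = sum(activity[:100]) / total
print(f"Top 100 of {N_USERS:,} users: {top_100_share:.0%} of all activity")
# Prints roughly 51%: a "small handful of regulars" dominates.
```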

I hope NewsTrust will continue to grow, because the idea is sound. The challenge is that crowdsourcing only works when there is a crowd.

John Yemma on open-source news

Christian Science Monitor editor John Yemma has some sharp observations about the demise of Encarta, the struggles of Encyclopedia Britannica and the dominance of Wikipedia. And he argues that there’s a cautionary tale for the news media therein:

If all the big newspapers at once adopted a pay model, some upstart would come along and use a small group of journalists and a larger group of Wikipedia-like amateurs to build a multimedia newspaper. Like Wikipedia, it would be the butt of countless jokes about unreliability.

Maybe it would even report on its own unreliability. But it would grow stronger because it would be organically constituted on the World Wide Web. That’s the power of open-source knowledge.

And that’s the challenge the news media face as they dive into the Internet.

This, of course, is week one of the Monitor’s Web-mostly existence, as the daily print edition has given way to a 24/7 Web site and a weekly magazine. (Via Jeff Jarvis.)

Wikipedia, probability and community

I’m sure I’m far from the only person to have a conflicted relationship with Wikipedia. Yes, I know that at least one prominent study found that the user-produced encyclopedia is about as accurate as the venerable Britannica. I also know that Wikipedia can be mind-bogglingly wrong — sometimes only for a few minutes or a few days, until an adult (I’m talking about maturity, not age) undoes someone else’s vandalism. But that’s not much consolation if you’re the victim of that bad information.

I tell my students that Wikipedia can be a great starting point, but that they should use it to find more-authoritative sources of information, not cite it in their papers. As for me, well, I’ve been known to link to Wikipedia articles, but I try to be careful, and I try to keep it to a minimum.

This week’s New York Times Magazine includes a worthwhile story by Jonathan Dee on the emergence of Wikipedia as a news source. Dee reports on a small army of activists (one is just 16) who jump in with summaries of major news events even as they are unfolding. These activists come across as admirably dedicated to the idea of fair, neutral content; many look for vandalism after a major news event takes place, such as the death of the Rev. Jerry Falwell, a favorite target of those who opposed his homophobic, right-wing views. (Yes, if I wrote that on Wikipedia, someone would edit those descriptions out.) But I would still have a nagging sense that something might be very wrong.

So what is the real difference between Wikipedia and a more traditional encyclopedia such as the Britannica? It’s not just the notion that anonymous and pseudonymous amateurs write and edit Wikipedia articles, whereas Britannica relies on experts. That’s certainly part of it, although if that were the entire explanation, Wikipedia would be worthless.

The more important difference is the idea of community-based, bottom-up verification (Wikipedia) versus authority-based, top-down verification (Britannica). Each has its purpose. The question is why the community-based model works — or at least works often enough that Wikipedia is a worthwhile stop on anyone’s research quest.

To that end, I want to mention a couple of ideas I’ve run across recently that help explain Wikipedia. The first is a 2006 book by Wired editor Chris Anderson, “The Long Tail,” in which he suggests that the accuracy of Wikipedia is based on probability theory rather than direct verification. The more widely read a Wikipedia article is, the more likely it is to be edited and re-edited, and thus to be more accurate and comprehensive than even a Britannica article. But you never know. Anderson writes (I’m quoting from the book, but this blog post captures the same idea):

Wikipedia, like Google and the collective wisdom of millions of blogs, operates on the alien logic of probabilistic statistics — a matter of likelihood rather than certainty. But our brains aren’t wired to think in terms of statistics and probability. We want to know whether an encyclopedia entry is right or wrong. We want to know that there’s a wise hand (ideally human) guiding Google’s results. We want to trust what we read.

When professionals — editors, academics, journalists — are running the show, we at least know that it’s someone’s job to look out for such things as accuracy. But now we’re depending more and more on systems where nobody’s in charge; the intelligence is simply “emergent,” which is to say that it appears to arise spontaneously from the number-crunching. These probabilistic systems aren’t perfect, but they are statistically optimized to excel over time and large numbers. They’re designed to “scale,” or improve with size. And a little slop at the microscale is the price of such efficiency at the macroscale.

Anderson is no Wikipedia triumphalist. He also writes: “[Y]ou need to take any single result with a grain of salt. Wikipedia should be the first source of information, not the last. It should be a site for information exploration, not the definitive source of facts.”
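
Anderson’s point lends itself to a quick worked example. Suppose, crudely, that every pageview carries some small independent chance of producing a corrective edit; the odds of an error surviving then fall off exponentially with readership. The numbers below are invented for illustration and are not measured Wikipedia data.

```python
# If each pageview independently fixes an error with probability p_fix,
# the error survives n views with probability (1 - p_fix) ** n.
# All numbers here are made up to illustrate the shape of the curve.

def survival(views, p_fix=0.001):
    """Chance an error is still standing after `views` pageviews."""
    return (1 - p_fix) ** views

for views in (100, 1_000, 10_000, 100_000):
    print(f"{views:>7,} views: error survives with p = {survival(views):.3f}")
# Heavily read articles converge on accuracy; obscure ones can stay
# wrong for a long time: "a little slop at the microscale."
```

That is the probabilistic bargain in miniature: reliability is a function of readership, which is why the obscure corners of Wikipedia deserve the most skepticism.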

Right now I’m reading “Convergence Culture” (2006), by Henry Jenkins, director of the Comparative Media Studies Program at MIT. Jenkins, like Anderson, offers some insight into that clichéd phrase “the wisdom of the crowd,” and why it often works. Jenkins quotes the philosopher Pierre Lévy, who has said of the Internet, “No one knows everything, everyone knows something, all knowledge resides in humanity.” Jenkins continues:

Lévy draws a distinction between shared knowledge, information that is believed to be true and held in common by the entire group, and collective intelligence, the sum total of information held individually by the members of the group that can be accessed in response to a specific question. He explains: “The knowledge of a thinking community is no longer a shared knowledge for it is now impossible for a single human being, or even a group of people, to master all knowledge, all skills. It is fundamentally collective knowledge, impossible to gather together into a single creature.” Only certain things are known by all — the things the community needs to sustain its existence and fulfill its goals. Everything else is known by individuals who are on call to share what they know when the occasion arises. But communities must closely scrutinize any information that is going to become part of their shared knowledge, since misinformation can lead to more and more misconceptions as new insight is read against what the group believes to be core knowledge.

Jenkins is writing not about Wikipedia but about an online fan community dedicated to figuring out the winners and losers on CBS’s “Survivor.” But the parallels to Wikipedia are obvious.

I’ve sometimes joked that the madness of the mob must turn into the wisdom of the crowd when you give everyone a laptop. The Jenkins/Lévy model suggests something else — “shared knowledge” defines a mob mentality; “collective intelligence” is the wisdom of the crowd. At its best, that’s what drives Wikipedia.

Dueling wikis

The Associated Press reports on Citizendium, an attempt to create a user-written online encyclopedia that’s more reliable than Wikipedia. Citizendium’s founder, Larry Sanger, says he’s a co-founder of Wikipedia — a claim that’s vigorously disputed by Wikipedia founder Jimmy Wales.

As you’ll see if you follow the links, Citizendium and Wikipedia look much the same, and they’re based on the same idea. The difference is that Sanger says he’ll require real names and use a more rigorous system of expert verification.

There’s no doubt Wikipedia has had its problems, such as the editor who fooled the New Yorker — and everyone else — about his credentials. Like many college instructors, I tell my students not to cite it, though I also tell them it can be a great starting point.

Still, Wikipedia has achieved a certain critical mass. Citizendium will be worth watching, but I wonder whether it might be easier to fix Wikipedia than to start all over.
