August 04, 2007

Wisdom vs. ignorance in networked crowds

A fascinating post by Lauren Squires at Polyglot Conspiracy documents and discusses the widespread public disdain for academics in general and linguists in particular ("On the uselessness of linguistics in particular and academia in general", 8/3/2007). Lauren looked at the reaction in blog comments to Benjamin Nugent's mention of Mary Bucholtz's work on nerdiness and racial identity; and she didn't enjoy the experience:

I was really interested to see how people would interpret her work given the complete lack of important context in the NYT, and how the notions of linguistics and academia would figure into their comments about the piece. Taking to the blogosphere via Technorati and Google blogs searches, I found a bunch of really awful, mean, spiteful, depressing commentary. Almost all of it was completely reliant on Nugent's piece, and hardly any of it reflected an actual reading of Bucholtz's work (you can tell, because half the time the very points commenters bring up are points that Bucholtz covers in her articles, including the 2001 one that Nugent cited). This didn't surprise me - who's going to wade through a 17-page academic article if they're not attempting to get an academic degree or maintain their academic status? - but I was really, really shocked by how hypocritical people are. Because basically they are fast-as-lightning to criticize Bucholtz's work for failing to consider all angles of the issue, but they do so without themselves considering that perhaps the NYT didn't give them all the information.

It's true, there's something about blogospheric comments (and web-forum and newsgroup postings) that tends to bring "awful, mean, spiteful, depressing commentary" front and center. In these forums, the wisdom of crowds fights with the effects of transient, cost-free and semi-anonymous communication, and it's not often the wisdom that comes out on top.

Lauren is not the first blogger to get discouraged about blogospheric discussions of science. A couple of years ago, Chris at Mixing Memory wrote:

As an academic, I have spent a lot of time hiding away in the ivory tower, oblivious to the larger world around me. As a graduate student, especially, I had almost no time to pay any attention to what non-scientists were saying about cognitive science. However, on a fateful day in early 2004, I chose to crawl out of my hole and actually look at what other people were saying. I started reading blogs. And now I want to crawl back in!

And it's experiences of the same sort that led us here at Language Log to abandon our experiments with comments.

But all the same, I believe that blogs and other new media are capable of giving birth to better public understanding of research (in linguistics as in all other areas), whereas traditional science journalism is doomed to repeat the doleful patterns of the past.

In a response to Chris's post, I suggested that the way to make things better is to "Encourage everyone to think about science, and to write about it on the web, whether they know anything about it or not", and to "improve the (professional scientific) literature" by promoting open access not only for all scientific publications but for all the data and programs that they rely on. I claimed that "open intellectual communities intrinsically tend to generate a virtuous cycle", both for professionals and amateurs. ("Raising standards -- by lowering them", 3/7/2005.)

You can go read what I wrote, if you like, and see if you agree with me that things have been moving in the right direction over the past couple of years. I'm not going to recapitulate the arguments here, other than to note that Chris is still blogging. In the rest of this post, I'd like to explain why I think that traditional media are unlikely to contribute much to the process.

The key problem is that no single source ever gives us "all the information", much less all the interpretation. The source of motive power here is a generalization of Moglen's Metaphorical Corollary to Faraday's Law, extended from software to scientific understanding:

...if you wrap the Internet around every person on the planet and spin the planet, software flows in the network. It's an emergent property of connected human minds that they create things for one another's pleasure and to conquer their uneasy sense of being too alone. The only question to ask is, what's the resistance of the network?

But journalists have no tradition of providing references or links, and generally don't read primary sources themselves anyway, and operate as if their goal were to be the sole source of information for their readers, not a node in a network of communication and creation. These attitudes and practices are deeply embedded in journalistic culture, and deeply connected to many other aspects of its social and economic context, and reinforced by the traditional attitudes and practices of scientists.

When I blogged briefly about the NYT Magazine article that Lauren Squires took as her starting point ("Language and identity", 7/29/2007), the main thing that I wanted to do was to present some of Bucholtz's work in her own voice, and to give people a link to her original paper, so they could read it for themselves.

Unfortunately, this wasn't nearly as easy to do as it should have been. I started with the obvious thing, namely a link to Bucholtz's paper in the online archive of the Journal of Linguistic Anthropology. JLA is "a semiannual publication of the Society for Linguistic Anthropology (SLA), a Section of the American Anthropological Association". JLA is available online via AnthroSource -- if you're accessing it from an institution that has bought a subscription. Otherwise, you need to pay a per-article fee, which I think is generally $12.

That's less than some journals charge, but it's a lot to pay for electronic access to a few pages of text that you're not sure in advance is worth anything to you at all.

(Several people complained about this access barrier, both to me by email and in blog posts on the topic. And it turned out that Bucholtz has a pdf of the article on her website, so I redirected the link there. But more often than not, linguists don't put free copies of their articles anywhere -- a simple cultural change in this practice would make an enormous difference in our field's relations outside the academy. ROA and LingBuzz are steps in the right direction.)

Anyhow, The Times Magazine's editors characteristically didn't bother to put a link to Bucholtz's work in (or next to) Nugent's article. One excuse might be that the archival version is behind the AnthroSource pay wall -- but that's not the real reason, because the NYT doesn't provide links to Open Access journals either. In fact, they generally don't provide links outside their own sites, either because they don't want to seem to endorse other sources, or because they don't want to send traffic outside their walled garden, or because their unconscious ideology is that they should be the sole source of information for their readers.

Company policies aside, it's clear that most journalists feel that the public is not capable of reading primary sources. They feel that way because they themselves typically don't read primary sources, when they're writing about science or for that matter about other areas of intellectual inquiry. Instead, they generally rely on press releases and on their (often muddled) notes from interviews with experts.

We've documented the sad results over and over again, as have many other bloggers. This post is already too long, but let me digress anyhow to comment on two small but telling recent cases. (I'll add a grey background, so you can skip these depressing little case studies if you want.)

One example was documented in my post "A bulletin from the Language Log Early Warning Center", 8/1/2007.  The culprit, Chip Scanlan, has paid his journalistic dues:

Former reporter (Providence Journal, St. Petersburg Times, Knight Ridder Washington Bureau), author of "Reporting and Writing: Basics for the 21st Century" (Oxford University Press). Co-editor, "America's Best Newspaper Writing" (Bedford/St. Martin's) Edited "Best Newspaper Writing" 1994-2000. Teaches reporting, interviewing, coaching skills, nonfiction narrative, personal essays and deadline storytelling.

Scanlan, who is obviously smart, hard-working and responsible, now teaches at the Poynter Institute and produces a writing advice column for Poynter Online. His column from 6/28/2007, "Brain Science for Writers: Active verbs move nerve cells, too" tells us that the common advice to "use active verbs" has been "put ... under the gaze of science", by British neuroscientists who have shown that "if my characters kick, kiss or dance, so will my readers' brains".

Cute, but the original article is not about active verbs but about action words, that is, "words that have a clear semantic relationship to actions, typically action verbs ... or nouns referring to tools". The stimuli in the study were mostly words that are ambiguously nouns or verbs (such as "kick, kiss, or dance"), and they were presented in isolation, not in either active or passive sentence frames.

Now, Scanlan was not the author of this misunderstanding. He took it directly from a piece in Science News (Bruce Bower, "The Brain's Word Act: Reading verbs revs up motor cortex areas", Science News, 2/7/2004). I'll bet that Scanlan took the Science News article at face value, and that it never even occurred to him to go read the original research reports.

And it wouldn't surprise me to find that Bruce Bower in turn was misled by some anonymous PR person who wrote the press release about the original research, either at the MRC lab in Cambridge or at the journal Neuron.

It's certainly a PR person who was responsible for a striking numerical botch at Science Daily a few days ago.

According to "Genetic Mutations Linked To Lupus", Science Daily, 8/1/2007:

The study involved 417 lupus patients from the United Kingdom and Germany. Mutations were found in nine patients with lupus and were absent in 1,712 people without lupus.

But according to the original research report (Min Ae Lee-Kirsch et al., "Mutations in the gene encoding the 3'-5' DNA exonuclease TREX1 are associated with systemic lupus erythematosus", Nature Genetics, published online 7/29/2007):

We identified five heterozygous missense changes and one frameshift change in 6/218 individuals with SLE from the UK compared with 0/200 nonsynonymous changes in controls .... In the German SLE cohort, we found four heterozygous missense changes, one frameshift and a single 3' UTR variant in 6/199 affected individuals but only 2/1,512 controls ....

So Science Daily says that "mutations were found in nine patients with lupus", while the research report says that they found mutations in 12 "affected individuals", 6 in each of two cohorts.

And Science Daily says that the mutations "were absent in 1,712 people without lupus" -- the research report says that the mutations were found in "2/1,512 controls" in one cohort, and 0/200 in the other, summing to 2/1,712.
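
If you want to check the pooling yourself, here's a trivial sanity check in Python (the variable names are mine, purely for illustration; the counts are the ones quoted above from the Nature Genetics report):

    # carriers and cohort sizes reported in the Nature Genetics paper
    uk_cases, german_cases = (6, 218), (6, 199)
    uk_controls, german_controls = (0, 200), (2, 1512)

    # pool the UK and German cohorts
    pooled_cases = (uk_cases[0] + german_cases[0], uk_cases[1] + german_cases[1])
    pooled_controls = (uk_controls[0] + german_controls[0], uk_controls[1] + german_controls[1])

    print(pooled_cases)     # (12, 417): twelve carriers among the 417 patients
    print(pooled_controls)  # (2, 1712): two carriers among the 1,712 controls, not zero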

How did 12 turn into 9 and 2 turn into 0? Carelessness, certainly, but whose? A clue is provided by a note at the bottom of the Science Daily article:

Note: This story has been adapted from a news release issued by Wake Forest University Baptist Medical Center.

Adapted, hell -- it was copied word-for-word, as this reprint of the original news release at newswise.com shows. So far, the rest of the mainstream media haven't picked up this particular story, and maybe they won't, because the press release gives actual mutation counts rather than odds ratios, and also there was some apparently more important biomedical news about lupus during the same period. But what do you want to bet, if the story makes it onto the Reuters or AP newswires, or into BBC News or the NYT, that the numbers reported will be 9 and 0, not 12 and 2?

OK, back to the main thread...

According to Andrew Keen's recent screed "Against Open Culture" in Always On,

...formal cultural gatekeepers are good. There is massive commercial and intellectual value in the traditional ecosystem of professional scouts, agents, editors, producers, and publicists skilled in the discovery and development of talent. Sure, this may be a system run by elites, but at least it's a meritocratic one.

Wouldn't it be pretty to think so? The trouble is, the "elites" in question -- at least the elites responsible for informing the public about science -- are collectively ignorant, careless and lazy. Of course, the individual human beings involved are no doubt mostly smart, responsible and hard-working. But the culture they live in doesn't allow them to show it.

This "traditional ecosytem" of science-related journalism is probably impossible to reform. As bad as its results are, each of its parts is highly adapted to the economic and cultural niche it inhabits. It's fun to make fun of the system's botches, and this helps to correct misinformation and to make people suitably suspicious of the media's coverage of science in general, but it would be naive to expect that sniping from the sidelines can really change the fundamental dynamics of the industry.

It's better to build an alternative culture of connections between researchers and members of the public, bypassing the "formal cultural gatekeepers" entirely. That culture, already in the process of being born, will not consist of everyone yelling at once, anonymously, in some hyperdemocratic network of web forums. It'll be contentious at times, no doubt, and also complicated, highly structured, and uncomfortably dynamic. But if it works, as Eben Moglen wrote about free software, it'll be because

... in the end ... it's just a human thing. Rather like why Figaro sings, why Mozart wrote the music for him to sing to, and why we all make up new words: Because we can. Homo ludens, meet Homo faber. The social condition of global interconnection that we call the Internet makes it possible for all of us to be creative in new and previously undreamed-of ways.

[Update -- Fev at HeadsUp: The Blog speculates about me: "At a guess, it's an artifact of news routines (did somebody give Mark a Herbert Gans anthology for his birthday or something?) rather than deliberate news slant."

This confirms my suspicion that Fev is a closet sociologist. In fact, I've never read anything by Herbert Gans, but now I've ordered "Deciding What's News" and "Democracy and the News", and expect to be enlightened. ]

[Update #2 -- Peter Svensson, a Technology Writer at the Associated Press, writes:

My friend Ed Keer pointed me to your Aug. 4 Language Log post on science journalism, and while I think it raises worthwhile points, I take exception to the implication that AP reporters don't check primary sources when writing about science.

I've participated in our science coverage, and I can tell you it's based on the original papers, plus interviews with the authors. I'd be surprised if the other members of the MSM that you mention don't have the same practices. I'd be happy to put you in touch with one of our current science writers, so you can go straight to the source.

Touché. This post doesn't document lack of primary-source reference on the part of an AP writer (though you could look here for an apparent example from an earlier post), and it was irresponsible of me to make an insinuation about the AP without citing a specific example to back it up.

However, I believe that my basic point was valid, independent of whether particular journalists do or don't look at the scientific papers they're reporting about. The choice of what science and technology news to report, and what to say about that news, seems to be substantially determined by a relatively uncritical replication of the press releases issued by journals and scientific societies, universities, and companies. I rarely see, in the AP or anywhere else, a story that looks critically at the content of an original paper, even when the reporter takes a few facts or quotes from such a source. Nor do I often see a case where a story features a research result or a research program whose newsworthiness has not been decided by someone's public-relations department.

Of course, I don't really know how science journalists work. All I can do is evaluate what they write.]

Posted by Mark Liberman at August 4, 2007 12:14 PM