Thursday, October 25, 2007

Peer Review and the Wisdom of Crowds

We know that as children we resisted taking our medicine not because it wasn't good for us but because it didn't taste good. Of course, as children we didn't have the wits to rationalize that our distaste for medicine was anything other than literal. We like to think that as adults we are above all this, and can rise above a little discomfort to know what's good for us, and figuratively take our medicine. This is presumably the case in the rarefied world of academic debate, where a group of your academic peers reviews your argument with a dispassionate and objective eye. This process is aptly named peer review, where a jury of your intellectual equals decides whether your term paper/journal article/book is up to snuff, or whether it should be snuffed out.

But this is the type of hanging jury that will hand you your head if you just look at them cross, or cross them up by saying something that makes them look foolish or just plain wrong.

This was the argument of the sociologist Michael J. Mahoney of Pennsylvania State University, who was one of the first to examine how well the peer review process works in evaluating scientific papers. In a landmark study (see peer review), he sent copies of one paper to 75 reviewers but doctored the results so that in some cases the research appeared to support mainstream theories, while in others it ran against them.

"When the results ran contrary to the reviewer's theoretical beliefs," Mahoney reported, "the procedures were berated and the manuscript was rejected. When the results 'confirmed' the reviewers' beliefs, the same procedures were lauded and the manuscript was recommended for publication."

Mahoney's findings struck a nerve. Within three months after he presented his results last year at a meeting of the American Association for the Advancement of Science, he said, he "received probably 200 to 300 letters and phone calls from scientists who felt they had been victims of that kind of discrimination."

The problem is that when theoretical perspectives are informed hunches, who is to say your hunch is better than mine? Thus it's easy to dismiss a competing hunch by merely saying that although the data is there, the right hypothesis isn't. For example, if your great paper notes that a rat running a maze will take a left turn to get at the cheese because it's hungry, your data may be above suspicion, but your hypothesis of a hungry rat may not be. So the reviewer, in his damning repartee, will dismiss your paper by remarking that it tragically errs by not confirming the well-known fact that cosmic rays cause changes in maze navigation.

Mahoney's point, however, was that a hypothesis drawn from experimental data has nothing to do with the quality of the data itself. Thus it's easy to throw the proverbial baby out with the bathwater by discarding the data because it can lead to uncomfortable conclusions. So what's the solution? Make conclusions that are safe, or don't make any conclusions at all. In other words, let the facts, and not you, speak for themselves. This is called inductive reasoning, and it is safe, boring, uncontroversial, and hardly the stuff of great science. But it gets you published, which at least provides tenure, if not a Nobel prize.

The plain truth, or should I say likely hypothesis, is that we tend to reject stuff because it's just plain uncomfortable. In the real world, we can't escape making bad decisions. This is called the school of hard knocks, and through it we realize that taking our medicine is, after all, good for us. However, in the academic world, if you support a bonehead hypothesis you can withdraw not only from the painful theories that challenge it, but also from the painful facts that don't support it, and while away your time counting angels on pinheads in the company of your fellow true believers. This underscores an even greater problem, as uncomfortable arguments, regardless of their basis in fact, are shunned not only by individual academics but also by the very provisos of the professional journals that represent their collective opinion. Thus if you want to make a reasoned argument against psychoanalysis, behaviorism, or evolutionary psychology in one of their journals, a rejection is not only in the cards, it's in the RULES. Thus no argument gets settled because no questions get to be raised, let alone discussed. Thus everyone talks past each other rather than to each other, and academia becomes not just an ivory tower but a tower of Babel.

And that's why I'm glad I am not an academic psychologist. It would simply drive me mad.

For a rather personal example of this, see tomorrow's post.