Thursday, October 25, 2007

Peer Review and the Wisdom of Crowds

We know that as children we resisted taking our medicine not because it wasn't good for us but because it didn't taste good. Of course, as children we didn't have the wits to rationalize our distaste for medicine as anything other than a literal distaste. We like to think that as adults we are above all this, and can rise above a little discomfort to know what's good for us and figuratively take our medicine. This is presumably the case in the rarefied world of academic debate, where a group of your academic peers reviews your argument with a dispassionate and objective eye. This process is aptly named peer review, whereby a jury of your intellectual equals decides whether your term paper/journal article/book is up to snuff, or whether it should be snuffed out.

But this is the type of hanging jury that will hand you your head if you just look at them cross, or cross them up by saying something that makes them look foolish or just plain wrong.

This was the argument of the psychologist Michael J. Mahoney of Pennsylvania State University, who was one of the first to examine how well the peer review process works in evaluating scientific papers. In a landmark study (see peer review), he sent copies of one paper to 75 reviewers but doctored the results so that in some cases the research appeared to support mainstream theories, while in others it ran against them.


"When the results ran contrary to the reviewer's theoretical beliefs," Mahoney reported, "the procedures were berated and the manuscript was rejected. When the results 'confirmed' the reviewers beliefs, the same procedures were lauded and the manuscript was recommended for publication."

Mahoney's findings struck a nerve. Within three months after he presented his results last year at a meeting of the American Association for the Advancement of Science, he said, he "received probably 200 to 300 letters and phone calls from scientists who felt they had been victims of that kind of discrimination."

The problem is that when theoretical perspectives are informed hunches, who is to say your hunch is better than mine? It's easy to dismiss a competing hunch by saying that although the data are there, the right hypothesis isn't. For example, if your great paper notes that a rat running a maze will take a left turn to get at the cheese because it's hungry, your data may be above suspicion, but your hypothesis of a hungry rat may not. So the reviewer, in his damning repartee, will dismiss your paper by remarking that it tragically errs by not confirming the well-known fact that cosmic rays cause changes in maze navigation.

Mahoney's point, however, was that a hypothesis drawn from experimental data has nothing to do with the quality of the data itself. Thus it's easy to throw the proverbial baby out with the bathwater by discarding the data because it can lead to uncomfortable conclusions. So what's the solution? Make conclusions that are safe, or don't make any conclusions at all. In other words, let the facts, and not you, speak for themselves. This is called inductive reasoning, and it is safe, boring, uncontroversial, and hardly the stuff of great science. But it gets you published, which at least provides tenure if not a Nobel Prize.

The plain truth, or should I say likely hypothesis, is that we tend to reject stuff because it's just plain uncomfortable. In the real world, we can't escape making bad decisions. This is called the school of hard knocks, and through it we realize that taking our medicine is, after all, good for us. In the academic world, however, if you support a bonehead hypothesis you can withdraw not only from the painful theories that challenge it but also from the painful facts that don't support it, and while away your time counting angels on pinheads in the company of your fellow true believers.

This underscores an even greater problem: uncomfortable arguments, regardless of their basis in fact, are shunned not only by individual academics but also by the very provisos of the professional journals that represent their collective opinion. Thus if you want to make a reasoned argument against psychoanalysis, behaviorism, or evolutionary psychology in one of their journals, a rejection is not only in the cards, it's in the RULES. No argument gets settled because no questions get to be raised, let alone discussed. Everyone talks past each other rather than to each other, and academia becomes not just an ivory tower but a tower of Babel.

And that's why I'm glad I am not an academic psychologist. It would simply drive me mad.

For a rather personal example of this, see tomorrow's post.

Tuesday, October 09, 2007

Getting down to business

What if one year, in a spasm of superhuman creativity, you were to write 20,000 articles that were published in all the best academic journals? And what if no one actually read them, let alone put their lessons to use? Welcome to the wonderful world of business pedagogy, where business journalese takes aim at the concerns of business managers and promptly overshoots its target, or, better said, shoots itself in the foot.

This is the problem with academic business research, which goes pretty much unread by an audience that gives you only ten seconds to get to your point. Since getting to your point, or more specifically marketing your point, is a skill that academics rarely possess, the audience turns instead to those white-collar types bestowed with street cred for earning a billion or so for General Electric, IBM, or Starbucks. It's sort of like Dr. Phil becoming a genius psychologist because he 'cured' a million or so poor souls on Oprah.

In an article on the state of business journalese in 'The Economist', the global accrediting agency for business schools recommended that the value of faculty research be judged not by citations in journals but by its demonstrated impact on the workaday world. Since journal articles don't have much of an impact, you get the drift.

Ultimately it is not the validity of academic research that counts in the real world, but its parsimony, readability, and, most importantly, usefulness. For business people, usefulness is measured by how well an idea translates into procedures that provide an edge in the Darwinian marketplace. Hence, nonsense has the shelf life of a Care Bear in the Cretaceous. Like business research, much of psychology tries to justify itself by making observations that common folks can use, and it promptly fails or is ignored. Too bad there is no global accrediting agency for the social sciences as there is for business. It would be good indeed for those of us interested in the business of living.