That’s the premise put forth by John P.A. Ioannidis, a medical researcher in Greece and adjunct professor at Tufts University School of Medicine. Ioannidis has modeled what can happen when unconscious biases creep into hypothesis testing. The field he had in mind was medical research, but others commenting on his work see parallels in areas ranging from brain mapping to parapsychology.
Some of Ioannidis’ findings about relationships claimed to be statistically significant sound like common sense. For example, studies with small sample sizes are less likely to yield correct results, as are studies looking for effects that are relatively small. And studies confirming consequences for which there is already a lot of solid evidence are probably valid.
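To see why sample size and plausibility matter so much, it helps to look at the kind of arithmetic that underlies Ioannidis’ argument. The short Python sketch below computes the chance that a “statistically significant” result is actually true from a study’s power, its significance level, and the pre-study odds that the hypothesis is correct. The formula is the standard positive-predictive-value relationship; the specific numbers plugged in are illustrative assumptions, not figures taken from Ioannidis’ paper.

```python
# Minimal sketch: how often a "significant" finding reflects a real effect.
# PPV = (1 - beta) * R / ((1 - beta) * R + alpha), where R is the pre-study
# odds that the tested relationship is true, alpha is the significance level,
# and (1 - beta) is the study's statistical power.

def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Chance that a statistically significant finding is actually true."""
    true_positives = power * prior_odds    # real effects correctly detected
    false_positives = alpha                # null effects wrongly flagged
    return true_positives / (true_positives + false_positives)

# A well-powered study of a plausible hypothesis (illustrative numbers):
print(f"power=0.80, odds=1:1  -> PPV = {ppv(1.0, 0.80):.2f}")   # ~0.94

# A small, underpowered study chasing a long-shot hypothesis:
print(f"power=0.20, odds=1:10 -> PPV = {ppv(0.1, 0.20):.2f}")   # ~0.29
```

Under these assumed numbers, the underpowered long-shot study is wrong far more often than it is right, even before any bias enters the picture.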
But some problems are not as obvious. For example, too much flexibility in defining or analyzing outcomes can transform negative results into positive ones. And it’s a good idea to question graded scales that researchers invent themselves. Of course, fuzzy definitions are less of an issue where the outcome is inarguable (such as death).
There are similar difficulties in new fields where analysis techniques are still being hammered out. Researchers can be tempted to report only their “best” results rather than a sea of data that is inconclusive. Ioannidis says there’s evidence that researchers manipulate outcomes and report them selectively, even in randomized trials.
Two of Ioannidis’ conclusions are interesting because they have implications beyond medicine. He says the greater the financial interests and prejudices in a scientific field, the less likely its research findings are to be true. Similarly, hot fields chased by numerous scientific teams are less likely to yield correct research findings.
His explanation for these effects is that prestigious investigators can have biases of their own that color their judgment. For example, they can have financial or other interests that lead them to use peer review to shoot down findings that refute their own. And there can be a bandwagon effect when many teams of investigators pursue the same field. The imperative is to publish ahead of the competition, with a priority put on disseminating the most impressive “positive” results. So researchers can be predisposed to confirm the initial idea rather than to find the truth.
Clearly, hot fields are not confined just to medical research. Two other areas that immediately come to mind are global warming and alternative energy. Cheerleaders on both sides of these issues tend to haul out peer-reviewed research findings and wave them at each other as though warding off vampires with a crucifix.
Those tempted to use peer-reviewed research this way would do well to realize the scientific ground on which they are standing may not be as solid as they think. As Ioannidis has said, “It is impossible to know with 100% certainty what the truth is in any research question.”
Leland Teschler, Editor