Some of the connections that doctors and researchers hold so dear – links between certain biomarkers, like gene variations, and the diseases they are thought to predict – may not be so strong after all. According to a large new review, many of these connections no longer hold water, meaning that doctors will need to adjust the recommendations they make based on patients' lab and blood-test results.

Effects Not as Strong as Believed... or Reported

Researchers reviewed 35 of the most widely cited studies in medicine, which had been published in some of the top journals. They found that fewer than half of the connections between specific biomarkers, like genes and levels of blood proteins, and certain diseases were supported by later follow-up studies. "We found that a large majority of these highly cited papers suggested substantially stronger effects than that found in the largest study of the same markers and outcomes," said author John Ioannidis.

This means that in many cases, the research community is holding on to obsolete findings, just because they were the first or perhaps the splashiest studies on a particular subject. This is ironic, because researchers know that replication, or repeatability, is one of the central tenets of science – that is, a finding must be supported by follow-up studies to really hold weight.

Ioannidis says that the phenomenon he reported doesn't necessarily indicate deception on the part of the original researchers who published the inflated findings: "No research finding has no uncertainty; there are always fluctuations. This is not fraud or poor study design, it’s just statistical expectation. Some results will be stronger, some will be weaker. But scientific journals and researchers like to publish big associations."

Once a finding is published, especially if it is pioneering and appears in a major journal, it tends to be cited by author after author (not to mention the media) in a kind of snowball effect. This makes the original finding appear as gospel, even if it is not backed up by later studies.

Some Results May Be Due to Chance

Sometimes a connection may be reported even though the results are really just due to chance: suppose you flip a coin ten times with your left hand. Occasionally (about once in every 1,024 tries) all ten flips will come up heads. This might lead you to conclude that there's a strong connection between left-handed coin-flipping and the coin landing on heads, when really it's just luck. The same thing can happen in science, which is why more studies (i.e., more coin-flipping) must be done to either confirm or invalidate the original one.
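To see how such chance findings arise, push the coin-flip analogy one step further: when researchers screen many candidate biomarkers at once, some will look impressive by luck alone, and the "winner" tends to shrink when retested. Below is a minimal, purely illustrative Python sketch of that effect; the marker counts and sample sizes are invented, and every marker is deliberately simulated with no real link to disease.

```python
import random

random.seed(42)

N_MARKERS = 1000      # hypothetical candidate biomarkers, none truly linked to disease
N_SUBJECTS = 50       # small "discovery" study per group
N_REPLICATION = 5000  # large follow-up study per group

def observed_effect(n):
    """Mean difference between cases and controls drawn from the SAME distribution.

    Any nonzero value is pure sampling noise, since there is no real effect.
    """
    cases = [random.gauss(0, 1) for _ in range(n)]
    controls = [random.gauss(0, 1) for _ in range(n)]
    return sum(cases) / n - sum(controls) / n

# Screen many markers in a small study and keep the most impressive one.
discovery = [observed_effect(N_SUBJECTS) for _ in range(N_MARKERS)]
best = max(discovery, key=abs)
print(f"most extreme effect among {N_MARKERS} null markers: {best:+.3f}")

# Re-measure that single "winning" marker in a much larger study.
replication = observed_effect(N_REPLICATION)
print(f"same marker in a large replication study:          {replication:+.3f}")
```

Run it and the most extreme "discovery" effect is typically sizable, while the same marker's effect in the large replication study hovers near zero – the statistical expectation Ioannidis describes, with no fraud required.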

Sometimes the explanation is not so innocent. Ioannidis says that "[r]esearchers tend to play with their data sets, and to analyze them in creative ways. We’re certainly not pointing out any one investigator with this study; it’s just the societal norm of science to operate in that fashion. But we need to follow the scientific method through to the end and demand replication and verification of results before accepting them as fact."

Last year, the prestigious journal that published the original study sparking the autism-vaccine scare retracted the paper; ten of its 13 authors stood behind the retraction. In this case, a flimsy (and likely nonexistent) connection was implied in an article, amplified by the media and the public, and ultimately blown out of the water by the scientific community, which scrambled – without success – to replicate the findings in follow-up studies. After 12 years and little evidence of an actual link between autism and vaccines, the paper was finally retracted – but many people still believe in the connection. In the same way, it may take a good amount of time before we are able to let go of the connections between biomarkers and disease that we've come to understand as fact.

Ioannidis suggests establishing an ongoing way to track new research in a particular area. As each new result comes in on a specific marker and its disease, it could feed into a larger analysis, so that the connection could be evaluated as it develops. Time will tell how the scientific community adjusts to these findings. In the meantime, doctors will need to be more careful about how they interpret a patient's test results, and refer to the latest and largest – not merely the most celebrated – studies on a particular subject.
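Ioannidis's tracking idea resembles what statisticians call a cumulative meta-analysis, in which the pooled estimate is updated as each new study arrives. The Python sketch below shows one standard way the mechanics could work – fixed-effect inverse-variance pooling – though this is an assumption about the method, not a description of his specific proposal, and the study numbers are hypothetical.

```python
def cumulative_meta_analysis(studies):
    """Fixed-effect inverse-variance pooling, updated as each study arrives.

    `studies` is a list of (effect_estimate, standard_error) tuples,
    in publication order. Yields the running pooled estimate and its SE.
    """
    weight_sum = 0.0
    weighted_effect_sum = 0.0
    for effect, se in studies:
        weight = 1.0 / (se * se)  # more precise studies count for more
        weight_sum += weight
        weighted_effect_sum += weight * effect
        pooled = weighted_effect_sum / weight_sum
        pooled_se = (1.0 / weight_sum) ** 0.5
        yield pooled, pooled_se

# Hypothetical numbers: a splashy early study followed by larger, near-null ones.
studies = [(0.80, 0.30), (0.25, 0.15), (0.10, 0.08), (0.05, 0.05)]
for i, (pooled, se) in enumerate(cumulative_meta_analysis(studies), start=1):
    print(f"after study {i}: pooled effect = {pooled:+.2f} (SE {se:.2f})")
```

With numbers like these – a big early result followed by larger, more precise studies – the pooled effect drifts steadily toward zero, the kind of shrinkage the review documents.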

Ioannidis is a researcher at Stanford University; the results are published in the June 2011 issue of the Journal of the American Medical Association.