Science is chasing truth, with an emphasis on the chase. Yet a common assumption holds that when a scientific study is published, its results are true, and that only on the rarest occasions do false findings appear in print.

But a new analysis of the probability that published research findings are true suggests that we may all be deceiving ourselves: most published research claims are instead false.

“We have to acknowledge that there is a problem,” says John Ioannidis, from the University of Ioannina in Greece, who recently undertook the analysis.

Using a mathematical model that doesn’t look at particular findings, but instead estimates the probability that any given result is true, Ioannidis argues in the August issue of PLoS Medicine that a published research claim is more likely to be false than true. This, he says, is due to the effect of many studies having small sample sizes, investigating effects with a tiny overall impact, researcher biases such as financial interests, and the rush to publish in some ‘hot’ fields.
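The core of the model is the positive predictive value (PPV) of a claimed finding: the chance that a reported effect is real, given the field’s Type I error rate α, Type II error rate β (one minus statistical power), and the pre-study odds R that a tested relationship is true. A minimal sketch of that calculation follows; the example numbers are illustrative, not taken from the paper’s own scenarios:

```python
def ppv(alpha: float, beta: float, r: float) -> float:
    """Positive predictive value of a claimed research finding.

    alpha: Type I error rate (chance of a false positive)
    beta:  Type II error rate (1 - statistical power)
    r:     pre-study odds that a tested relationship is true
    """
    true_positives = (1 - beta) * r   # real effects correctly detected
    false_positives = alpha           # null effects wrongly "detected"
    return true_positives / (true_positives + false_positives)

# Well-powered study (power 0.8) in a field where 1 in 10 hypotheses is true:
print(round(ppv(alpha=0.05, beta=0.2, r=0.1), 2))   # 0.62

# Underpowered study (power 0.2) testing long-shot hypotheses (1 in 100 true):
print(round(ppv(alpha=0.05, beta=0.8, r=0.01), 2))  # 0.04
```

In the second case, fewer than one claimed finding in twenty would be real even before accounting for bias, which the full model in the paper also incorporates.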

“But this [a false result] is part of the scientific process and it’s misleading to expect scientific research to always come up with highly conclusive conclusions,” says Ioannidis.

“However, many times we see in the scientific literature, and mass media, that scientific discoveries are presented as highly definitive,” says Ioannidis, “and that’s not the case; it’s just the best news we have to date.” As such, the reproducibility of science is a problem in need of attention.

Ioannidis began to think about the truth of research claims while studying biases in medical research. As part of this earlier work, he pored over the replication rates of highly cited epidemiological studies and randomized clinical trials published between 1990 and 2002.

“Five out of six epidemiological studies have been contradicted in a very short period of time,” says Ioannidis, “while about one out of three randomized clinical trials were also refuted.”

But, in reality, what can be done? “One step would be to present science with an estimate of its credibility,” says Ioannidis. “This would be very useful, even if it is low, since other scientists, clinical doctors, and the lay public would be able to better assess it.”

It would also be useful to register a science project before it starts, so that the design of the experiments could be examined for bias. “This may not be wholly feasible, but steps of this sort are already underway for some randomized clinical trials,” says Ioannidis, “although for discovery research, this isn’t really reasonable, since the sense of trying to outwit others is strong for this type of research.”

In an editorial that accompanies Ioannidis’ study, the editors of PLoS Medicine suggest that “one way to do this [avoid the publication of false results] is to delay publication until such a time when the chances that a conclusion is true are sufficiently high.” However, the editors also suggest that a move like this must be balanced to ensure research doesn’t become stagnant.

Science is often modeled as a source of certainty, but as Ioannidis’ study points out, scientists, like most people, are simply in the process of learning.