Publication Bias and Motivated Reasoning

"Physicists do it (Glanz 2000). Psychologists do it (Kruglanski and Webster 1996). Even political scientists do it (cites withheld to protect the guilty among us). Research findings confirming a hypothesis are accepted more or less at face value, but when confronted with contrary evidence, we become “motivated skeptics” (Kunda 1990), mulling over possible reasons for the “failure,” picking apart possible flaws in the study, recoding variables, and only when all the counterarguing fails do we rethink our beliefs. Whether this systematic bias in how scientists deal with evidence is rational or not is debatable, though one negative consequence is that bad theories and weak hypotheses, like prejudices, persist longer than they should." - Taber & Lodge, 2006

Thus begins Taber & Lodge's famous paper on "motivated skepticism," or motivated reasoning: the tendency to attend to, interpret, and seek out information that is congruent with the beliefs and attitudes we already hold, while avoiding information that challenges those attitudes, beliefs, and values.

Motivated reasoning shows up in many domains. Chris Mooney, for example, has written about how it explains why many citizens choose not to believe the science and evidence of climate change: "We apply fight-or-flight reflexes not only to predators, but to data itself," Mooney writes. "In other words, when we think we're reasoning, we may instead be rationalizing" (Mooney, 2011).

Taber and Lodge (2006) were introducing this phenomenon in their research paper when they wrote the paragraph at the top of this post. But I think the example they give, of the bias toward positive research findings in the sciences, is interesting in and of itself. Are scientists and journal editors engaging in motivated reasoning when they fail to accept findings that conflict with previously supported theories and hypotheses? Or is it just 'good science' to demand an accumulation of evidence against theories that have stood the test of time? For example, most scientists aren't going to revise their beliefs about the severity of climate change on the strength of a single published article showing evidence of limited warming over the last century.

The International Council for Science writes: "Unacceptable bias arises when authors ignore data that does not fit a particular point of view (for example, instances of drug side effects), submit only positive results, or only include results that agree with the opinions of an editor or publisher." Many argue that scientific journals should publish negative results more often, as well as corrections and rebuttals.

"...there exists a tendency, mostly at the subconscious level, for an individual to confirm their expectations and the hypotheses they test. Writ large the confirmation bias yields one clear consequence in the excess of reported positive results. It’s not just researchers who are to blame — editors and pharmaceutical companies are also implicated in this pressure for interesting, profitable and positive results at the expense of the much maligned negative." - Open Science

So when do we start accepting evidence against previously accepted theories and hypotheses in science? When a single article refutes previous evidence? Five articles? A dozen? Ten dozen? As scientists trying to refrain from motivated reasoning, are we to treat every study as a small puzzle piece that should somehow fit into the larger picture?

Daniel Sarewitz writes in a recent Nature column, "[...] if biases were random, then multiple studies ought to converge on truth. Evidence is mounting that biases are not random."
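
To see why non-random bias matters, here is a minimal sketch in Python (my own illustration with made-up numbers, not anything from Sarewitz's column): a batch of simulated studies of a small true effect, where only positive, statistically significant results get "published."

```python
# Illustrative simulation (not from any cited source): how a "publish only
# positive, significant results" filter keeps the literature from
# converging on the true effect. All numbers here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.1   # small true standardized effect (hypothetical)
N_PER_GROUP = 30    # sample size per group in each simulated study
N_STUDIES = 2000    # number of simulated studies

all_effects, published_effects = [], []

for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    effect = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)

    all_effects.append(effect)
    # Publication filter: only positive, statistically significant results.
    if effect > 0 and p < 0.05:
        published_effects.append(effect)

print(f"True effect:               {TRUE_EFFECT:.2f}")
print(f"Mean of all studies:       {np.mean(all_effects):.2f}")
print(f"Mean of published studies: {np.mean(published_effects):.2f}")
print(f"Share of studies published: {len(published_effects) / N_STUDIES:.0%}")
```

With these made-up numbers, the average over all simulated studies lands near the true effect, while the average over the "published" subset comes out several times larger, because only the studies that overshot the truth clear the significance bar. Averaging more published studies does not fix this; it just makes the overestimate look more certain.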

In the case of publication bias, perhaps the problem is a process of motivated reasoning (and a wish for interesting stories to tell) on the part of scientists and editors who favor positive over negative results. Who wants to find that their well-founded hypotheses weren't supported in a lengthy and time-consuming experiment? One of my professors puts smiley faces next to the "Hypothesis Supported" outcome when teaching statistics and hypothesis testing in class. I guess negative results are bad (frown face).

Sarewitz writes: "The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve."

Should PhD students be warned against favoring positive findings in their experiments? Should scientists pitch more negative results to the editors of scientific journals? Should we all be more accepting of research findings that counter long-held theories and hypotheses?

In the end, how can science "self-correct" if we only pay attention to and publish positive findings, and if we favor findings that corroborate previous research? What is the difference between being more confident in a particular theory as confirming evidence accumulates, and being motivated reasoners?
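
One way to make that distinction concrete is a toy updating model (my own framing, not from Taber and Lodge or anyone else cited here): an even-handed updater and a "motivated" updater see the same run of studies, but the motivated one discounts the studies that cut against the favored hypothesis.

```python
# Toy contrast (my own framing, not from the cited papers): an even-handed
# Bayesian updater vs. a "motivated" updater who discounts contrary studies.
import numpy as np

rng = np.random.default_rng(1)

def update(prior, supports, discount=1.0):
    """Update P(hypothesis) on one study.
    supports -- True if the study favors the hypothesis
    discount -- weight given to contrary studies (1.0 = even-handed)
    """
    # Hypothetical diagnosticity: a supportive result is 3x more likely
    # if the hypothesis is true than if it is false.
    lr = 3.0 if supports else (1 / 3.0) ** discount
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# Suppose the hypothesis is actually false, so only about a quarter
# of the simulated studies come out supportive.
studies = rng.random(40) < 0.25

bayesian = motivated = 0.9   # both start out fairly confident
for s in studies:
    bayesian = update(bayesian, s, discount=1.0)    # weighs all studies equally
    motivated = update(motivated, s, discount=0.2)  # shrugs off contrary studies

print(f"Even-handed updater ends at P = {bayesian:.2f}")
print(f"Motivated updater ends at   P = {motivated:.2f}")
```

In this toy setup, the even-handed updater walks away from the (false) hypothesis as contrary studies pile up, while the motivated updater ends up even more confident than it started, despite seeing exactly the same forty studies. Growing confidence as confirming evidence accumulates is only warranted if the same rule would also let contrary evidence pull our beliefs the other way.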

At the very least, contrary evidence and negative results should reach our eyes through scientific journals, or we are all motivated reasoners without even trying.