Open up the science section of a newspaper and you will come across an article on the alleged reproducibility crisis in science [1,2]. According to a recent survey [3], the vast majority of researchers have tried and failed to reproduce another scientist’s experiments. Had I known that a couple of years ago, I might not have gone through my own reproducibility crisis. I had a relatively large dataset, but was not able to replicate a seemingly well-established finding. After finally realizing this non-significant result was neither a personal failure nor a mistake, I had to move heaven and earth to get it published. There were times when I just wanted to shove the article back into my file drawer, but I couldn’t: I did the work and believed in its veracity.
You get what you reward
Trying to publish a negative replication study seems to be one of the most irrational strategies a researcher can follow under current incentive structures in science. Research by Marcus Munafò and Andy Higginson suggests that it is much more rewarding for a modern-day researcher to conduct a series of exploratory, underpowered studies in the search for novelty [4]. This will lead to a better publication record and, hence, career success. But what might serve the individual scientist doesn’t serve science as a whole. The scientific record will retain many false findings if scientists are rewarded far more for publishing novel findings than for (non)replications.
Another way incentive structures can shape scientific discoveries is nicely illustrated by Bill Bryson in A Short History of Nearly Everything. The book describes the impressive finds of early-human bones by palaeontologists at the turn of the twentieth century. Then, in a short but telling digression, Bryson describes how these discoveries could have been even more impressive if not for a “tactical error”.
One of the explorers, Ralph von Koenigswald, had “offered locals ten cents for every piece of hominid bone they could come up with, then discovered to his horror that they had been enthusiastically smashing large pieces into small ones to maximize their income.”
I can imagine how future historians will likewise shake their heads in disbelief when they study our present ‘publish or perish’ culture. Even now, the tactical error seems pretty clear: the overemphasis on the size of an individual’s publication record stimulates practices that undermine the scientific value of research. Consider, for instance, the practice of salami slicing [5]: why publish one comprehensive article when you can split your study results up and publish them in multiple smaller articles?
Thinking about science
Fortunately, more and more researchers are thinking about how the behaviour of individual scientists is, consciously or unconsciously, shaped by the way science is organized. One of these metascientists is British psychologist Marcus Munafò, whom I recently had the chance to interview for the BCN newsletter. In the interview, Marcus says: “People who are interested in metascience are looking at the way science functions and asking: can we do better?” He suggests, for instance, moving toward a more diverse system of rewarding outputs and incentivizing open science practices.
Meeting Marcus made me think we should all occasionally take a step back from our own research and become metascientists, to better understand the consequences of our current practices and to think about how we could ourselves be the change we would like to see in science. In the end, that is what my miniature reproducibility crisis did for me: it made me stop and think, not only about my particular field of research, but also about science as a whole.