“Too much of what is published is either wrong or trivial!” I still vividly remember this admonition by the eminent mathematician Shizuo Kakutani, delivered to a class on functional analysis at Yale University. My eyes were wide open: I had just entered graduate school, and it was the fall of 1981.
Dr. Ariel Fernandez in his Yale days.
How much out there can you trust as a scientist? How much of the published scientific record can withstand the test of time or should be swiftly withdrawn? What percentage of your own scientific claims is likely to be debunked during your lifetime? These are looming questions that often haunt researchers early in their careers.
Reprogramming differentiated cells into stem cells by adding acid and a few other chemicals (stimulus-triggered acquisition of pluripotency, or STAP) seemed almost too good to be true. Sure enough, STAP didn’t fly for long. Nature, the journal where the striking discoveries were announced, went to considerable lengths to spot potential issues after the results met skepticism right from the get-go. It was quickly realized that STAP cell research would not pass the acid test of science.
The more recent claims about the astronomical signature of esoteric entities such as cosmic inflation and gravitational waves seemed like blockbusters in their own right, only to be reinterpreted as artifacts due to cosmic dust. The scientists who had heralded the discoveries as crucial to unveiling the mysterious Big Bang ripples and to corroborating Einstein’s predictions were soon trying to distance themselves from those claims.
An alarming proportion of the research that people have tried to replicate has proven irreproducible. And research results are sometimes genuinely difficult to reproduce. I still remember the work by one of the coworkers of Nobel laureate Manfred Eigen at the Max Planck Institute in Goettingen, Germany, which had elicited skepticism because it seemed to contravene the central dogma of molecular biology (a postulate about the directionality of the flow of biological information). The results, published in the journal Nature, seemed to support a flow of information from protein to RNA, counter to the established RNA-to-protein directionality. As it turns out, the biological material that had to be handled to reproduce those experiments was so difficult to work with that the researcher in question and his technician, both skillful experimentalists, had to be flown (often across the Atlantic) to whatever lab was claiming irreproducibility in order to perform the experiment in situ. It worked every time!
How thoroughly should you check your claims? If the news is truly spectacular, many labs will surely try to reproduce the work and, some would say, if you don’t have great news to convey, why bother doing the work in the first place? I suspect that the vast majority of mistakes are simply never spotted or withdrawn. Faulty research comes to light only when there is enough incentive for other scientists to try to replicate it; the vast majority of research findings are simply never tested by other groups.
Again, how much out there is likely to be faulty? As it turns out, we now have a priori estimates of such a figure, and they paint a disturbing picture. In 2005, John Ioannidis, a professor of medicine at Stanford University, published a paper entitled “Why most published research findings are false.” Ioannidis used statistical arguments and a set of common-sense assumptions to infer the likelihood that a given scientific paper contains false results. His conclusions are based on rigorous mathematics and statistics and are extremely troubling. Under a great variety of conditions, prejudices and interests, most scientific claims are false, and they are overwhelmingly likely to represent “measures of prevailing biases” rather than pristine novel truths.
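The core of Ioannidis’s argument can be sketched with a single quantity: the positive predictive value (PPV), the share of statistically significant findings that reflect true relationships. It depends on the prior odds R that a probed relationship is real, the false-positive rate α, and the false-negative rate β. The sketch below is a minimal illustration of that arithmetic, not a reproduction of the paper’s full analysis (which also models bias and multiple competing teams); the particular values of R, α and β are illustrative assumptions.

```python
def ppv(R: float, alpha: float = 0.05, beta: float = 0.2) -> float:
    """Positive predictive value of a 'significant' finding.

    R     -- prior odds that a tested relationship is true
    alpha -- type I error rate (false positives)
    beta  -- type II error rate (power is 1 - beta)
    """
    # True positives among tested claims scale with (1 - beta) * R;
    # false positives scale with alpha * 1 (the untrue relationships).
    return (1 - beta) * R / ((1 - beta) * R + alpha)

# Well-powered confirmatory work with favorable prior odds of 1:4 —
# most significant findings are true:
print(f"{ppv(0.25):.2f}")

# Exploratory screening with prior odds of 1:1000 —
# the overwhelming majority of significant findings are false:
print(f"{ppv(0.001):.3f}")
```

With conventional thresholds (α = 0.05, 80% power), the formula already shows that once the prior odds of a hypothesis being true are long, significance testing alone cannot rescue the literature: most positive results are false, exactly as the paper argues.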
With regard to the financial interests that may be at play in this bleak picture, it is of course tempting to assume that pharmaceutical companies would tend to publish results that highlight the therapeutic efficacy of their products and cite supporting studies, while sweeping those that raise doubts under the rug. Hopefully, this is not the case.
This crisis of trust demands that we regard the scientific process in a different light, not as an infallible series of hits but rather as a modest progression of provisional attempts at approaching truth. Strangely, this takes us back to the way Newton regarded science: truth lies in front of our eyes like a vast ocean, and we would be lucky to find a few pebbles on the beach.
The crisis of trust is not going to be resolved by beefing up journals with peer reviewers or by opening the gates of post-publication critique in blogs, where anybody is entitled to his or her neurosis. Science arrives at trustworthy knowledge only through a long and convoluted curation sometimes referred to as “the test of time.” If a good result is out there, it will pass the test and will sooner or later be incorporated into the real corpus, the one that matters. As any scientist worth his or her salt knows, the body of knowledge is far smaller than the published record.
Ariel Fernandez held the Karl F. Hasselmann Chaired Professorship in Bioengineering at Rice University until his retirement in 2011.