Saturday 11 April 2015

Science fraud

The suicide of a stem cell researcher in Japan last summer prompted a great deal of soul searching in science. Yoshiki Sasai’s death came after a scandal involving two papers retracted for fraud — the most high-profile case of scientific misconduct in 2014. But it was far from the only one.
Serious questions were also raised about stem cell research by Harvard’s Piero Anversa. We learned more about Cory Toth, a former diabetes researcher at the University of Calgary, whose lab fabricated data in nine published articles. And we saw the discovery of an apparent ring set up to generate positive peer reviews of submitted manuscripts, 60 of which wound up being retracted.
It might seem, then, that 2014 was an annus horribilis in the world of science fraud. For many in the public, which pays for much of this research in tax dollars, news of these events may have come as a rude awakening. But at Retraction Watch, when we see and hear that kind of commentary, we feel a little like the police captain in Casablanca who proclaims he’s “shocked, shocked!” to learn there is gambling at Rick’s, only to be handed his winnings a moment later.
We started Retraction Watch in 2010, and every year since then, we’ve witnessed at least a few cases big enough to warrant headlines: anesthesiologist Yoshitaka Fujii, record holder for retractions at 183; Diederik Stapel, whose seemingly groundbreaking social psychology work was almost entirely fabricated; Joachim Boldt, the German critical-care specialist and previous retraction record holder. The list goes on.
So what can we learn from all these scandals? Are scholarly journals and their editors, who insist that their peer-reviewed studies are more trustworthy than everything else the public hears about science, little better than carnival barkers hawking bogus trinkets? 
The short answer is no. Journals and publishers are, for the most part, doing a good job. They increasingly use software to screen manuscripts for plagiarism, and some even employ statistics experts to review papers for signs of data fabrication. In the Fujii case, for example, the British journal Anaesthesia had a stats guru analyze Fujii’s articles. His verdict: The chances that the data were valid were infinitesimal.
It’s impractical to apply this sort of extensive scrutiny to every one of the nearly 2 million manuscripts submitted each year. But conducting statistical reviews of papers that get flagged during the editorial process — or, perhaps more important, after publication — is an achievable goal, one that would make a significant contribution to the integrity of the scientific literature. 
In fact, post-publication peer review is an emerging phenomenon in scholarly publishing. On sites like PubPeer, researchers critique papers, pointing out everything from errors and other problem spots to potentially manipulated images and other evidence of misconduct. One of the reasons there seems to be more fraud is simply that we’re better at finding it.
Many scientists, journal editors and publishers have reacted warily to PubPeer and its ilk. Some contend that the anonymity of the post-publication reviewers breeds witch hunts and harms innocent bystanders. But the sites are doing a service, catching the horses even though they have already left the barn. Small but growing efforts have also begun to test whether research holds up by repeating (in scientific parlance, replicating) experiments in cancer and psychology research.
All of these efforts underscore a critical point, one that science may need some time to embrace: The paper is not sacrosanct. It does not come into the world like a flawless, shining deity, immune to criticism. If more scientists come to think of a new publication as a larval stage of scientific knowledge, and if fewer schools and funding agencies prize the high-profile journal article, basing tenure, grants and promotions on it, then researchers will feel less pressure to cut corners and manufacture dramatic results.
Reporting on cases in which scientists have committed fraud can be disheartening, even heartbreaking. But for every fraudster out there, we know there are dozens of scientists who are quick to correct the record when they discover problems in their work. And they rail against the reluctance of many of their peers to do the same. Sadly, many scientists are worried that acknowledging any fraud in their midst will discourage funding.
We are guided by the old chestnut: The cover-up is worse than the crime. If growing awareness of an ongoing problem leads to more transparency, both the scientific process and the public that benefits from the knowledge it generates will be better off.
[This article originally appeared in print as "The Year in Fraud."]
