The first results of a long-awaited series of experiments to test the reliability of key cancer research findings have exposed a “potentially serious shortcoming” in efforts to reproduce study findings, according to editors, raising questions about how scientists should test their conclusions.
Of the first five studies in the Reproducibility Project: Cancer Biology, launched in the wake of the failure of two big drug companies to reproduce high-profile cancer findings in 2011 and 2012, two were judged to have “broadly” supported earlier findings; one failed to do so; and two were “inconclusive due to technical problems with certain key experiments”, according to editors at the journal eLife, where the results are being published.
These are only the first batch of 29 replication attempts, so it is too early to draw any broader conclusions about the reproducibility of cancer research.
But the failure of two of five experiments to produce a result either way highlights one of the weaknesses of the strict method the project is using, according to an editorial in the journal.
Sean Morrison, a professor at the University of Texas Southwestern and reviewing editor for the studies, explained that these two experiments involved transferring tumours into mice, which then grew much more quickly or slowly than in the original experiments, making a comparison difficult. This kind of experiment is “notoriously volatile”, he told Times Higher Education.
In a normal lab setting, scientists might tweak the experiment to get an interpretable result. But because the project uses registered reports – peer-reviewed schedules that set out exactly how an experiment will be done so it replicates the original as closely as possible – researchers’ hands are tied.
Chris Chambers, head of brain stimulation at Cardiff University and chair of the registered reports committee at the Centre for Open Science, one of the project partners, said that “replication attempts will sometimes produce uninterpretable results, but the only way science can advance is by making such attempts”.
Following the registered report process was “vital”, he said, because it eliminated bias in experimentation, and the “polishing” and “burial” of inconvenient results.
But Professor Morrison argued that following registered reports was only one way of testing whether results are sound. “There are many, many other studies going on in cancer biology testing the reliability of these results,” he said. These latest studies were “just one data point”, he added.
“This project wasn’t designed to be the final word on the reproducibility of certain studies,” he said. Instead, the aim is to provide an aggregate view of what proportion of cancer studies are reproducible, he explained.
Ottoline Leyser, professor of plant development at the University of Cambridge, said that the “value of these kinds of studies is to provide a clear route for funding and publication of repeat experiments, which is not easy with the current reward structures in science”.
Even if all tested studies are sound, they will not all replicate, as the current experiments “are typically powered to have an 80 per cent probability of reproducing something that is true”, the eLife editorial cautioned.
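To illustrate what 80 per cent power implies, the sketch below works through the arithmetic under simplifying assumptions not stated in the editorial: that all 29 planned replications test findings that are in fact true, that each is a single independent test, and that power is exactly 80 per cent in every case.

```python
# Illustrative sketch only: assumes each of the 29 planned replications is an
# independent test of a true finding, with an 80 per cent chance of
# successfully reproducing it (i.e. 80 per cent statistical power).

n_studies = 29   # replication attempts planned by the project
power = 0.80     # assumed probability of reproducing a true finding

expected_successes = n_studies * power        # about 23 would be expected to replicate
expected_failures = n_studies * (1 - power)   # about 6 would fail even though the originals are sound

# Chance that at least one true finding fails to replicate purely by chance
p_at_least_one_failure = 1 - power ** n_studies

print(f"Expected successful replications:  {expected_successes:.1f}")
print(f"Expected failed replications:      {expected_failures:.1f}")
print(f"P(at least one failure by chance): {p_at_least_one_failure:.4f}")
```

Under these assumptions, roughly one in five replications would be expected to fail even if every original result were correct, which is why a handful of non-replications cannot, on their own, be read as evidence that the underlying studies are flawed.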