Two new studies have further highlighted problems of standards and reproducibility in science as academics face increasing scrutiny over whether they are eliminating bias during their research.
The first paper, published in PLOS Biology, looked at experiments involving animals and found that scientists were rarely using techniques to cut down on bias, such as randomising which animals are assigned to different groups, or blinding themselves to this division.
Malcolm MacLeod, professor of neurology and translational neuroscience at the University of Edinburgh and one of the study authors, said that if these kinds of measures were not taken during the experiment, this could lead scientists to “overstate” the impact of a treatment.
Over time, studies had got better at randomisation, but still only 30 per cent of them did so, Professor MacLeod said at a briefing in London on the findings on 12 October. Conflict of interest reporting had also improved significantly in recent years, he said.
The study also found that randomisation occurred “significantly” less in high-impact journals. This was a reversal of the situation in 1992, when papers in such supposedly top-quality journals had been more likely to randomise. “Things are getting worse,” Professor MacLeod said.
In addition to blinding and randomisation, the study also looked at whether researchers calculated the statistical power of their experiments, and whether they reported any conflicts of interest.
Even in papers produced at universities ranked highly for research performance, just 15 per cent randomised their experiments, and only one in 50 included power calculations – in other words, almost none explained why they used the number of animals they did, Professor MacLeod said.
“Studies could be better than they are,” he said, and added that the problem was more widespread than just animal research.
The paper, ", was released yesterday.
The second new paper found that researcher biases led to a 45 per cent overestimation of the effectiveness of treatments in studies that looked at tumour reduction in animals.
Responding to the findings, Chris Chambers, professor of cognitive neuroscience at Cardiff University, said: “Once again we are faced with an area of biomedical research suffering from poor research practices, this time in preclinical cancer studies. Preclinical animal research is vital for producing effective medications, which is why the quality of research in this field is paramount.”
“The problem is that guidelines alone will never fix these problems. Scientists already know how to do better science; they simply have no reason to do it,” he argued.
“We need to radically alter the incentive structure of academia, dumping a system that rewards scientists for doing the minimum necessary to generate publishable results in favour of one that rewards doing everything reasonable to produce reliable science. A major part of this solution is to decide which scientific studies should be published before the results of studies even exist.”
"A meta-analysis of threats to valid clinical inference in preclinical research of sunitinib" appeared in the journal eLife yesterday.?