Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud.
Referring to John Ioannidis’ famous 2005 paper, Dr Smith said “most of what is published in journals is just plain wrong or nonsense”. He added that an experiment carried out during his time at the BMJ had seen eight errors introduced into a 600-word paper that was sent out to 300 reviewers.
“No one found more than five [errors]; the median was two and 20 per cent didn’t spot any,” he said. “If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.”
He added that peer review was too slow, expensive and burdensome on reviewers’ time. It was also biased against innovative papers and was open to abuse by the unscrupulous. He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.
“That is the real peer review: not all these silly processes that go on before and immediately after publication,” he said.
Opposing him, Georgina Mace, professor of biodiversity and ecosystems at University College London, conceded that peer review was “under pressure” owing to constraints on reviewers’ time and the use of publications to assess researchers and funding proposals. But she said there was no evidence of a lack of efficacy of peer review because there was no “counterfactual against which to tension” it.
“It is no good just finding particular instances where peer review has failed because I can point you to specific instances where peer review has been very successful,” she said.
She feared that abandoning peer review would make scientific literature no more reliable than the blogosphere, consisting of an unnavigable mass of articles, most of which were “wrong or misleading”.
It seemed to her that the “limiting factor” on effective peer review was the availability of good reviewers, and more attention needed to be paid to increasing the supply. She suggested that the non-reproducibility of many papers was a problem much more common in biomedicine than elsewhere.
But Dr Smith said biomedical researchers were only more outspoken about the problems because “we are the people who have gathered the evidence” of them. He said peer review persisted because of “huge vested interests”, and admitted that scrapping it was “just too bold a step” for a journal editor currently to take.
“But that doesn’t mean [doing so would be] wrong… It is time to slaughter the sacred cow,” he said.
Meanwhile, science publisher Jan Velterop said peer review should be carried out entirely by the academy, with publishers limited to producing technically perfect, machine-readable papers for a “much, much lower” fee than typical open access charges.
He said that, for many papers, the most appropriate approach would be for authors to seek the endorsement of a number of experts, on the basis of which the papers would be submitted to journals.
When he had approached publishers about the idea, they had typically accused him of “asking us to find the quickest way to the slaughterhouse”. But the ScienceOpen platform was to offer publication by endorsement as an option, for a fee to be determined following consultation.