The research selectivity exercise presumes that the best papers get published in the best journals. The evidence suggests that, on the contrary, the refereeing process discriminates against the best papers, and that many of the papers in the best journals are rubbish.
It has been argued that if the refereeing system is 99 per cent accurate, then 90 per cent of what is published will be rubbish. If an Einstein writes a paper, there is a 1 per cent chance of it being wrongly rejected. There is also a 1 per cent chance of a really bad paper being accepted. Since there are 1,000 bad papers submitted for every one by an Einstein, about ten bad papers are accepted for each good one that appears, so roughly 90 per cent of what is published is bad. At 67 per cent accuracy, there would be 500 bad papers published for every good one.
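That arithmetic can be checked on the back of an envelope. The short sketch below (in Python, purely for illustration; the 1,000-to-1 ratio and the error rates are the assumptions of the argument above, not measured figures) reproduces both numbers.

    # Back-of-envelope check of the argument above. Assumptions, taken
    # from the argument rather than from data: 1,000 bad papers are
    # submitted for every good one, and referees misjudge good and bad
    # papers at the same rate.
    def published_mix(bad_per_good=1000, accuracy=0.99):
        error = 1 - accuracy
        good_published = 1 * accuracy          # good papers correctly accepted
        bad_published = bad_per_good * error   # bad papers wrongly accepted
        share_bad = bad_published / (good_published + bad_published)
        return bad_published / good_published, share_bad

    print(published_mix(accuracy=0.99))  # about 10 bad per good: ~91% of what is published is bad
    print(published_mix(accuracy=0.67))  # about 493 bad per good: roughly 500 to 1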
Unfortunately the refereeing system is nothing like 99 per cent accurate. The evidence suggests that referees will usually agree that some papers are very bad indeed, and that they will usually reject a very good paper. For the rest, the judgement is random. For example, Ingelfinger examined the performance of referees for the New England Journal of Medicine. Five hundred papers were each refereed by two people. The two referees agreed only slightly more often than could be expected by chance. Since a quarter of the papers were considered bad by both reviewers, this suggests complete randomness on the papers that were not obviously bad. Indeed, in 10 per cent of cases one reviewer rated a paper "A" while the other rated it "D". Similar studies of papers submitted to biomedical and social science journals have shown agreement only slightly better than random.
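What "only slightly better than chance" means can be made concrete. The sketch below uses invented rejection and agreement rates, not Ingelfinger's data, simply to show how chance agreement is calculated and why modest raw agreement tells us little.

    # Hypothetical illustration: how much two referees would agree purely
    # by chance. The rates below are invented for the example.
    def chance_agreement(reject_rate_a, reject_rate_b):
        # Probability that two independent referees happen to give the
        # same verdict (both reject or both accept).
        return reject_rate_a * reject_rate_b + (1 - reject_rate_a) * (1 - reject_rate_b)

    def kappa(observed, expected):
        # Cohen's kappa: observed agreement relative to the chance baseline.
        return (observed - expected) / (1 - expected)

    chance = chance_agreement(0.5, 0.5)  # two referees each rejecting half at random
    print(chance)                        # 0.5: they agree on half the papers by accident
    print(kappa(0.55, chance))           # 0.1: 55% raw agreement is barely better than chance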
There is much evidence that the best papers are more likely to be rejected. Current Contents ran some articles by the authors of the most cited papers in the physical and biological sciences - those cited more than 1,000 times in ten years. A typical complaint from these authors was: "I had more difficulty in getting this published than anything else I have written." Some of the more prolific authors in economics and statistics have found the same: it is easy to place a routine paper, but difficult to place an original, important or controversial one. I know of a case where one journal rejected a paper as rubbish, while another, of higher status, accepted it as "the most important paper ever published in this journal".
Good papers present new theory or refute old theory, give new insights, or present unexpected results. Referees who pass these must be open to new ideas: they must admit both that they have been wrong and that they have missed what now appear to be obvious errors. This is a threat to their self-esteem, and subconscious blocking mechanisms come into play.
Experiments show that referees are more likely to accept papers which support their own beliefs. A psychology journal sent referees papers with identical method, analysis and discussion sections but different results; the papers were more likely to be accepted when the results supported the referee's own theories.
Similarly, papers with inconclusive or negative results are more often rejected. Some journals have editors and referees drawn from half a dozen universities, who seem uncomfortable with the paradigms and analysis used elsewhere, and who seldom accept papers from other universities. The American Economic Review, the American Sociological Review and the Quarterly Journal of Economics have been accused of this.
A paper is likely to be rejected if its methodological approach is not the same as the referees'. There are ivory-towered theoreticians who will reject a paper as "eclectic" if it makes enough assumptions to approximate to reality, or as "anecdotal" if it cites empirical data. A paper which attacks and refutes the paradigm of the journal is almost certain to be rejected. I have several times had such papers rejected by the specialist journals, only to have them accepted by more prestigious general journals.
Referees also reject papers because their own work is not cited: I had a paper on the economics of quality criticised by a referee because I had not cited his book on economic philosophy - a book that had not been published when the paper was written.
This unconscious, and sometimes conscious, bias has always had a big effect, but the pressures are now changing. The research selectivity exercise and the casualisation of academic employment have introduced a new moral hazard. Every referee has a strong personal interest in whether a paper is published. If someone else publishes papers which challenge the paradigm, or publishes a string of good papers, the referee's career suffers: research income and research ratings may be lost, and with them promotion and possibly the job itself. The effect is exacerbated by journals which, striving to be among the elite, use more referees than their competitors. The more referees there are, the more likely it is that one of them will reject a paper on specious grounds. Some of the best journals, such as Nature, achieve their eminence by minimising refereeing. Most journals omit the very best and the very good and print a random selection of the rest.
The research selectivity exercise commits the error a social scientist is warned against from the cradle: do not think that just because something can be measured, it is meaningful.
Dr Peter Bowbrick is an economics consultant.