The big grants, the big papers: are we missing something?

<ÁñÁ«ÊÓƵ class="standfirst">A perverse focus on research cash and high-impact publications threatens academics¡¯ careers and the aims of science itself, says Dorothy Bishop
January 15, 2015


You're applying for an academic job in the UK. What factors do you think are most likely to decide whether or not you'll get it? Your reputation among colleagues, your ability to teach, your collegiality, your creativity? In most research-intensive institutions, two factors will trump all of the above: publications and research income. Since the job of a researcher is to do research and publish it, these may seem reasonable criteria, but they have become problematic both in how they are interpreted and in the way they have come to eclipse everything else.

Looking first at publications, it has long been recognised that the number of publications is not a useful way of assessing researchers. In the research excellence framework and its predecessors, individuals were allowed to submit up to four publications – a move that should encourage a focus on producing a smaller number of meaty publications rather than disgorging findings in tiny gobbets. Many funders and some employers take a similar approach: applicants for grants or jobs are asked to nominate a specified number of publications rather than all publications – thereby disadvantaging those who adopt a "never mind the quality, feel the width" approach.

So we need to evaluate quality rather than quantity, but how? Many employers say they want applicants with publications in "high-impact journals", but this approach is fraught with problems. These are well articulated in the San Francisco Declaration on Research Assessment, a document that has been signed by many august individuals and institutions, including the Higher Education Funding Council for England, the Royal Society and the British Academy. Nevertheless, the belief in journal impact factors remains entrenched in many places, leading researchers to waste months if not years in a demoralising cycle of submission to journals with ever-declining impact factors until at last a home is found. This is bad for science, and not just because it delays the publication of research. The problem with high-impact journals is that they regard newsworthiness as a major criterion for deciding what to publish. Although they aim for methodological rigour, this tends to take second place to "interest". Negative results from well-designed studies are unlikely to find a home in a high-impact journal, and so the literature paints a distorted picture of reality. Furthermore, the highest-impact journals often have a lower standard of reviewing than lower-impact specialist journals, because the editor may lack expert knowledge of the subject area. Finally, high-impact journals usually take only very short papers; methods may be relegated to supplementary material, making it easy to highlight the most exciting results while glossing over the detail, where the devil is often lurking.

Reliance on grant income as an indicator of research prowess is even more corrosive. For many universities, a high proportion of their core funding is linked to research grants. Consequently, we now have the weird situation whereby a researcher who achieves important results with little or no funding is valued less than someone who receives a huge grant but fails to do anything sensible with it. It should be a matter of concern to funders that in contemporary academic life, people are encouraged to write expensive grant proposals rather than thrifty ones. Another trend is the rise of the "research magnate", who accumulates grants as if they were pots of gold but is then overwhelmed by the workload of the resulting research portfolio. Some top researchers from the past would not have flourished in the current system because their research was not expensive enough. Daniel Kahneman's elegant experiments, conducted over a career spanning 40 years, earned him a Nobel prize in 2002, but they did not require costly equipment or large squads of staff. This kind of research would be devalued in the current system for not generating enough research income.

But there is a deeper concern about changes in our scientific culture. The system of valuing high-impact publications and expensive grants has rewarded those who achieve these goals, and who have a vested interest in perpetuating the status quo. In effect, we may be driving out the very people we need to retain: those who are interested in science as an end in itself, rather than as a way of achieving personal advancement. If a scientist has the option of publishing a detailed account of a piece of work in a middle-ranking journal, or a shortened version that omits problematic findings in a high-impact journal, many feel that it would be career suicide to go for the former option. If they want to carefully analyse, reflect on and write up their current research before applying for further funding, they may find line managers threatening them with redundancy.

There is growing recognition of these problems. In April I'll be chairing a joint meeting of the Academy of Medical Sciences, the Wellcome Trust, the Biotechnology and Biological Sciences Research Council and the Medical Research Council on research reproducibility, where the topic of incentives will be high on the agenda. As Ottoline Leyser, deputy chair of the Nuffield Council on Bioethics, has put it, the "relentless focus" on publishing in prestigious journals encourages poor practices such as "over-claiming the significance of research findings, sticking to trendy areas of science and leaving important but confirmatory results unpublished". If our focus remains so narrow, we also risk losing sight of the purpose and meaning of science itself.

Dorothy Bishop is professor of developmental neuropsychology at the University of Oxford.

<ÁñÁ«ÊÓƵ class="pane-title"> Reader's comments (3)
Dorothy Bishop hits the mark on all the negative consequences of our short-sighted research culture. I constantly notice how many papers from lowly specialist journals are cited as the basis for studies published in journals of higher impact, or as justification of their results. As with research metrics, we bought into the system, and now the numbers are used as sticks to beat us by the corporate cabals who run our universities and by government bureaucrats who can only read the bottom line of an Excel spreadsheet.
Too many people, from BIS to VCs to deans and heads of departments, are convinced that they can have just the top of the research pyramid and that it will somehow support and sustain itself. The same logic is employed at the macro level and at the department level, and it's equally invalid in all cases.
Well articulated, Dorothy! Incentivising researchers for having high costs and low humility is the opposite of what we should be doing. I wonder if a step forward would be to go back to the days when there were no university overheads included in grants. As long as universities both take a 40 per cent cut and do the hiring, we'll see professorships advertised as "must have big grant" and little else. The money currently going towards overheads could instead be directed towards, say, a mixture of short-term citation impact and longer-term recognition, such as Nobel prizes.
<ÁñÁ«ÊÓƵ class="pane-title"> Sponsored
<ÁñÁ«ÊÓƵ class="pane-title"> Featured jobs
ADVERTISEMENT