
It's our duty to assess the costs of the REF

December 11, 2014

As we await the results of the research excellence framework 2014, it is time to reflect. The basic idea of identifying and directing research funding towards excellence must be right. Few would argue that the overall quality of the UK's research in virtually all subjects and its international position have not strengthened since the first research assessment exercise in 1986.

But – and it is a big but – the academic community is keenly aware of the heavy burden the process imposes, and of the time that academics spend preparing for assessment rather than on research itself. Given the monetary and human resource costs of the exercise, we simply cannot afford not to ask how well the whole system works.

As presidents of the British Academy and of the Royal Society, spanning most areas of research covered by the REF, we believe it is time to ask crucial questions about whether we are assessing quality in the most sensible way, and whether the burden could be reduced and the value of the process enhanced.

Have well-intentioned but imperfectly designed frameworks led to wasteful and distorting behaviours, by academics and their universities? Has what was designed as an instrument of quality assessment become an institution that risks stifling the excellence it was intended to foster?


Have criteria of quality become too narrow and formulaic in some subjects? Are researchers feeling pressured to adopt short-term horizons and a narrow focus, chasing publication rather than following their own judgement about which avenues of research are most fruitful and most likely to yield major outcomes?

Is "impact" examined in a way that is insufficiently deep and broad, and without an appropriate understanding of the timescales on which so many kinds of research depend? Is the REF incentivisation of universities to hire stars in the closing months, like an imminent transfer deadline in the Premier League, really a way to build a long-term scholarly department?


If the present system is not encouraging researchers and scholars to pursue the best, most profound and most important lines of research, then we need to create one for 2020 that does. The solution is not to abandon research assessment, but we should aspire to have a genuinely light-touch system – and find ways to reduce the burden. To begin with, we must be clear about what we are trying to encourage, and the principles that can guide us there – in other words, what all this is for.

Our academies are ready to help lead this debate, but we urge that we begin by focusing on the big questions before being swamped by the detail. We need a system that is fit for purpose – our world-class researchers deserve no less.

Lord Stern of Brentford
President, British Academy

Sir Paul Nurse
President, Royal Society


I offer reservations about the value of the REF from a business/management perspective. The ranking of universities by cumulative scores and averaged outcomes seems meaningless outside the UK academic system.

For research-intensive universities, the cracks in the system are visible and risible. Before a REF, institutions can buy in academic CVs with the right scores. However, the four submitted outputs are often written by several authors at different institutions, each of which claims the same score for the same papers. Articles are also published in special issues of journals, which allows enterprising academics to book those issues years ahead.

The professoriate and institutions are judged ¨C almost entirely ¨C by journal publications. The contribution made by an academic over his/her working life is not as important now as an average score in the last REF.


"Impact" is measured by citation and rank indices instead of any consideration of impact in business practice, and this perpetuates an unchanging list of high-ranked journals and the snail-like progression of business/management subjects. Journal articles are often relevant only to colleagues, yet the quality of REF submissions is assessed by:

  • Impact. Still at the developmental stage since the 2008 exercise, this measure is weighted at 20 per cent. However, most business and management papers offer no economic or social benefits. It is alarming to find no attempt to measure relevance when business and management research needs to be relevant now
  • Research outputs. This element is weighted at 65 per cent. Yet the scores are aggregates of averaged individual outputs. While panel experts claim to read individual outputs, this seems unrealistic. The outcomes, which could have been arrived at by less expensive means, are not good value when seeking to compare UK research with that of other nations
  • Research environment. This is weighted at 15 per cent and is meaningless outside the UK. (The arithmetic by which these three elements combine is sketched below.)
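For readers outside the system, the aggregation those weightings imply is a plain weighted average. Here is a minimal sketch in Python, using the 65/20/15 weights cited above; the submission profile and the 0-4 "star" grade point scale shown are illustrative assumptions, not actual REF data:

```python
# REF 2014 element weights cited above: outputs 65%, impact 20%, environment 15%
WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def overall_gpa(element_gpas):
    """Combine per-element grade point averages (0-4 'star' scale)
    into the single weighted score by which submissions are ranked."""
    return sum(WEIGHTS[element] * gpa for element, gpa in element_gpas.items())

# Hypothetical submission profile (scores invented for illustration)
print(overall_gpa({"outputs": 3.1, "impact": 2.6, "environment": 3.0}))
# 0.65*3.1 + 0.20*2.6 + 0.15*3.0 = 2.985
```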

The REF is a type of academic navel-gazing in a system designed to justify continued investment in perpetuity, but it offers little value to external constituencies. Other nations often have much better records on innovation than the UK.

REF 2014 is a prima facie case of all mouth and no trousers. It is a pity, given the state of the UK economy, that precious resources continue to be squandered in this way.


Philip J. Kitchen
Research professor in marketing
ESC Rennes School of Business, France


Your article on using metrics to predict results of the research excellence framework ("The (predicted) results for the 2014 REF are in", News, November) attributes to Ralph Kenna the view that even a correlation of 80 per cent between h-indices and peer-review rankings would not justify moving to a metrics-based system because "that would still mean that 20 per cent of departments would suffer the 'tragedy' of being inaccurately ranked". That would be true only if we could assume that peer review is more "accurate". I am unaware of any philosophically or empirically sound justification for such an assumption.
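A toy simulation makes the point concrete. In the sketch below (Python/NumPy; every parameter is invented for illustration, not fitted to REF data), metrics and peer review are equally noisy readings of an unobserved underlying quality: they correlate at roughly 0.8 with each other, yet the 20 per cent disagreement belongs to neither method alone.

```python
# Illustrative only: two equally noisy measures of a latent quality can
# correlate at ~0.8 with each other while neither is the "accurate" one.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                       # hypothetical departments
quality = rng.normal(size=n)                     # latent "true" quality
metric = quality + rng.normal(scale=0.5, size=n)        # h-index-like score
peer_review = quality + rng.normal(scale=0.5, size=n)   # panel score

print(np.corrcoef(metric, peer_review)[0, 1])    # ~0.80: imperfect agreement
print(np.corrcoef(metric, quality)[0, 1])        # ~0.89
print(np.corrcoef(peer_review, quality)[0, 1])   # ~0.89: equally (in)accurate
```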

Robert Barton
Professor of anthropology, Durham University


Any suggestion of metrics produces howls of protest from those who think it is too gross a way to measure quality. The problem is that the alternative – peer review – is also flawed.


There are numerous problems with an h-index, but the question is whether it is any worse than alternatives ¨C especially given its cost-effectiveness. I have recently done an analysis that suggests that we could actually do away with the peer-reviewed REF and with metrics and just allocate funding in relation to the number of active researchers in a department.
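The allocation rule described here is simple enough to state in a few lines. A minimal sketch in Python; the department names and headcounts are invented for illustration, and "active researcher" is assumed to be a headcount the funder can already observe:

```python
def allocate(total_funding, headcounts):
    """Split a funding pot pro rata by number of active researchers."""
    total_staff = sum(headcounts.values())
    return {dept: total_funding * n / total_staff
            for dept, n in headcounts.items()}

# Hypothetical pot and departments (figures invented for illustration)
print(allocate(1_000_000, {"Physics": 40, "History": 25, "Biology": 35}))
# {'Physics': 400000.0, 'History': 250000.0, 'Biology': 350000.0}
```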

Dorothy Bishop
Via timeshighereducation.co.uk
