Move to metrics may not bring significant savings to REF bill

Another review of the exercise is a chance to revise details of the costs, says Jonathan Grant
January 7, 2016
Illustration: Miles Cole

Those charged with planning the next research excellence framework could have been forgiven a weary sigh on hearing that there is to be yet another review of the exercise.

The news, announced in November's spending review, came despite a series of reviews published last year (two of which I was involved in), which concluded that the REF worked, was value for money and that metrics are not yet sophisticated enough to serve as an alternative form of research assessment.

But clearly this evidence has not been persuasive, with the government now intent on, as the spending review puts it, examining "how to simplify and strengthen funding on the basis of excellence". The review, to be chaired by Lord Stern of Brentford, president of the British Academy, follows up on the suggestion in November's higher education green paper that the government is keen to see "greater use of metrics" in order to "challenge the cost and bureaucracy" of the REF.

There are two defining characteristics of the REF. One is that it is the means by which funding is distributed on one side of a dual-support system. The Nurse review, which the chancellor committed in the spending review to implementing, concluded that dual support "should be preserved", so some kind of REF will still be needed.


The second defining characteristic is that it is performance-related. The first question a review should therefore answer is whether this is appropriate. If not, then a simple and cheap formulaic system could be based on the volume of research grant funding generated by universities. Some might argue that this would still be performance-related because research grants are awarded competitively – but the point of the REF is that it rewards the outputs and outcomes (or impacts) of research, as opposed to the inputs of research funding.

No assessment of performance is free, which raises the question of what counts as an appropriate cost. We know that the transaction costs for the 2014 REF were 2.4 per cent of the total funding that will be allocated on the basis of its results. That is significantly cheaper than research council transaction costs. Although the analysis on which this figure is based is somewhat dated, a review could update it, as well as set clear evidence-based expectations as to what are appropriate transaction costs for a REF-like assessment.


If a review concludes that the system needs to be performance-related and that the dual-support system needs to be preserved, then the use of metrics reappears as a solution. This is despite the conclusion of James Wilsdon's review The Metric Tide, commissioned by Lord Willetts, the former universities and science minister, and published in the summer, that "individual metrics give significantly different outcomes from the REF peer review process, and therefore cannot provide a like-for-like replacement for REF peer review".

Although I am sympathetic to much of what is said in The Metric Tide, it is perhaps too simplistic to conclude that, when it comes to research outputs, "it is not currently feasible to assess…quality…using quantitative indicators alone". The issue here is the "like-for-like" comparison. The implicit assumption is that the REF's peer review process is the benchmark against which alternatives should be compared. But we know that peer review is not perfect: it can be biased against non-established groups, can inhibit innovation and is a subjective process. As we showed last year in a study, it is possible to undertake bibliometric analysis of research outputs for some (mostly science) subjects. However, bibliometrics is not without its flaws either: it is biased towards certain subjects, dated and most relevant for journal articles (not books).

In other words, we are comparing two imperfect systems. Under such circumstances, it is fair to ask about the costs of assessment – or, to put it another way, about the efficiency of the REF as opposed to its effectiveness.

We know from a detailed costing exercise that the absolute cost of the 2014 REF was £246 million (the 2.4 per cent transaction cost already referred to). The £232 million cost to submitting institutions consisted of about £212 million for the submission process and about £19 million for panellists' time. The biggest cost (£93 million) was for the selection of staff and publications. One way to eliminate this would be to submit all staff, although that would clearly have upward cost implications around the volume of assessment and might generate some unintended behaviour, such as incentivising the movement of staff on to teaching-only contracts.
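As a quick sanity check, these figures can be combined in a few lines. The sketch below is purely illustrative – my own arithmetic using only the numbers quoted above, not part of the original costing exercise:

```python
# Rough consistency check on the REF 2014 cost figures quoted above
# (illustrative only). All monetary values in millions of pounds.

total_cost = 246          # absolute cost of the 2014 REF
transaction_rate = 0.024  # cost as a share of the funding allocated on REF results

# The 2.4 per cent transaction cost implies the size of the funding pool
implied_funding = total_cost / transaction_rate
print(f"Implied funding allocated on REF results: ~£{implied_funding / 1000:.1f}bn")  # ~£10.2bn

institution_cost = 232    # cost borne by submitting institutions
selection_cost = 93       # selecting staff and publications (the largest element)
print(f"Selection share of institutions' cost: {selection_cost / institution_cost:.0%}")  # ~40%
```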


Examining the details of the costs also suggests that a move to metrics might not save as much as it seems at first sight. Given the widely accepted inability of metrics to replace impact case studies, the best-case scenario would be that the environment element became wholly metrics-based, saving £34 million, and that outputs submitted to the science panels were wholly assessed through bibliometrics, saving about £15 million (half the cost of assessing outputs).

But the use of metrics will not be cost-free, so let's set aside £4 million for their central generation and management. This results in total savings of £45 million. This is clearly a crude and heuristic analysis, but even if it is out by a factor of two, the saving is still probably not as much as the government anticipates.
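To make the arithmetic explicit, here is a minimal back-of-envelope version of that calculation, again using only the figures quoted in this article; the factor-of-two band simply applies the sensitivity margin acknowledged above:

```python
# Back-of-envelope check of the best-case savings estimate.
# All values in millions of pounds, taken from the article.

total_ref_cost = 246     # absolute cost of the 2014 REF
environment_saving = 34  # environment element moved wholly to metrics
bibliometrics_saving = 15  # science-panel outputs assessed via bibliometrics
metrics_overhead = 4     # set aside for central generation and management of metrics

net_saving = environment_saving + bibliometrics_saving - metrics_overhead
print(f"Net saving: £{net_saving}m")                                   # £45m
print(f"Share of total REF cost: {net_saving / total_ref_cost:.1%}")   # ~18.3%

# "Even if out by a factor of two": the plausible range of savings
low, high = net_saving / 2, net_saving * 2
print(f"Factor-of-two range: £{low:g}m to £{high:g}m")                  # £22.5m to £90m
```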

That is not to say that another review is a waste of time (and money). Done right, and focused on the right questions, it could help to fill in the gaps in the existing body of evidence and perhaps challenge some of the myths that have arisen around the supposedly exorbitant costs of the REF's current incarnation.

Jonathan Grant is director of the Policy Institute, professor of public policy and assistant principal for strategy at King's College London.

POSTSCRIPT:

Print headline: Settling the bill

Reader's comments (1)
A cautionary tale about an over-reliance on metrics and a bureaucratic distrust in human creativity and the ability to take risks can be found here: http://www.ted.com/talks/sebastian_wernicke_how_to_use_data_to_make_a_hit_tv_show?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TedtalksHD+%28TEDTalks+HD+-+Site%29