The impact element of the research excellence framework was almost as costly to institutions as the entire 2008 research assessment exercise.
That is one of the findings of Rand Europe’s assessment of institutions’ and panellists’ experience of the inaugural inclusion of impact in the 2014 REF.
The assessment, launched at the Higher Education Funding Council for England’s REFlections conference last week, estimates the total cost to institutions of articulating their impact to have been £55 million.
That compares with the £47 million estimated by PA Consulting to have been the total cost to English institutions of the 2008 RAE. Taking into account inflation and including non-English institutions would put that up to £66 million, Rand estimates.
An analysis of the total cost of the non-impact elements of the REF, being undertaken by Technopolis, is still incomplete. A similar level of expenditure to that in 2008 would put the total cost of the REF at £121 million (the £55 million for impact plus the inflation-adjusted £66 million for the rest), excluding the cost of the time of academics appointed to assessment panels.
However, Paul Simmonds, managing director of Technopolis UK, told the conference that while some elements of the RAE had been scrapped, substantial new costs had been added by the more elaborate procedures for assessing staff with special circumstances.
Catriona Manville, a senior analyst at Rand Europe, acknowledged that the £55 million figure for impact was “large”, but noted that, assuming another flat-cash science budget in the next spending review, it represented just 3.4 per cent of the 20 per cent of quality-related (QR) funding that will be allocated each year on the basis of impact. If the REF had cost £121 million, that would represent 1.4 per cent of QR funding over six years, compared with an estimated 10 per cent for the “transactional cost” of distributing money via the research councils.
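A back-of-envelope sketch makes Dr Manville’s 3.4 per cent easier to follow. The figures below assume mainstream QR funding of roughly £1.6 billion a year and a five-year funding cycle; both are illustrative assumptions rather than numbers given by Rand.

    # Rough reconstruction of the 3.4 per cent figure (assumed inputs marked).
    annual_qr = 1.6e9       # assumed annual QR funding, in pounds
    impact_share = 0.20     # share of QR allocated on the basis of impact (stated above)
    cycle_years = 5         # assumed length of the funding cycle
    impact_qr = annual_qr * impact_share * cycle_years  # about £1.6bn over the cycle
    print(55e6 / impact_qr)                             # 0.034375, i.e. roughly 3.4 per cent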
Robert Bowman, director of the Centre for Nanostructured Media at Queen’s University Belfast, estimated in February that the real cost of the REF was more than £1 billion, with impact costing nearly £100 million. However, his figures included full economic costing for salaries, while Rand’s do not.
He agreed that even £100 million for impact would represent “value to the sector and the country”, but he called for a “wholly transparent” publication of cost calculations so that the research community could “see if the assumptions made correspond to their experience”.
Each of the 7,000 submitted impact case studies cost a median of £7,500 to produce, and took about 30 days. The latter figure compares with just five days in the pre-REF impact pilot, suggesting to Dr Manville that the high stakes of the REF proper had prompted a lot of “gold plating”, with many institutions admitting to more than 10 rewrites.
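As a rough consistency check, the per-study median squares with the headline total; a median is not an average, so this is indicative only.

    # Cross-check of the per-case-study cost against the £55 million total.
    case_studies = 7000   # number of submitted impact case studies (stated above)
    median_cost = 7500    # median cost per case study, in pounds (stated above)
    # Treating the median as a typical cost gives about £52.5 million, close to
    # the £55 million total, with templates plausibly making up much of the rest.
    print(case_studies * median_cost)  # 52500000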
Meanwhile, impact templates – intended to articulate a department’s “approach to enabling impact from its research” – cost a median of £3,500 to produce. However, many assessment panel members felt that they added little. According to Tom Ling, senior research leader in evaluation at Rand Europe, this was partly because there was no requirement to provide evidence for claims made in the templates, which left assessors to base their judgements largely on the quality of the writing.
Universities also said that future REFs should clarify the evidence required in case studies, and consider aligning the REF’s definition of impact with that of the research councils.
Metrics: the pros and cons
A research excellence framework based purely on metrics is “neither desirable nor even technically possible”, but citation analyses of entire departments could help to counter “game playing” around staff selection.
That is the conclusion of the Independent Review of the Role of Metrics in Research Assessment and Management, commissioned by the Higher Education Funding Council for England.
Some academics argue that a metrics-based REF would be much easier and cheaper, and result in similar distributions of research funding. But the review’s chair, James Wilsdon, professor of science and democracy at the University of Sussex, told Hefce’s REFlections conference on 25 March that, for all its flaws, peer review retained “broad confidence”, while a majority of respondents to a call for evidence remained “sceptical” of metrics.
The review’s full report will not be published until July, but Professor Wilsdon said it had already concluded that metrics were not yet developed enough to replace peer review in either the outputs or, especially, the impact section of the next REF, which is likely to take place in 2020.
Citation databases still provided inadequate coverage of disciplines that do not publish primarily in journals. And even where metrics bore some correlation to peer judgements, they could not match the “multifaceted and nuanced” quality of those judgements, especially at disciplinary level.
Professor Wilsdon could envisage an increased role for more advanced metrics in future REFs, and called for more bibliometric data to be provided to assessment panels in 2020, provided that each panel remained free to choose whether and how to use them.
However, he advocated meeting panellists’ appetite for more data to help them make “meaningful comparisons” in the environment section of the 2020 REF. As well as information on the age profile of staff and the representation of people with protected characteristics, he suggested that a bibliometric assessment of the research output of entire units of assessment could contribute to a “more accurate overall picture” by flagging up departments that submit low proportions of their staff in pursuit of high quality scores.
Since the environment section counts for only 15 per cent of overall scores, opposition to the use of metrics there was likely to be less fierce, and the move would not incur the “perverse consequences” of requiring institutions to submit everyone – “which would lead to loads of people being put on to teaching-only contracts”.
Paul Jump