A fitter rival would soon make the REF extinct

The UK’s research excellence framework is slow, expensive and disruptive. The time and technology are ripe for a better alternative, says James Tooley
April 11, 2019
[Illustration: dinosaur and cheetah. Source: Miles Cole]

As preparations for the next research excellence framework gather momentum, I fear I’m in a dwindling minority who finds it discombobulating to hear colleagues confidently asserting that they have a handful of 3* or 4* papers, and looking down upon those with only 2*s.

Colleagues, the difference between a 2*, 3* and 4* is based on subjective judgements of vague criteria. My articles have been ranked internally as 3* or 4*, so this isn’t sour grapes, but there really is not a knowable distinction between – as Main Panel C puts it – “quality that is internationally excellent” (3*) and “quality that is recognised internationally” (2*). The latter implies the former.

Nor is there a meaningful difference between research that is a “major influence” (4*) and research that is “likely to have a lasting influence” (3*). Ditto. And even if there were, the judgement is going to be made by a REF panellist who is unlikely to be an expert in your sub-field and who will probably skim-read your article over breakfast on a flight.

That, of course, has always been the problem with the REF. But back in 1986, when the first forerunner exercise was run, it was excusable to think that the only way of splitting research funding between universities on the basis of some sort of demonstrable merit was through some bureaucratic process of this sort, however imperfect.

But, hey, times have moved on. There are now numerous internet-based data sources on academics' research performance – and many of them are free. Senior academics have used some of these to come up with rankings of universities and departments that are extremely closely correlated to those produced from the REF.

For instance, in 2017, Anne-Wil Harzing, professor of international management at Middlesex University, found only small differences between the REF rankings and those created using data from Microsoft Academic. Memorably, doing so took her just “a rainy Sunday afternoon”. And, in 2013, Dorothy Bishop, professor of developmental neuropsychology at the University of Oxford, analysed the data from the REF’s precursor, the research assessment exercise, and found that departmental h-indices in psychology predicted the results “remarkably well”. She suggested this may be true more broadly, too.
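Bishop’s predictor, the h-index, is trivially computable: a department scores h if h of its submitted papers each have at least h citations. Here is a minimal sketch in Python, assuming only a list of per-paper citation counts (my illustration, not Bishop’s actual code):

    def h_index(citations):
        """h-index for a list of per-paper citation counts."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for position, cites in enumerate(ranked, start=1):
            if cites >= position:  # this paper still clears the bar
                h = position
            else:
                break  # sorted descending, so no later paper can qualify
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give h = 4:
    # four papers each have at least four citations.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4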

Meanwhile, Marcus Munafo, professor of biological psychology at the University of Bristol, found in 2015 that a “prediction market” closely mirrored REF outcomes for chemistry departments. Prediction markets arrive at the probability of an outcome occurring based on individuals betting on what they believe the outcome will be.
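The mechanics are easy to demonstrate. The sketch below uses Hanson’s logarithmic market scoring rule, one standard way of turning bets into implied probabilities; this is my illustration of the general mechanism, not the platform the Bristol study used, and the liquidity parameter B is an arbitrary choice:

    import math

    B = 100.0  # liquidity: larger B means prices move more slowly

    def cost(shares):
        # Hanson's cost function C(q) = B * ln(sum_i exp(q_i / B))
        return B * math.log(sum(math.exp(q / B) for q in shares))

    def prices(shares):
        # Instantaneous prices sum to 1 and read as probabilities.
        weights = [math.exp(q / B) for q in shares]
        total = sum(weights)
        return [w / total for w in weights]

    # Two outcomes: "department X finishes top of the REF" vs "it does not".
    shares = [0.0, 0.0]
    print(prices(shares))      # [0.5, 0.5] before any bets are placed
    paid = -cost(shares)
    shares[0] += 20.0          # a trader buys 20 "yes" shares
    paid += cost(shares)
    print(round(paid, 2))      # what that bet costs the trader
    print(prices(shares))      # implied "yes" probability rises to ~0.55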

I’ve also had a go, with my colleague Barrie Craven. We found that university rankings compiled using ResearchGate, Google Scholar and Webometrics (which creates scores based on “link analysis”, looking at each university’s presence and impact on the web) were, again, extremely closely correlated with REF rankings compiled by both Times Higher Education (based on quality) and Research Fortnight (based on quality and volume).
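“Extremely closely correlated” has a concrete meaning here: a high rank correlation between two league tables. A quick sketch using Spearman’s rho, with made-up positions for five hypothetical departments (the published studies report their own coefficients):

    def spearman_rho(rank_a, rank_b):
        # Spearman rank correlation, assuming no tied ranks:
        # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
        n = len(rank_a)
        d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
        return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

    ref_rank     = [1, 2, 3, 4, 5]  # positions in the REF table
    metrics_rank = [1, 3, 2, 4, 5]  # positions in a metrics-based table
    print(spearman_rho(ref_rank, metrics_rank))  # -> 0.9, close agreement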

Importantly, two of the approaches described, prediction markets and Webometrics, have nothing to do with citation indices. These sometimes get a bad press from those, especially in the humanities, who are reluctant to let go of the REF; in the sciences, by contrast, it is more widely accepted that citation by colleagues who presumably are experts in the relevant field is a better mark of quality than the approval of stressed, non-expert REF panellists. Either way, it is hard to endorse the conclusion of the 2015 government-sponsored review that subjective judgement based on ambiguous criteria remains “the least worst form of academic governance we have” in the 21st century.

Let’s spell this out. The REF delivers data extremely slowly and infrequently, at great expense (the official estimate is £246 million) and with huge disruption to university life, resulting in rankings very similar but, arguably, inferior to those obtained simply and cheaply using a range of methods that don’t disrupt anyone.

Clearly the government is not going to replace the REF any time soon. An elephantine beast like this develops a life and purpose of its own, and loyalty to match. But there is a clear market opportunity for a sympathetic thinktank to create parallel league tables using the alternative, freely available resources. Because there are many of these, it would be easy to experiment to find an optimum combination of data that can’t be gamed and that offers no perverse incentives. A handful of supervised interns could easily handle it.
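What might an “optimum combination” look like? As a starting point, one could min-max normalise each data source so that no single source dominates through its raw scale, then average; the sources, figures and equal weighting below are purely illustrative assumptions, and the weighting is precisely what such a thinktank would need to experiment with:

    def composite_scores(metrics):
        # metrics maps source name -> {university: raw score}
        normalised = {}
        for source, scores in metrics.items():
            lo, hi = min(scores.values()), max(scores.values())
            normalised[source] = {
                u: (s - lo) / (hi - lo) if hi > lo else 0.5
                for u, s in scores.items()
            }
        universities = next(iter(metrics.values()))
        return {
            u: sum(norm[u] for norm in normalised.values()) / len(normalised)
            for u in universities
        }

    # Made-up figures for three universities and two free sources.
    metrics = {
        "google_scholar_citations": {"A": 5200, "B": 3100, "C": 1900},
        "webometrics_score":        {"A": 88,   "B": 93,   "C": 60},
    }
    print(composite_scores(metrics))  # A ~0.92, B ~0.68, C 0.0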

Regularly updated tables will be much more attractive to consumers of higher education – students and funders – than the quickly outdated REF rankings. Hence, as Friedrich Engels might have put it, the demand for the REF will wither away. The interference of state power in research excellence will become superfluous. Universities will cease to see the need to participate. And, with that, the minister’s pen will easily do the needful and consign the REF to history.

All that universities would then need to do would be to make sure that their academics published high-quality research articles. Then they could stand back and let the private sector do the heavy lifting.

James Tooley is professor of education policy at Newcastle University.

Reader's comments (4)
And what would the private sector do with the results?
And what form would this "private sector heavy lifting" take, I wonder?
One has a good deal of sympathy with the author's desire to get rid of the REF, as it is expensive, its validity may be questioned, and it has become associated with time-wasting and oppressive university procedures. However, this is a very bad article, primarily because it misrepresents the proposed alternatives.

In particular, the author views 'prediction markets' as a better alternative: "Marcus Munafo, professor of biological psychology at the University of Bristol, found in 2015 that a 'prediction market' closely mirrored REF outcomes for chemistry departments. Prediction markets arrive at the probability of an outcome occurring based on individuals betting on what they believe the outcome will be." If you follow the link to the paper, https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.150287, you will see (at p 4) that the 'prediction market' got the rankings significantly wrong.

The predicted REF ranking for chemistry had Cambridge at the top, followed by Imperial, Oxford, Manchester, and Edinburgh/St Andrews. In the real REF ranking, the top 5 by overall score were Cambridge, Liverpool, Oxford, Bristol, and Durham, and the top 5 by 'outputs' (the real core evaluation of research) were Liverpool, Cambridge, Oxford, UEA, and Bristol. Imperial ranked 7th overall and 16th for outputs, and Manchester was 11th overall and 22nd for outputs.

Munafo et al. blithely dismiss this as "a few mismatches" and suggest that it "may reflect strategic decisions" regarding how many staff to submit (p 6), without providing any evidence of this. What these "mismatches" actually suggest is obvious. Unsurprisingly, the 'prediction market' looks as though it reflected historic reputation and prestige in the subject area, whereas the actual REF result, especially for outputs, might just possibly have reflected the real quality of people's work.
The REF is mainly a means for managers to suppress academic pay: if you don't publish 4*, they just say no pay rise. Simple as that. It is not as though the managers could even write 1* papers themselves, but it gives them a great tool to bash academics on the head with. Also, academics spend too much time talking about the REF when they should be talking about why the managers are getting big pay rises and the academics next to nothing.