Creator says REF should swap expert panels for metrics in science

Rama Thirunamachandran says metrics could be effectively used to assess quality for many disciplines
April 20, 2021
Rama Thirunamachandran, vice-chancellor of Canterbury Christ Church University

The UK's research excellence framework (REF) should replace peer review in some scientific disciplines with citation-based assessments, the architect of the country's first national research audit has recommended.

As hundreds of expert assessors across 34 sub-panels begin the year-long task of grading tens of thousands of research outputs submitted to the 2021 exercise, the question of whether this onerous task of peer review – which cost £19 million in panellists' time in 2014 – could be replaced with a less bureaucratic, costly and time-consuming process has again been raised.

It follows several successful attempts by researchers to replicate the assessment, held roughly every six years to determine university budgets, using only bibliometric data: one 2018 study, which analysed the 6.95 million citations connected to the 190,000 outputs submitted to the 2014 REF, claimed it was able to correctly predict top-ranked universities in 10 mainly science-based units of assessment with 80 per cent accuracy.

Using this kind of bibliometric analysis would save countless hours of academic labour, it said: members of the REF's expert panel for physics in 2014 had to read at least two papers a day, every day, for 10 months to get through the 6,446 outputs submitted for that discipline. Other panel members faced an even higher number of outputs, which now account for 60 per cent of the overall assessment, it added.


Rama Thirunamachandran, vice-chancellor of Canterbury Christ Church University, who developed the 2008 research assessment exercise – the forerunner of the REF – while he was director of research, innovation and skills at the Higher Education Funding Council for England, told Times Higher Education that he believed future incarnations of the REF could successfully use metrics in place of peer-review panels.

"For some disciplines, a more mechanistic approach looking at bibliometric information might allow us to make valid assessments of outcomes," said Professor Thirunamachandran, who added that these "studies show this broad-brush approach can work quite well".


"In biosciences or chemistry, bibliometrics could act as a proxy for peer review, though for arts and humanities, and social sciences, it would be quite difficult to do as [metrics] are not robust enough."

With a government-commissioned review of research bureaucracy under way following criticisms by the prime minister's former chief adviser, Dominic Cummings, that universities are a "massive source of bureaucracy", a move to metrics-based assessments in some disciplines has long been seen as a potential way to reduce red-tape costs, with the 2014 framework costing universities and funding bodies an estimated £246 million.

But Professor Thirunamachandran said he believed other parts of the research system could yield more substantial savings in bureaucratic costs than the REF.

"It's an exercise that takes place every six to seven years, whereas the bureaucratic burden is much higher for those constantly bidding for research funding – that is quite significant, particularly when the level of applications not getting funding is quite high," he said, adding that he would like to see longer grants awarded to successful applicants to ease this strain.


Dorothy Bishop, professor of developmental neuropsychology at the University of Oxford, who has argued that departmental h-indexes could be used instead of expert panels in some subjects, told THE that it was time to "ditch the ridiculous current system".

"It's highly stressful and a ridiculously inefficient system: mountains of effort for very little, if any, marginal gain over a simpler approach," she said, adding that "hours of time have been spent on mock REFs before we even got to the real thing".
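The departmental h-index Professor Bishop refers to is simple to compute: a department scores h if at least h of its papers have h or more citations each. A minimal sketch in Python, assuming the department's submitted papers have already been reduced to a list of per-paper citation counts (the counts below are invented for illustration; a real exercise would pull them from a citation database):

    def departmental_h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Six papers with these counts give h = 3: three papers
    # have three or more citations each.
    print(departmental_h_index([12, 5, 3, 2, 1, 0]))  # -> 3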

jack.grove@timeshighereducation.com

Reader's comments (13)
Peer review is the most valid and unbiased method for reviewing a scientist's research. The metrics that have been developed are simplistic and flawed and are biased against younger researchers. We are not just a number!
For years I have argued for a simplified approach. To validate it at my own institutions (as applied to both the UK and Australian contexts), we analysed various models. The one that worked best (with an accuracy rate of over 90% against the scores) was to simply take every paper published by a faculty and weight that paper by the 5-year CIF. What this does is treat every paper as if it has the impact factor of an average paper in that journal. This works remarkably well for group assessments since you are dealing with hundreds of papers and, on average, the best guess for those papers is that they will be average for where they have been published.

I would note that in my area (business & management) citation factors are not used and there is a general view that every paper has to be read to understand its value (you would be pilloried for suggesting otherwise). What our analysis shows is that this may be true for any single paper, but the exercise is collective, hence it does not matter if you over-rate or under-rate a paper (unless you believe that you will always be finding 'gems' in lower-ranked journals while avoiding 'dogs' in upper-ranked journals -- which btw is pure fallacy, as it turns out institutions invariably over-rank their own work despite trying to be independent -- I know of no case where an institution was rated higher by the REF than their internal assessment indicated).

This approach is not biased against young researchers (since you are using the average for the journal in which they publish, not the actual citations) and stops gaming where well-known researchers tend to get their papers overweighted (a fact we validated by looking at the ratings when the journal and the authors were not revealed). In addition, it is cheap and easy (we had an intern do it w/o any real issues) and can be adapted to allow for additional validation (reading a sample rather than the whole corpus, and reading books and monographs). The problem with this approach is that it is algorithmic, and this is usually a red flag to some bulls.
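A minimal sketch in Python of the journal-weighting model the comment above describes, assuming a lookup table of 5-year citation impact factors per journal; the journal names and CIF values are hypothetical placeholders, and a real exercise would take them from a citation database:

    # Hypothetical 5-year CIFs per journal (invented for illustration).
    five_year_cif = {
        "Journal A": 8.2,
        "Journal B": 3.1,
        "Journal C": 1.4,
    }

    # Each paper is scored as if it were an average paper in its journal:
    # its score is simply the journal's 5-year CIF, and the departmental
    # score is the aggregate over all submitted papers.
    papers = [
        {"title": "Paper 1", "journal": "Journal A"},
        {"title": "Paper 2", "journal": "Journal B"},
        {"title": "Paper 3", "journal": "Journal B"},
        {"title": "Paper 4", "journal": "Journal C"},
    ]

    total = sum(five_year_cif[p["journal"]] for p in papers)
    print(f"Departmental score: {total:.1f}; mean per paper: {total / len(papers):.2f}")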
Surely the issue here is that algorithmic approaches can be gamed in very unhelpful ways. I can think of two ways this would be gamed. 1) Target only higher-impact journals, probably shaping research to meet whatever those editors want, rather than what is good. 2) Publish less. Paper rejected by a high-impact journal? Don't publish it. After all, it will count against the department come REF. Do we really want to narrow what people can publish and where?
elliot_shubert_gmail_com: "Peer review is the most valid and unbiased method for reviewing a scientist's research." Not if it is done the way REF review is done. Peer review done for the REF is NOT of the same quality we see in peer review done for journals. Many people have made the same critique of REF peer review (e.g., non-blind, lack of international reviewers, lack of suitable expertise, lack of the ability to decline a review based on lack of expertise or unreasonable workload). REF peer review assumes that UK academics are themselves the metric for judging what 'international' research excellence looks like - it is a neo-colonial enterprise.
What has always baffled me is that, in practice, REF peer review seems to be "academics grading their competitors". Given the stakes, and the process being non-blind, why would an academic praise another academic working at another institution they could potentially compete against for funding?
It's important to realise that metrics (and especially citations) are really only informative if they are not being (extensively) gamed by academics themselves. In the current system, the process of peer review during REF reduces the returns to academics of gaming citation and publication metrics, so of course both metrics are informative about the quality of the underlying research. By tying the REF to these metrics, however, we incentivise universities to employ them in performance evaluation and as redundancy criteria for staff. As researchers' job security becomes very much dependent on these narrow metrics, they will (undoubtedly) start heavily gaming them. As a result, the indicators actually become less informative about the quality of the underlying research and institutions that they are supposed to be evaluating. Just something to keep in mind.
If by gaming the metrics you mean publishing more in the top-impact journals in the respective field, that sounds like it could actually improve the overall quality of research. I'd prefer such a transparent performance evaluation criterion over arbitrary other criteria.
A reasonable point, to a degree. The thing is, it is recent (or relatively recent) research that is evaluated, so some work that takes time to gather citations may fare badly even if it is actually pretty good research that later proves to be pioneering.
Another interesting take on why we need a rethink in REF terms. For me there are three possible next steps. Option A: scrap the whole thing and give out research funds on a per capita basis. Truly radical, forces levelling up and would then probably require future evidence that research outcomes are improving. Option B: admit it is too hard to think of anything better that everyone will agree to ... and therefore turn the handle again with minor tweaks. Option C: admit that wasting time re-reading research published in 2015 to see if it merits funding for the next X years is largely wasteful, still prone to reflecting opinions/preferences rather than unambiguous facts, and therefore elect to go with a lighter-touch, metric-driven approach. I really like Option A but suspect it is unworkable. Option B is the path of least resistance but is NOT a good outcome. Option C is therefore the least bad way forward, and if you're doing this for the REF you might as well merge in the TEF and KEF. No, it won't be popular, nor perfect. But less bad is still better than the current arrangement. My argument for that can be found here ... /blog/radical-rethink-uks-excellence-frameworks-needed
Peer review the way it is done in the REF is, on average, inferior to peer review done for journals. A journal can approach the most suitable reviewer for a particular paper; the REF is stuck with pre-selected panel members, who may have to review papers entirely outside their area of expertise. Being a member of a REF panel does not give you superpowers to develop expertise in all areas. The REF is just one way of sorting universities in order to allocate funding. If anyone is under any illusions that it improves the quality of UK universities, they are mistaken. We have, today, roughly the same number of UK universities in the top 100 internationally as when the REF started. So what have all the millions of pounds achieved, except short-termism and rent-seeking?
Totally agree with the point about being able to find the right expert from the whole world, versus the best expert available from within the REF panel.
The expenditure on the REF shows that people are profiting financially from the formal or informal (institutional mock review) REF processes - this is one of the barriers to removing it. When a person profits financially or reputationally from something, of course they are less likely to advocate its removal...
Indeed. I hope the next round bans ex-members from consulting for future REFs, bans the practice of internal or shadow REFs and paying for external reviews and consultants (no need to second-guess what the REF panel might do), and has very strict guidelines for who can be returned - especially non-UK-based academics on fractional contracts. All types of non-standard contracts should be scrutinised very carefully to show a significant history of relationship of at least seven years (seven is arbitrary, but it should be some long-term relationship that started prior to the current REF period). And include staff surveys as a measure of environment, among other things.