
Should the UK replace journals with a REF repository?

Immediate review of outputs by a REF reviewer could be more efficient, transparent, informative and, above all, fair, says Martin Lang
April 28, 2023
[Image: a thumbs-up and a thumbs-down symbol. Source: iStock]

There is a long-standing debate about whether the UK's Research Excellence Framework is a waste of time and money given its insistence on re-assessing tens of thousands of papers that have already been reviewed by journals. Why not just base REF scores on journal rankings instead?

One answer is that, as Robert de Vries put it in a recent article for Times Higher Education, journal-administered peer review “sucks”. De Vries is conscious, though, that the obvious alternative to journals, post-publication review on subject repositories, might quickly descend into a social-media-style “attention-economy hellscape”, which would be even worse.

His solution is to oblige everyone who publishes on such platforms to undertake post-publication review to ensure that visibility is a function of merit. But I believe that a specific REF repository would be a better solution, eliminating reviewing redundancy while upholding high standards.

UK academics would be able to upload their articles to this hub at any point during the REF cycle. These would be directly reviewed by a REF reviewer and given a score of between one and four stars, as in the current grading system. Of course, assigning these grades would involve more reviewing work than the current REF does, not least because not all papers are currently entered for the REF. But the reviewing would be spread out across the entire seven-year cycle, and many more reviewers could be involved than the overburdened few who, under the existing rules, have to review a large number of papers in a very short space of time.


If an author was happy with their score, their paper would be published immediately on the repository. Alternatively, they could revise and resubmit. Or, if they thought the review was unfair, they could resubmit the article unrevised for review by a different reviewer. This novel option in publishing would prevent reviewers with ideological axes to grind from blocking publication or under-scoring.

If the second review gave a different score, the article would be sent to an arbitration panel, led by a senior REF reviewer. The first two rounds of review would be blind, but the arbitration panel would be able to see the names of the reviewers. If they saw nothing obviously untoward in either review, their final decision might be an aggregate, fractional score. But if they considered any of the reviews to be clearly inaccurate, training would be provided to that reviewer.
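The two-round workflow described above amounts to a simple decision rule. A minimal sketch in Python follows; the function name and the assumption that the arbitration panel's aggregate is the mean of the two reviews are illustrative inventions, not part of any actual REF rules.

```python
from typing import Optional

def resolve_score(first_review: int, second_review: Optional[int] = None) -> float:
    """Return a final star score (1-4) for a submitted output.

    If the author accepts the first blind review, no second review
    exists. If a second blind review disagrees with the first, the
    case goes to an arbitration panel, which (it is assumed here)
    settles on an aggregate, fractional score: the mean of the two.
    """
    if second_review is None:          # author accepted the first score
        return float(first_review)
    if second_review == first_review:  # independent reviews agree
        return float(first_review)
    # Disagreement: the arbitration panel, which can see the reviewers'
    # names, returns a fractional aggregate of the two reviews.
    return (first_review + second_review) / 2
```

So an accepted 3 stays a 3.0, while a 3 followed by a disagreeing 4 would go to arbitration and could come out as 3.5.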


One advantage of this system is that it would provide universities with real-time data on their likely REF scores. Even cross-panel standardisation could occur dynamically. For example, a selection of outputs could be randomly sampled prior to release of the output score – much like how an external examiner picks a sample of assessments to review prior to a final award board. Alternatively, a selection of – or even all – outputs could be reviewed by two reviewers: a specialist from the corresponding Unit of Assessment and a reviewer from another panel.
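The external-examiner analogy boils down to drawing a random sample of scored outputs for a second look. A minimal sketch, with the sampling fraction as an invented parameter that a real exercise would set per Unit of Assessment:

```python
import random

def moderation_sample(outputs, fraction=0.1, seed=None):
    """Draw a random sample of scored outputs for cross-panel review,
    as an external examiner samples assessments before an award board.

    `fraction` is illustrative; `seed` makes the draw reproducible
    so a moderation exercise could be audited.
    """
    rng = random.Random(seed)
    k = max(1, round(len(outputs) * fraction))  # sample at least one output
    return rng.sample(outputs, k)
```

Sampling without replacement (`rng.sample`) guarantees no output is double-counted in the moderation round.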

But would the repository really limit redundancy? Wouldn't UK academics still feel the need to publish in journals in order to preserve their international visibility? Perhaps initially. But if the repository were fully open access and were promoted internationally, it could become the go-to place to find high-quality UK research. As its renown grew, UK academics would feel less of a need to publish in journals.

An alternative arrangement would be to make the repository open access only for people with UK IP addresses, charging those outside the UK for access and thereby generating income to partially cover administration costs. To maintain international prominence in this case, journals would be encouraged to select articles from the repository and sell them around the world in special themed editions (with the authors' permission, of course). There would be no need for the journals to re-review the articles, freeing up academics who previously worked as journal reviewers to offer their expertise to the REF repository instead. If journals wanted additional expert opinion as part of their publication process, they would have to pay for it – creating a new income stream for academics.

One exciting aspect of the REF repository is that it would also make post-publication review extremely easy to incorporate. As de Vries suggests, users could simply give a thumbs-up to articles they considered to be of good quality, or they might rate them out of four stars, in a way comparable to TripAdvisor reviews. All readers with “reviewer rights” would be registered academics with ORCID IDs, ensuring that the review process remained in the hands of professionals. And to avoid the bad-tempered hellscape of which de Vries warns us, comments would not be anonymised.
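The gatekeeping proposed here is, in effect, a permission check plus attributed comments. A sketch under stated assumptions: all names are hypothetical, and the ORCID iD shown is the example iD published in ORCID's own documentation, not a real reviewer.

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    ratings: list = field(default_factory=list)   # stars out of four
    comments: list = field(default_factory=list)  # (name, text): never anonymous

# Hypothetical registry of academics holding "reviewer rights",
# keyed by ORCID iD (this one is ORCID's published example iD).
registered_reviewers = {"0000-0002-1825-0097"}

def rate(article, orcid, stars, name=None, comment=None):
    """Record a post-publication rating on a repository article.

    Only registered academics (identified by ORCID iD) may rate,
    and any comment is stored with the commenter's name attached.
    """
    if orcid not in registered_reviewers:
        raise PermissionError("reviewer rights require a registered ORCID iD")
    if not 1 <= stars <= 4:
        raise ValueError("ratings are out of four stars")
    article.ratings.append(stars)
    if comment:
        article.comments.append((name, comment))
```

Refusing anonymous comments is a one-line design decision here: the comment tuple always carries the name supplied at registration.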


Articles receiving more attention would be highlighted to repository users based on algorithms that identified their own research interests. In this way, our articles would reach the people who were most interested in them and, if they were positively received, they would reach even more people because academic journals would pick them up and publish them outside the UK.

I believe that this arrangement would offer a peer-review process that is more efficient, transparent, informative and, above all, fair. This would incentivise research that is more rigorous and creative – to the benefit of academia and society as a whole.

Martin Lang is course leader for MA fine art and senior lecturer at the University of Lincoln.

Reader's comments (11)
I have rarely read such a load of nonsense in my life. If reviewers want to be unreasonable, they will always be so. The current 100 words, although far from perfect, seems a good compromise. Reviewing papers is one of those necessary tasks in the job and I am sure that those who volunteered to review would be the most ruthless types who are looking to cause trouble. Peer review can be pretty rough and unpleasant now and this just seems to introduce more of it. I am glad that I am near retirement so that I will not see publication reduced to the same ratings game that is seen on the Internet and social media.
Contrary to the previous comment, I actually think this is an interesting discussion starter. My reasons are not really about the REF, though, but more generally related to publishing. As the digital age has evolved, the way articles are found is rarely now based around collections (e.g. specific journal titles) but through better search engines (Scholar, library catalogues etc). In addition, the costs associated with subscription are no longer justifiable, with little publication work required now that printing and marketing are not needed as much – libraries appear to be paying mainly for the metric information (which a central repository could also set up with relative ease). This means that, in many cases, there's a closed system where publishers are profiting quite handsomely from what is mostly a volunteer-led process. Although there's been a push towards open access, this hasn't gone far enough. Doing away with journal titles in favour of a centralised, open-access portal would be a logical evolution away from the currently siloed concept of paywalls. Creative solutions would be required around the cost of maintaining the portal, but there are plenty of models that would be superior to the existing one (with its multiple publishers' catalogue fees), while also improving the efficiency of the REF (based on its current format anyway, and assuming it continues as a form of quality assurance).
This is one of the most ridiculous ideas I've ever seen voiced on THE. The reviewers of articles in journals are people who are in the same sub-sub-sub-sub-field as the paper; reviewing requires a high degree of specialisation. It is obviously infeasible to have enough specialists in these 'Units of Assessment'. Not to mention – what about papers co-authored with non-UK academics? All this, in addition to what a previous commenter said, would turn the whole publishing process into a popularity contest.
I am confused by all of the comments on this article, which make less sense to me than the arguments in the article. The best example is the previous one: if the premises in its second and third sentences are true and valid, then the REF and previous RAEs adopted a flawed design, i.e., panels were not large enough to judge each and every sub-sub-sub-sub-field. Which may well be the case. If it is, then the point supports rather than argues against the proposal in the article. So why does this commenter judge the ideas to be ridiculous?
There are clear and obvious flaws with the current publication system. Not only financially and linked to things like paper mills and predatory publishers, but also with the challenge of finding reviewers (or recognition of the work). This idea, which indeed would be a departure from past approaches, would actually have quite a number of good points. It would improve the predictability and part of the overhead (mock exercises). Whether it can actually work, maybe, but it is certainly worth exploring further.
The individual grades from the REF should be published so everyone can see the quality of the papers for what they are. No more internal discounting of papers that could otherwise be ranked higher by the REF panel. Make it public, let it all be transparent. The hundreds of thousands of pounds spent on the REF must be justified.
This article is complete nonsense. Academics will simply get their colleagues to write rave reviews about their research in the REF repository and do likewise for their colleagues in return. Better to have a clear list of journals with ranking scores of 4*, 4, 3, 2 and 1 and leave it at that to automate the REF score. This would avoid the perception that if an academic from Oxbridge publishes in a 4 or 4* journal it gets that grade in the REF, but if you are from another university the panel can easily lower it to a 3 or 2 since it was not published by some Oxbridge don. Indeed, if it is an Oxbridge don it often gets a 4 rating in the REF even if it is published in a 3-rated journal. By making the ranking of the journals clear we could avoid a lot of time wasted on mock peer review at universities.
Isn't a public rank list of journals exactly what the REF is trying to get around? With many journals monopolised by various parties, using only journal rankings would certainly not be a fair way to go, that is if fairness matters. Agree that it would certainly reduce the cost and time involved. If papers and other outputs are being graded then we ought to know what the grade is. A list of all outputs and grades needs to be made publicly available. Why not, if you trust your processes? Otherwise, this is all just a load of nonsense and the greatest nonsense of all, claiming that research has improved since the average rating has gone up! Talk about grade inflation!! Similar to saying the teaching quality has improved if the student grades have gone up on average when you are the one giving the grades in the first place! Do academics not see this?
This is dangerous stuff and completely goes against the purpose and merit of the REF. The REF is not and should never be misused as a performance or quality measure for individual outputs (which could then easily be linked to individual academics). University managements up and down the country have already misappropriated the REF for individual-level internal performance management (permanency, promotion, redundancy) as it is. This would be a bonanza for the bean counters and metrics fanatics in administration (and politics). It would also increase the need for constant monitoring and reporting of targets, demanded by said bean counters. Do not go there!
"REF is not and should never be misused as a performance or quality measure" - but we are already there. The internal REF that many universities carry out IS used for performance evaluation already. So why not make the REF results public so anyone who gets the wrong end of the stick in the internal review process, which is often not done without muchf rigour, care or transparency, get a chance to fight back. So an individuals paper might actually benefit their institution because is it is 4* publication but may have missed out a promotion because the shoddy internal review process deemed it a 3*. This is total nonsense. If you are going to grade me I want to know what the results of that exercise is. End of!
"…but we are already there. The internal REF that many universities carry out IS used for performance evaluation already." So, your conclusion is to simply give in rather than fight back? Also, please do not twist my comment in a new direction. The above opinion piece is about further institutionalising the REF as a continuous activity, with scores linked to individual outputs traceable to individual academics. The REF should be opposed and eventually abolished, because it corrupts the research process and limits academic freedom. It should not be further entrenched and institutionalised. The latter is what the opinion piece advocates, I am afraid. This is what I call dangerous.