Times Higher Education

The REF is an exercise in fantasy accountancy and management

Long-standing debates about what 'research quality' means make it obvious that the REF can be little more than make-believe, says Martyn Hammersley
May 16, 2022
[Image: A crazy accountant. Source: iStock]

The 2021 Research Excellence Framework results are now available. No doubt institutions of higher education are scrutinising them for what they "show", and for how they can be spun to present the best public face or perhaps to justify decisions already made.

The executive chair of Research England, David Sweeney, declares that the "exercise has fulfilled its aim to identify research quality across the whole system". Yet, ironically, this claim would be rejected out of hand if judged by the requirements of research methodology.

To pick out just the most fundamental problem: despite the best efforts of those involved, measuring the quality of individual research products in the REF cannot be highly accurate because the concept is unavoidably fuzzy. It is multidimensional, and in many fields there is no strong consensus among researchers about what it means and how it should be assessed – even if they know bad research when they see it. The problem is illustrated by the disparate views that frequently emerge in the peer-reviewing of journal articles; and, of course, REF results depend on the outcomes of this process across a range of diverse journals.

The REF is one of many exercises in institutional accountability that have become central to modern educational governance. Another highly influential one is the Organisation for Economic Cooperation and Development's Programme for International Student Assessment (PISA). It shares many of the REF's methodological defects, even though the procedures it employs are very different (PISA relies on children taking its tests). What both claim to assess cannot be measured consistently and accurately, and the quantitative data they produce amount to pseudo-precision. It is not just that the margin of error is large but also that the gap between the key concepts and their operationalisation is huge.


Shared by both the REF and PISA is the assumption that because we feel a need for information to answer a policy question, there must be some rigorous means available to supply it. If only life were like that! We may wish to know whether investment in research is producing an adequate "return", and how this differs across universities. Similarly, it may be felt necessary to know whether the schools in a particular country are performing at a high level compared with those in other countries. But the idea that answers to these questions can be anything more than very rough judgements based on inadequate evidence is wishful thinking. Long ago, economists told us that when we seek information we may reach a point after which little of worth is added and costs escalate. We have gone way past that point with both the REF and PISA.

To a degree, both the REF and PISA, like other accountability regimes, amount to rituals designed to show that proper managerial protocols have been applied to "measure performance". But this is management as fantasy. And the fundamental danger here, all too obvious in the reception of REF results, is that apparently "hard data" are taken at face value as a basis for evaluating institutions, and the units within them. Decisions are made, or at least justified, on a basis whose warrant is inevitably spurious.


The problems with the REF go back to the initial establishment of a research selectivity exercise in the 1980s. A genuine problem was identified: that the allocation of research funds to universities by the University Grants Committee (UGC) seemed to operate in an informal and rather obscure fashion. And this came under challenge as a result of budget cuts.

But with the abolition of the UGC, and the establishment of the Research Assessment Exercise (RAE), the REF's precursor, there was a shift from the allocation of funding according to the varying needs of institutions towards treating research funding as an investment, seeking to reward excellence and punish institutions that failed to achieve it.

Furthermore, the shift to the RAE and then the REF involved a move from, on the one hand, a concern with satisfying university managements that the allocation of funds among institutions was broadly fair to, on the other, the aim of offering a measure that could tell politicians and the general public whether an adequate return was coming from public investment in university research. This is the point when fantasy accountancy joined fantasy management.

We now suffer from a prevailing conception of public management that makes excessive claims for itself and swallows a huge amount of resources – at a time when public finances are under growing strain. The REF not only involves massive costs, direct and indirect, but also has profound consequences for institutions, and indeed for individual researchers. It distorts the whole process of research through instrumentalising it.


I'm hardly the first to make these points. When will we ever learn?

Martyn Hammersley is emeritus professor of educational and social research at the Open University.

Reader's comments (7)
I am inclined to agree. Identifying "research excellence" seems to be virtually impossible given the different opinions on what both words mean, and on how narrow or wide the "context" for comparisons between different subjects and sectors should be. What are we meant to learn from an excellent research paper with a very narrow focus, written by a single academic in a faculty and university that are otherwise unremarkable?
A bunch of UK academics get to tell other academics in the UK what the world regards as internationally recognised research... let that sink in for a minute. It smacks of neo-colonialism. Even international journal peer reviewers get the impact of the research that gets published wrong.
What proportion of staff returned are on fractional contracts?
After all the self-congratulatory, self-justifying pieces by those involved in (and benefiting from) delivering REF, thank you for reminding us of, as Basil Fawlty might have put it, the "bleeding obvious": REF is a fundamentally flawed exercise and terrible waste of resources in an age of austerity. Ditto TEF. Unfortunately, vested interests mean they, or something akin, are here to stay.
I agree with many of the criticisms of the REF – in particular how it instrumentalises HE and learning, how it distorts the whole process and justification of doing research, and how it seems to be used by management as a tool to discipline staff. However, I find it hard to accept that it is not possible to find a useful method for evaluating research. It seems a bit rich for academics to complain – the people who at every opportunity devise and use exams to assess the performance of their students. At the end of an undergraduate's three years of study, we give them one single quantitative measure (1st, 2.1, 2.2, etc.) to judge their performance. Do we complain that this single number is too simplistic a measure with which to judge a student?
There have been complaints about and dissatisfaction with the 1st, 2.1, 2.2, etc. degree classification system for years. See /content/rising-interest-shown-grade-point-average-degree-classification-trial. Sadly, due to inertia and inanition, nothing has come of it.