
Radical rethink of UK's excellence frameworks is needed

Merging metrics for the REF, KEF and TEF would free up time for academics to become researchers once again, says Robert MacIntosh
April 16, 2021

Designing assessments that adequately measure learning outcomes but do not absorb excessive amounts of students' time is always a tricky task for academics. After all, we are the ones required to mark the mountain of exam scripts and essays that follow.

With submissions entered for the research excellence framework (REF) and the results for the first knowledge exchange framework (KEF) due imminently, academia's own outputs are now under scrutiny and many scholars are wondering if the balance between effort expended on assessment versus the insight gained has drifted out of kilter.

Since the first research assessment exercise in 1992, the level of scrutiny applied to the UK university sector has increased exponentially. The original policy intentions, to improve performance, enhance accountability and, in the case of the REF, to provide a basis for disbursing billions of pounds of research funding, are widely accepted. The teaching excellence framework (TEF) was introduced in 2017 to offer similar insights to current and future students about teaching, while the KEF aims to monitor how universities are addressing real-world problems.

For all their good intentions, however, the cumulative and unintended effect of the REF, TEF and KEF on the sector has been seismic. The main challenge is the amount of effort involved; every hour spent reporting, managing and monitoring performance in research, teaching or knowledge exchange is an hour not spent doing the work on which you are reporting. It would be salutary for someone to calculate the cumulative cost in person hours, pounds and managerial attention spent on REF, TEF and KEF submissions (the 2014 REF alone cost £246 million, according to Research England). No funding is allocated to our universities to support this assessment burden, meaning that institutions face hard choices for both their professional service and academic staff.


There is also the problem of gaming the system. Academics are, by definition, curious and creative, so it is no surprise then that a collection of bright professionals have set about manipulating the systems by which they are measured. Whether it is offering work experience to recent graduates to influence employment outcomes or appointing international superstars on fractional appointments to enhance publication portfolios, ingenuity is being applied to maximise outcomes. The regulatory response has been ever more elaborate specification of what is being measured, which only increases the time and effort involved.

A third issue is boundaries. Peer-reviewed publication shouldn't be expected to do the job of translating ideas into practice, and research-led teaching isn't necessarily excellent teaching. And yet, our universities are ecosystems. In circumstances where commercially funded research leads to amazing publications and shapes curricula, does this get reported in the REF, KEF, TEF or all of them?


Finally, there is the thorny issue of labelling. Our universities differ enormously in shape, size, mission and history, yet a fragmented approach to assessment shrinks back from this uncomfortable reality.

A radical rethink is needed. A consolidated set of metrics operating at discipline level offers a dramatically simpler alternative, especially where those metrics already exist or are easily curated. Metrics spanning research, knowledge exchange and teaching at disciplinary level could then be aggregated up to provide a profile of the very different types of universities that operate in the UK.

Each time the REF exercise draws to a close, the same argument is rehearsed with critics arguing that metrics are too blunt an instrument. Yet the REF's peer-review panels bring other challenges and perversely ask our most esteemed scholars to spend inordinate amounts of time rating other people's work rather than doing their own.

Each academic unit in a university has local leadership of its research and teaching activities. Setting clear metrics in each of research, knowledge exchange and teaching would allow these local leaders to manage performance against those metrics at a sensible frequency, say, every four or five years.


Central planning offices already collect much of what would be required for other mandatory and regulatory purposes. Rather than charging leading academics with onerous, and unproductive, tasks such as rereading research that has already been subjected to peer review, they should be invited to offer an informed commentary on their group, faculty or discipline's metric-driven profile. At university level, aggregated results would give each institution the chance to offer a narrative on the current profile and future strategies for each area of activity. To avoid creating a small industry around the authoring and marking of such a narrative, a sensible word limit should apply just as it would with a student assignment.

Such an integrated, metric-driven system would be imperfect but quicker. The current arrangements are imperfect and take too long, absorb too much time from our most talented academics and run over assessment cycles that outlive most leadership tenures, meaning poor outcomes are always someone else's fault.

In terms of public funding, the choice between further concentrating resources in areas of pre-existing excellence or attempting some form of levelling up is political. In terms of future students, potential research partners or employers, being able to see clearly what each university, and each discipline at that university, has to offer would be helpful. Few institutions excel at research and teaching and knowledge exchange in every discipline. Rather than worrying about that diversity, we should be willing to celebrate specialisation.

Post-pandemic, and post-REF 2021, it would be refreshing to think that our universities were spending less time reporting on, and more time doing, the great work they do.


Robert MacIntosh is head of the School of Social Sciences at Heriot-Watt University, in Edinburgh, and chair of the Chartered Association of Business Schools.
