
Can the REF really address the research environment crisis?

The 2028 exercise's measurement of the quality of research environments will account for a quarter of overall marks. But what exactly does quality look like? Can it really be measured? And are there political risks in diluting the REF's focus on outputs? Jack Grove reports
August 31, 2023

On a quiet day in 1979, Martin Chalfie was studying microscopic nematodes at the Laboratory of Molecular Biology (LMB) in Cambridge when word got around that footage from Voyager 2's close encounter with Jupiter had just arrived in Cambridge. In response, "about 35 molecular biologists piled into cars to go to the astronomy department", recalls the Nobel laureate. "Everyone was excited about science, no matter the discipline."

For Chalfie, now based at Columbia University, that experience captured the essence of why the LMB – often known as the "Nobel factory" – was outstanding. "It gave people who'd proven they could do interesting things the freedom to explore other questions – without demanding outcomes," he says.


Chalfie is doubtful that such a vibrant research culture could be captured in the form of metrics, outputs or key performance indicators. That is the task, however, that Research England, acting in association with funding councils for Scotland, Wales and Northern Ireland, has set itself for the next Research Excellence Framework, scheduled for 2028. UK universities will be asked to submit data on as yet undecided indicators, at both institutional and unit level, in a "people, culture and environment" section of the exercise that will make up 25 per cent of scores, up from 15 per cent in the 2021 exercise.

At the same time, it is hoped that the focus on metrically "demonstrable outcomes" will see a downsizing in the amount of work involved for institutions and assessors. The University of Oxford's environment submission in 2021, for instance, was a densely written, numbers-heavy tract running to 61 pages, supplemented by an 11-page institutional statement and official data on research income and doctoral degree completions, on the basis of which panellists were asked to assess the "vitality" and "sustainability" of each unit's research environment. In 2028, according to the Future Research Assessment Programme (FRAP)'s initial decisions, published in June, a "more tightly defined questionnaire-style template" will attempt to distil the parameters of a healthy research culture more economically.

However, concerns have been expressed that this effort to codify something as contested and hazy as "culture" runs risks, both political and institutional – and might even backfire by incentivising institutions to cut initiatives for which they do not feel they will receive credit.

"If we want to improve research culture, the REF isn't the right way to do it," says Robert Insall, professor of mathematical and computational biology at the University of Glasgow, pointing to the risks of ramping up competition between institutions. "Research culture is very easy to misrepresent – and therefore game – so I fear we'll see a lot of dissembling and distortion," continues Insall, who fears the emphasis on research culture will encourage greater spending on evidencing prowess in this sphere, rather than doing research itself.

"One of the things that definitely represents 'bad research culture' is forcing researchers to complete paperwork saying how great they are – which, I imagine, is what will happen," he says.

For many academics, however, the increased focus on measuring and improving research culture is hugely welcome. "The REF shouldn't just be about what you've done but where you're going in the next 10 years – how you're involving technicians, PhDs and undergraduates in research," says Carsten Welsch, head of physics at the University of Liverpool. "The REF promotes a very specific way of doing research, in which excellence is defined by 4* ['world-leading'] outputs by individuals. This isn't doing enough to encourage collaborative research."

Moreover, the REF's focus on individuals' outputs makes little sense as those individuals do not benefit personally from the £1.5 billion in "quality-related" (QR) research block grants that are distributed annually on the basis of the REF, says Steve Fuller, professor of sociology at the University of Warwick, who writes on research policy. "The REF was never about funding individuals – it funds institutions. It should recognise how people relate to each other and whether that institution's research culture is working," he says.

But that brings us back to the question of what a good research culture actually looks like – and how it can be measured without adding to the hefty £471 million price tag attached to REF 2021. Neither, it seems, will be simple.

"Research culture has been a very useful open term because it can mean different things to different people, many of which are contradictory," says one UK professor, who prefers not to be named. For Paul Nurse, the director of London's Francis Crick Institute, the professor speculates, a good research culture is about "having wider staircases so scientists can have conversations and interdisciplinarity can thrive. For others, it's about having a high-resource, low-admin environment. But for others it is about having robust systems and good data collection to ensure you don't have bullying or racism."

A review commissioned by Universities UK, in partnership with UK Research and Innovation (UKRI) and the Wellcome Trust, laid out the multiple facets of what might be considered research culture. It identified 12 different UK concordats, charters and agreements covering research practices, relating to issues including open data, animal research, research careers, gender equality, race and responsible metrics use. The review, by Oxford-based consultancy Oxentia, called for a "concerted collective effort" to agree shared values that might lead to a more cohesive and streamlined approach to culture.

"Individual institutions are making progress on defining what research culture means but we're quite far away from bringing them together, based on that review," says Elizabeth Garcha, head of research quality and impact at the University of Leeds. And those institutional differences on what makes good culture matter because the REF will need to decide what it rewards, explains Garcha. "There is no agreed position, for instance, on whether institutions should submit outputs by teaching-focused staff. If 10 per cent of an institution's outputs come from those without direct research responsibilities, is that a good or bad thing? It might be a sign of an inclusive research culture, but, equally, is it appropriate to assess research outputs of those who are not paid to do research – and to win money from their efforts?"

Other potential metrics are similarly ambiguous. Does having high numbers of PhD students or fixed-term postdocs signify that a lab is thriving, or might it indicate an excess of precarity and a potential shortage of mentoring? Does a solid track record of helping PhDs into academic careers evidence a job well done, or might it disguise a failure to facilitate good outcomes for doctoral graduates better suited to other professional spheres?

For Garcha, Research England's current support for research culture via institutional allocations of up to £1 million a year offers a degree of autonomy that works well for universities. At Leeds, an Enhancing Research Culture grant has supported myriad grassroots and university-wide initiatives – overseen by a new dean for research culture – as part of a three-year "research culture roadmap" that has included support for innovative practice and reviews of academic promotions, employment contracts, ethics and responsible metrics use.

The University of Sheffield, meanwhile, used its £800,000 allocation in 2021-22 to support initiatives including establishing writing retreats for female and minority scientists, team-building days for research leaders, drop-in sessions to discuss workplace issues and efforts to promote a sex worker-inclusive research culture.

"The REF isn't known for fostering risk-taking approaches, so universities might become much more risk-averse" as research culture becomes codified in the 2028 exercise, reflects Garcha: "Universities are finding out what works – and what doesn't. You need space for things to go wrong – although not too wrong, hopefully – and share that learning. The REF isn't the sort of place where that's going to happen." Her concern is that research culture becomes less about exploring "what actually works" and more about "window dressing stuff for the REF".

Even if UK universities could unite on what good research culture looks like and agree indicators to measure it, comparisons would still be tricky, continues Garcha. That is because the 2021 REF ended the previous requirement for units of assessment to submit lists of the research-active staff whose work they intended to include in their submission: instead, units needed to submit 2.5 outputs for every officially designated full-time equivalent researcher. The new rules for 2028 will permit institutions to draw these outputs from anyone with a substantial link to the institution, including adjuncts, PhDs and even undergraduates. However, "if you don't have staff lists for any given unit, who are you even talking about when you're collecting this data and attempting to make it comparable to other units?" asks Garcha.

"When you're looking at small units with high levels of non-disclosure [of personal information], it might be difficult to draw any conclusion," she continues. "You'd need to be flexible with indicators at unit level, but if you're trying to compare certain things and award money on this basis, that's a tricky thing to reconcile."

Potential indicators "suitable for the whole sector" are currently under consultation, and the FRAP has suggested that these could include EDI data (which is already submitted to the Higher Education Statistics Agency), "quantitative or qualitative information on the career progression and paths of current and former research staff", "data around open research practices" and "outcomes of staff surveys". However, all of these arguably raise issues of fairness and rigour.

"If we're asked to present survey data from PhD students on well-being, for instance, that's a tricky one," explains one Russell Group research leader. "Much of the satisfaction score might concern the level of PhD stipend, which isn't set by us but by UKRI, so is it right to use this?"

Comparing staff surveys from different institutions may also run into difficulties, says Amanda Bretman, dean of research quality at Leeds. "If you're trying to measure progress internally or the impact of a particular policy, surveys can be helpful. For other purposes, I'm not sure how useful it would be," she reflects.

Of course, all of these questions have lingered in the background ever since "environment" began to be assessed in the REF several rounds ago. But increasing environment's weighting to 25 per cent – the same as impact – and clarifying its scoring system is likely to bring new scrutiny, particularly when the increase means that research outputs now count for only 45 per cent of scores (the other 5 per cent is attached to a statement describing collaborative activities and how research outputs contribute broadly to the discipline in question). That is down from 65 per cent as recently as 2014 and 70 per cent in 2008.

The impact element of the REF was introduced in 2014 with a view specifically to currying political favour (replacing indicators of esteem and accompanied by a reduction in environment's weighting from 20 to 15 per cent). By contrast, many UK research leaders are privately concerned that reducing the weighting of outputs to below 50 per cent might undermine the shaky support in government for handing so much QR money to universities on a no-strings-attached basis.

"Dialling down research outputs to just 45 per cent is a super-risky manoeuvre for QR funding," a source tells Times Higher Education. A request to reward universities for scoring well on indicators that have an unproven relevance to economic growth will not play well with sceptical ministers and Treasury advisers, he predicts.

"The FRAP's logic is, 'A university's research might be rubbish but if it's a nice place to work, it should get loads of money,'" he adds. "I wouldn't expect the Treasury to kick up a fuss now, but if these proposals went ahead, I can easily see it saying, 'We're not funding that', [with the result that] QR is quietly dialled down, perhaps by 25 per cent."

On the other hand, those arguing in favour of turbocharging the research culture agenda often refer to the government's own R&D People and Culture strategy as a mandate for change.

That document, which called for a "positive, inclusive and respectful culture" in research and an end to "bullying and harassment", was championed by then-science minister Amanda Solloway, explains Paul Nightingale, professor of strategy at the University of Sussex, who works on science policy. "It was a personal thing for Amanda, who had listened to complaints from academics and, having a background in industry, viewed these things as unacceptable."

Moreover, there was a wider concern, shared by UKRI chief executive Ottoline Leyser, that "the competitive model of funding that exists in the US and UK discriminates against women and other marginalised groups," says Nightingale. "Without them, the science talent pipeline isn't working properly, in their view. It's not a surprise that these changes have arrived – with issues of research culture now being addressed by European science and the National Institutes of Health in America."

Yet while those problems may still exist, Solloway has long since been replaced in the science brief by George Freeman, a former venture capitalist from the Cambridge life science industry, whose oft-repeated catchphrases of "science superpower" and "innovation nation" speak to different priorities. Freeman's newly created Department for Science, Innovation and Technology (DSIT) recently sent a letter to Research England noting that the "excellence of the UK's research base…is crucial in supporting sustainable economic growth and enhanced productivity".

And although Labour's policies on science are unclear, fixing perceived dysfunctions in academic reward systems may struggle to gain priority over investments in AI and biotech research. "Freeman will probably be gone after the next election, but his ideas that science should drive growth are fairly orthodox for Whitehall," a government science adviser tells THE.

And he agrees that "this research culture agenda is a very hard sell to the Treasury. They'll want to know if there is any economic pay-off on improving research culture – the evidence isn't great – and what the opportunity costs are. There are already some concerns in DSIT that UKRI [which was created in 2018 to replace Research Councils UK] isn't delivering and that British science isn't keeping up with more vibrant science systems, so it's an easy target, especially if it starts to go big on diversity."

Creating new systems to manage the research culture requirements of the REF may also exacerbate concerns that UK research is too bureaucratic, the adviser adds. "There are some horrific problems regarding bullying, diversity and researcher precarity, but the last thing you want is a university writing a vanity PR puff piece about how wonderful they are. That's likely to add another layer of hiddenness to this problem and reinforce bad behaviours."

Another former government science adviser, who also prefers to speak anonymously, agrees. "Paperwork assessments will just incentivise a big growth in the HR complex coming up with externally legible signals of a good culture, such as hosting workshops and well-being courses, rather than tackling fundamental root causes as to why research culture is broken," he says.

"The REF has arguably been a major contributor to the culture problems, as it incentivises tangible outputs over on-the-ground [working conditions], but the FRAP has already been hijacked by the identity politics agenda. This will further undermine faith in the ideological impartiality of the research base, which is a big issue [for politicians] in private, including for some in Labour circles," the ex-adviser adds.

The debate on REF 2028 is only just starting, with the consultation ending on 6 October. Yet the fact that many researchers and administrators are reluctant to speak out publicly against plans that they fear will damage UK research speaks to the perception that the general direction of travel has high-level support in UKRI and is not up for discussion.

"No one wants to be seen as trivialising the problems that exist within UK research, but there's a real question of what we measure," a research leader at a Russell Group university tells THE. "Do we think open science, for instance, is a thing to value – and, if so, by how much?"

Of course, there were similar high-stakes questions around how impact would be measured in the 2014 REF – so much so that the incoming science minister, David Willetts, paused the whole process by a year in 2010 to assure himself that the assessment methodology was robust. And even after the detailed methodology was announced, there remained a high level of anxiety within universities about how exactly to play the impact game. Regarding the details of the revised environment assessment for 2028, the Russell Group leader is confident that universities, again, "will work these things out. But there is a huge amount for Research England to do, and the fear is that [the detailed guidance] will be dropped on our heads at the last minute, as REF deadlines approach."

While the REF changes may drive improvements in the working lives of researchers, then, it seems likely that the coming months and years will make those of research administrators rather more fraught.

Reader's comments (4)
A good and even-handed piece. I am not convinced that reducing the proportion of the REF for outputs is a good idea. I do think research culture/environment could be useful, especially if it is linked to a template. The narrative environment statements seemed rather 'soft' and allowed some places with bad research cultures to nevertheless get good scores. However, I worry, like many in the article, about the level of bureaucracy and outward-facing performativity to 'demonstrate' good research culture/environment (as defined by the REF), which will not always reflect the reality of working in institutions or departments.
It is rather peculiar to expect there to be a strong research culture in UK institutions if the whole system has been built around creating a market for academics and rewarding individual performance through promotions and financial incentives. I hold positions in departments in both the UK and continental Europe, and in the latter there is no need to discuss how to build a research culture because it is already present. In the UK there is forever talk about "how can we build a stronger research culture" but little action, due to these parameters (people moving around to shop for better jobs and being rewarded for individual performance).
The elephant in the room is how academics who treat students badly are often rewarded with non-student-facing roles. They are then able to focus more on their pet research subjects. End result: they contribute to higher workloads for those staff who treat students well (students run to them for help), but in so doing create a toxic REF environment at the university in which only they thrive and progress to professorship.
If you want to understand the research environment at an institution, why not survey the staff and require that as one of the submissions? Almost all institutions do this already, so this shouldn't be any additional burden. However, surveys should be standardised and delivered across all UK universities, just like the NSS. These scores should then be made public, just as in the case of the NSS. This would be the National Academic Staff Survey (NASS) or, even better, the National University Research Staff Survey (NURSS), so that non-academic (professional services) staff who are also part of the research environment can participate.