
‘Existential risk’ to research from failure to demonstrate impact

Sector leaders quizzed in Elsevier survey back shift to more holistic methods of evaluating scholarship
November 7, 2023

Funding and public support for academic research are in peril if its benefit for wider society cannot be communicated more effectively, sector leaders have warned.

In a survey of 400 global academic leaders, researchers and heads of funding bodies, released by publisher Elsevier at this week’s Times Higher Education Innovation & Impact Summit in Shenzhen, China, 68 per cent agreed that the inability to demonstrate research’s impact “could become an existential risk”. Sixty-six per cent agreed that public pressure for government-funded research to make a “tangible contribution” to society would further intensify in coming years.

A key problem is that assessments of research quality continue to lean too heavily on reviews of academic outputs such as journal papers and other publications. Barely half of the respondents, drawn from Australia, New Zealand, the Netherlands, the Nordic countries, Japan, the UK and the US, said they felt that existing systems of research evaluation “successfully incentivise work that can make a meaningful difference to the wider world”.

The UK has led the way on assessing impact, incorporating impact case studies into its Research Excellence Framework – which governs the distribution of £2 billion of public funding annually – from 2014, and increasing its weighting to 25 per cent for the 2021 exercise. Other funders, for example in Australia, have started assessing impact but are yet to introduce funding incentives.

There is a clear appetite to go further, with the majority of respondents to the Elsevier survey – 68 per cent of academic leaders, 58 per cent of researchers and 72 per cent of funders – agreeing that there was “now a clear imperative for a shift to a more holistic approach to research evaluation”.

But survey respondents listed significant barriers to assessing research impact, with 56 per cent citing a lack of common frameworks or methodologies. Forty-eight per cent said they were concerned about a lack of consensus on what constitutes impact, and 45 per cent mentioned the lack of resources for change – while 27 per cent said that resistance from institutions and researchers was an obstacle.

Nick Fowler, Elsevier’s chief academic officer, said the results reflected a broader desire across academia to better evaluate research – and communicate its significance to the public.

“If universities are not seen to be delivering benefits for society, their funding will be at risk,” Dr Fowler said. “The public might say: ‘Why do we need these universities? Why don’t we fund healthcare [instead]?’”

James Wilsdon, professor of research policy at UCL, who is quoted in the Elsevier report, told THE that existing research systems were the subject of “growing frustration”, with many believing them to be “misaligned with the things that matter most in research”.

“We recognise and reward publication in certain journals and the citations that follow, but pay little attention to the teamwork, collaboration, infrastructures and support that enable those publications,” said Professor Wilsdon. “Added to which, conventional approaches can incentivise poor research practices, encourage less creativity and limit the diversity of what gets researched and who succeeds in a research career.”

When respondents were asked what factors they would include in a reformed research evaluation system, educational and academic impact still figured highly, listed by 54 per cent and 47 per cent of respondents respectively. But environmental, societal and economic impact were all cited by more than 40 per cent of respondents. Commercial impact was ranked lowest, with only one in five favouring its inclusion.

emily.dixon@timeshighereducation.com

Reader's comments (2)
The issues here are the same as they have always been. 1. If impact is viewed as something that happens directly, then 'upstream' research (e.g. pure maths) will be pushed aside because it is so difficult and so long-term to track impact a long way downstream. This would be a mistake even assuming a narrowly pragmatic view of impact; even more so if we allow slow-burning impacts on other aspects of a whole culture. 2. Impact may not be 'popular'. The same arguments as might be deployed against, for example, arts museums, can be deployed against much research. Is this a good idea?
We know that we've made a mistake by introducing all sorts of short-term indicators of research productivity, such as citations, number of publications and the h-index. But the suggestion to replace these with another short-term measure, that is, direct impact, is just a failure to learn from our mistakes. I also note that promotion procedures reward fame and influence in the academic community, which is yet another mistake, as it rewards toxic and narcissistic behaviour. I mean, some promotion criteria are just straight out of the DSM.