
REF more burdensome than RAE, pro v-cs state

THE straw poll shows efforts to lighten load have backfired

28 November 2013

Can you hear a prolonged scream reverberating around the corridors of your university’s administrative building? If so, it is likely to be the sound of exhausted, caffeine-addled research office staff putting the final touches to their 2014 research excellence framework submission, due by 29 November.

It wasn’t supposed to be like this.

When Gordon Brown, at the time chancellor of the exchequer, announced in 2006 that the old research assessment exercise was to be scrapped, his intention was for it to be replaced by a metrics-driven approach that would be much less burdensome to administer.

Even after that approach had been rejected by the funding councils as unworkable, they still hoped to ease the burden by, for instance, shortening the template for the environment section and slashing the number of units of assessment from 67 to 36.

However, a straw poll of pro vice-chancellors for research carried out by Times Higher Education last week suggests that, if anything, the labour involved this time round has been greater than in 2008.

According to one Russell Group pro vice-chancellor, the complexity of combining several academic departments into one unit of assessment has helped to make the REF “much more onerous” than the RAE.

The other major factor that has made matters worse is the new impact element – introduced as a sop to the government after the metrics-driven approach was rejected.

Geoff Rodgers, pro vice-chancellor for research at Brunel University, agreed that impact meant that the effort required to prepare his institution’s REF submission had been “substantially greater” than that for the RAE, since “it took some time to understand the detailed requirements”.

However, he added, Brunel “got there eventually” and the resulting case studies “brilliantly” illustrated – to the researchers themselves as well as taxpayers – “the important public benefit” of the research.

Myra Nimmo, pro vice-chancellor for research at Loughborough University – where the effort involved for the REF “has not been less” than in 2008 – pointed out that while most impact case studies had to be written from scratch this time, universities will now begin gathering them “in real time”, which will make future submissions easier.

Some universities have suggested that they will constrain the number of researchers they submit on the basis of how many good case studies they can come up with, but no one THE interviewed had taken that approach.

However, only Professor Nimmo was prepared to say what proportion of staff she envisaged submitting to the REF. This would be somewhere around 85 per cent compared with around 95 per cent in 2008 – although the decline was not strategically driven at either the unit or university level, she said.

All those polled expressed confidence in their submissions.

But the Russell Group pro vice-chancellor said that institutional confidence would be better focused “in the research strategies we have put in place, and [in the belief that] we have retained, hired and supported the very best researchers to deliver them”.

He added: “That being the case…future assessment exercises should perhaps move to a simpler approach which requires every [institution] to submit every researcher rather than encouraging selectivity.”

paul.jump@tsleducation.com

Reader's comments (1)
And why was a metric-based system deemed 'unworkable'? Because it was rejected by academics. We brought this on ourselves. In the sciences, metrics predicted RAE outcomes pretty accurately. Yet when I have made this point (e.g. http://occamstypewriter.org/athenedonald/2013/08/15/why-i-cant-write-anything-funny-about-the-ref/#comment-114639), the idea of metrics is greeted with horror. The humanities may need a different solution, but in the sciences, metrics could provide a far more cost-effective and objective method for evaluating departments - but it seems it is just too simple for many academics to accept.