
The metrics of metrics

28 February 2008

Critics of the Cranfield study of correlations between 2001 research assessment exercise scores and citation counts (Letters, 21 February) ignore two important points. The first is that a metric based entirely on research outputs makes more sense than a measure that uses research inputs if the aim is to measure the quality rather than the cost of research. The second is that the metric used (average number of citations per article submitted to the 2001 RAE) measures in a very transparent way the impact of the published work that was used by RAE panels to assess research quality.

It was therefore fascinating to see that this objective, metric-based assessment led to such different results from the rather less transparent system preferred by the Higher Education Funding Council for England at that time, that is, peer review combined with measures of research inputs. Fewer than a quarter of subjects showed a good correlation with 2001 RAE ratings, and even those with good correlations had significant anomalies.
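As a minimal sketch of the comparison at issue, the following Python computes citations per submitted article for a handful of hypothetical departments and correlates that metric with their 2001 RAE grades mapped onto a numeric scale; every figure and the grade mapping are invented for illustration, not taken from the Cranfield study.

    from statistics import mean

    def citations_per_article(counts):
        """Average citations across the articles a department submitted."""
        return mean(counts)

    def pearson(xs, ys):
        """Pearson correlation coefficient, computed without external libraries."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical departments: 2001 RAE grades mapped onto a numeric scale,
    # paired with the bibliometric score (all figures invented).
    rae_grade = [5.5, 5.0, 4.0, 3.0]        # e.g. 5* -> 5.5, 5 -> 5.0, ...
    per_article = [
        citations_per_article([14, 9, 11]), # one department's submitted articles
        citations_per_article([8, 12, 10]),
        citations_per_article([5, 7, 6]),
        citations_per_article([3, 4, 5]),
    ]

    print(f"Pearson correlation: {pearson(rae_grade, per_article):.2f}")

A subject where the metric tracks panel judgments well would show a coefficient near 1; the Cranfield finding is that most subjects did not.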

I should declare an interest here. In chemistry, Huddersfield and several other post-Robbins universities, including Warwick and York, performed substantially better on the Cranfield bibliometric measure than predicted by their RAE scores, even though there was a good correlation overall.

Other metrics, such as total number of citations per department or research grants awarded, tell us more about the size of a unit and the cost of its research than about research quality. Research inputs do help inform assessment of funding needs but, at least in science, technology, engineering and medicine subjects where journal publication is the norm, published outputs should provide the primary means of measuring quality.

Rob Smith
Professor and dean of applied sciences
University of Huddersfield
