
Pros and cons of the metrics system

May 16, 2013

The main problem with the research excellence framework is the huge amount of time and money that it consumes, distracting academics from scholarly activities and diverting funds to employ squads of administrators to manage it (“For richer, for poorer”, 9 May).

In a blog post, I reported an analysis showing that you could pretty accurately predict the research assessment exercise’s psychology results by taking an H-index (an index that attempts to measure the productivity and impact of academics’ published work) based on departmental addresses.

In the comments on the post, there is further evidence that this is also true for physics. Computing the H-index takes an individual two to three hours, as opposed to the two to three years spent by legions of people working on the REF.
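For readers unfamiliar with it, the H-index itself is simple to compute: it is the largest number h such that h papers have each received at least h citations. A minimal sketch in Python, using invented citation counts purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4 and 3 times
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```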

At present, we don’t know whether such differences as exist between H-index ratings and RAE panel ratings mean that the latter were better. For both psychology and physics, once you had taken the H-index into account, additional variance in funding levels could be explained by whether there were departmental representatives on the assessment panels.
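The logic here is that of a nested regression: fit funding on the H-index alone, then add panel representation and ask how much extra variance it explains. A minimal sketch with wholly hypothetical numbers (the variable names and effect sizes are invented for illustration, not Bishop’s data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
h = rng.uniform(10, 60, n)             # departmental H-index
on_panel = rng.integers(0, 2, n)       # 1 = department has a panel member
funding = 2.0 * h + 15.0 * on_panel + rng.normal(0, 10, n)

base = sm.OLS(funding, sm.add_constant(h)).fit()
full = sm.OLS(funding, sm.add_constant(np.column_stack([h, on_panel]))).fit()

# Extra variance in funding explained once the H-index is accounted for
print(f"R2, H-index only:      {base.rsquared:.3f}")
print(f"R2, plus panel member: {full.rsquared:.3f}")
```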


This suggests that the additional value of incorporating expert assessment into such evaluations may simply be subjective bias, which does not necessarily indicate any malpractice but could reflect the advantage panel members have of knowing intimately how the panels work.

Most of us don’t like metrics, and I accept that they may not work in the humanities, but I would suggest that if we are not going to use a metric like this, we need to do an analysis to justify the additional cost incurred by any alternative procedure. If the additional cost is very high, then we might decide that it is preferable to use a metric-based system, warts and all, and to divide the money saved between all institutions.


Dorothy Bishop
Via timeshighereducation.co.uk


One of the problems with the use of citations as a measure of research quality is that the method assumes that the higher the number, the greater the quality. Ignoring the possibility of “tit for tat” reciprocity between mates, what if your article is cited and the citation is immediately preceded by “for a total misunderstanding of even these basics, see Mead…”?

In addition, citations don’t work as a measure of anything where the chance of others quoting your work is low: if I were ploughing a relatively lonely research furrow, I’d prefer to take my chances with a subpanel of the great and the good. The fact that no one else is relying on my work because no one else is interested in it (or has even heard of it) says nothing about whether it is good, bad or indifferent. Like impact, citations therefore have a tendency to skew personal research interests into institutional research agendas, favouring more of the greatly populated same, not those who are pushing boundaries and exploring for its own sake.

David Mead
Professor of public law and UK human rights
University of Essex
