
REF view too narrow, partially cited or not (1 of 2)

September 29, 2011

My colleague Andrew Oswald suggests that the use of journal league tables (based on citations and impact factors) would be better than the judgement of quality offered by research excellence framework panels ("Data be damned: REF's blueprint for systemic intellectual corruption", 22 September).

When I asked him to give me the names of the journals my department ought to be targeting, he replied that there are different methodologies that give different results, so he did not have a preferred list. The choice of how we ought to count citations and calculate impact factors is therefore subjective. I suspect that what he is advocating is no more evidence-based than what he is criticising.

But the point is not just a theoretical one. The fact is that the REF is already having an enormous impact. It is not, as it describes itself, a "system for assessing the quality of research in UK higher education institutions"; if it were, this might lead us to expect judgements to be made about each institution in a systematic survey of what academics have actually achieved. In practice, the REF panels get to see only whatever outputs institutions choose to submit. Making these choices is a serious matter that carries significant risk if universities get them wrong.

The statement in the REF guidelines, "No panel will make use of journal rankings or journal impact factors in the assessment", cuts little ice in the real world since few believe it to be true: as every academic knows, many departments, especially in ambitious universities, are deciding their REF submissions precisely on the basis of these factors. Many maintain explicit lists of journals with star ratings, or which are identified as "REFable". This simplifies the management of research because the outputs do not need to be read - a glance at the journal title is sufficient.

The REF lacks credibility. Because it is a two-stage game, in which institutions select outputs before the panels ever see them, the question arises of what happens to the non-submitted research that goes unread. Often this is innovative or interdisciplinary, yet it is marginalised because it appears in the "wrong" journals.

Dennis Leech, Professor of economics, University of Warwick
