Research intelligence - Dawning of a new Era

Australia's research assessment programme is causing controversy, especially the rankings. Paul Jump investigates
September 30, 2010

As the Australian Labor Party struggled to hold on to power after August's inconclusive election, the country's researchers were on tenterhooks about whether their submissions to a controversial new research assessment programme would count for anything.

Although Labor has now formed a government, it would not have been the first time a change of regime had heralded a U-turn in research assessment policy.

The current Excellence in Research for Australia (Era) project was devised only after Labor fulfilled a 2007 election promise to scrap its proposed predecessor, the Research Quality Framework (RQF), created under the Liberal-National coalition. The RQF was scrapped before it began, owing to concerns about its cost and its inclusion of an assessment of impact.

According to a spokeswoman for the Australian Research Council (ARC), which is coordinating Era, the Labor government wants a mechanism that is "streamlined and indicator-based".

Impact has been replaced by indicators of "esteem", such as competitive research fellowships, and "applied measures" such as the number of patents granted. Total research income is also measured, and there is provision for peer review of non-journal outputs such as books and book chapters.

The most controversial metric is the ranking assigned to every journal, which will be used to assess the quality of published research. Despite extensive consultation, these rankings remain hotly disputed: perceptions that Australian journals have been favoured linger, and some disciplines complain they have no top-ranking journals.

"Everyone has a story of egregious injustices in classifications," said Mark Gibson, senior lecturer in communications and media studies at Monash University.

Lyn Yates, pro vice-chancellor of research at the University of Melbourne, noted that a hierarchy of journals makes little sense in many non-scientific fields, such as her own field of education. She also worried that researchers, all of whom must be assessed under Era, may tailor their future research towards "top" journals.

Graeme Turner, director of the Centre for Critical and Cultural Studies at the University of Queensland, and chair of the creative arts and humanities panel during a pilot assessment last year, said there was evidence that this was happening. "But this is in defiance of the advice from the ARC and senior academics with a knowledge of the Era process such as myself," he added.

He said that academics had "fetishised" the rankings "to a ridiculous extent" and that the eight research evaluation committees, which are currently assessing submissions, would see the ranking as a work in progress. "They are never going to believe what (the list) indicates over what they know themselves," he said.

Dr Gibson said he was "moderately reassured" by the ARC's insistence that researchers would be measured only against others in the same field. "But whether or not it was intended, there is a growing tendency within universities to value researchers according to the grade of journal they are publishing in," he added.

Disciplines vs departments

Claire Donovan, lecturer in sociology at the Australian National University and former chair of the RQF working group on impact, lamented Era's metrics-based approach and claimed the indicators had been chosen primarily for ease and cheapness of collection.

She also criticised Era's use of "disciplines" rather than departments as the unit of assessment, arguing that the failure to mirror universities' internal structures created considerable logistical difficulties in organising submissions.

"The tail is wagging the dog," she said.

However, Linda Butler, an Australian National University professor who advised the ARC on the bibliometric component of Era, denied Dr Donovan's claim that the approach had been chosen to minimise the cost of paying data provider Scopus to gather relevant data. Professor Butler said it had arisen from the government's desire to assess the strengths and weaknesses of different Australian disciplines against international standards.

Professor Turner regretted the discipline-based approach but said any attempt to measure research outputs was preferable to Australia's current focus on research inputs such as competitive grant income.

The committees, which have total discretion over how to use the metrics, are due to report early next year. A spokeswoman for the ARC said the results would inform the future allocation of research funding, and may also influence funding for PhD places.

Professor Turner and Professor Yates agreed that the "real battles" would be between university departments over how much of the funding for particular disciplines should come to them.

"The potential of the disciplinary codings to create dissension and jealousies within universities, not just between them, must count as a new contribution of research assessment to the difficult and ever more competitive environment in which research is conducted," Professor Yates said.

paul.jump@tsleducation.com
