Better university outcomes require more than ill-conceived metrics

Australia's move to performance-based funding must be better thought through than England's TEF, say Gwilym Croucher and Kenneth Moore
January 3, 2019

The Australian government believes that making university funding growth conditional on performance measures will improve quality.

It announced in a December discussion paper that assessment will begin in August. The metrics are yet to be finalised, but student attrition, retention, completion and satisfaction have been mooted, alongside graduate employment and higher study rates.

The resemblance to England¡¯s Teaching Excellence and Student Outcomes Framework is not coincidental, but the TEF is not the only existing model. Most US state governments have also used performance measures for public colleges over recent decades, although many have found them difficult to implement and ceased their use.

The idea of rewarding university performance has strong intuitive appeal. When government funds are invested, the public is entitled to ask whether that money is being spent effectively. The discussion paper leaves open the question of whether Australia should focus on basic standards or adopt "stretch" targets, but schemes like this are typically about driving improvements in productivity: getting better outcomes for fewer inputs.

But herein lies the rub. Productivity is a deceptively simple concept. In higher education, the superficial logic of measuring the ratio of outputs to inputs hides devilish complexity.

For instance, research shows that, since the mid-2000s in particular, Australian universities have been doing more with less overall. Depending on the approach used, the increase in productivity over a recent six-year period was between 2 per cent and 11 per cent. But that variability in the estimates reveals the challenge of assessing productivity. We need not only to know how to measure precisely both the inputs and outputs of higher education, but also to agree on their relative value.

Universities undertake many activities, and it is not always straightforward to separate them. Depending on who you ask, you will get a different opinion as to which of teaching, research and community engagement is the most valuable. The Australian scheme, like the TEF, looks as though it will focus on teaching, but the logic of that choice is not beyond dispute: tying more funding to Australia's version of the research excellence framework, known as Excellence in Research for Australia, would also make some sense. Our recent research shows that a group of five Australian universities can sit in either the top 10 or the bottom 10 for productivity gains, depending on how research and teaching activities are weighted in the measurement.
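To see how sensitive such rankings can be, consider a minimal sketch of a weighted output-over-input productivity index. The figures and institution names here are hypothetical, chosen purely to illustrate the mechanism, not drawn from any real data:

```python
# Hypothetical figures for illustration only, not real university data.
# Each entry: (teaching_output, research_output, input_index)
universities = {
    "Uni A": (120, 40, 100),   # teaching-strong
    "Uni B": (60, 110, 100),   # research-strong
    "Uni C": (90, 80, 100),    # balanced
}

def productivity(figures, w_teaching):
    """Weighted output index divided by input index."""
    teaching, research, inputs = figures
    w_research = 1 - w_teaching
    return (w_teaching * teaching + w_research * research) / inputs

def ranking(w_teaching):
    """Universities ordered from most to least productive under a given weighting."""
    return sorted(universities,
                  key=lambda u: productivity(universities[u], w_teaching),
                  reverse=True)

print(ranking(0.8))  # teaching-heavy: ['Uni A', 'Uni C', 'Uni B']
print(ranking(0.2))  # research-heavy: ['Uni B', 'Uni C', 'Uni A']
```

With identical underlying figures, the top-ranked institution flips as the weighting shifts, which is the weighting-sensitivity problem described above.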

It is also important to be clear about exactly what the adopted metrics do and do not measure. For example, while some analyses yield detailed estimates of relative efficiency between universities, they often fail to capture what drives higher education performance. The controversy over the TEF metrics is a case in point. Measuring attrition rates, for example, tells us little about the reasons why students drop out, which may or may not relate to the quality of the educational experience.

Promoting gains in productivity requires more than rewarding narrow performance indicators. Such heavy-handed and simplistic measures invite gaming, and competing portrayals of what they show. And aggregating disputable metrics into league tables, where only top universities are rewarded, creates the conditions to improve scores but not to improve quality.

Performance funding schemes that have been adopted around the world have at best had limited effectiveness, and it would be a foolish minister who did not proceed cautiously.

The Australian government has invited public submissions on its proposed scheme, which is welcome. However, if it is serious about improving performance in higher education, it needs to support a broader public discussion around how to encourage improvements in productivity. More work needs to be done to understand the linkages between inputs, processes, outputs and quality. That way, confidence could be built that, despite the challenges of assessment, students and the public were getting the best out of their universities.

Gwilym Croucher is a researcher at the University of Melbourne's Centre for the Study of Higher Education. Kenneth Moore is a doctoral student at the centre.

POSTSCRIPT:

Print headline: The devil is in the details

Reader's comments (1)
With almost no institutional moderation of undergraduate work, the decline in undergraduate academic skills goes almost unnoticed in Australia. If 'student satisfaction' is the yardstick of institutional success then we might as well give the game away.