Snowball Metrics: no pinch of salt needed

<ÁñÁ«ÊÓƵ class="standfirst">John Green is the man behind a researcher-led effort to cook up sound institutional comparisons
July 17, 2014

Source: Getty

Crystal-clear concoctions: Snowball Metrics has crafted recipes that anyone can use to whip up comparisons and slice and dice data on various facets of research

Vice-chancellors' ever greater focus on rankings and bibliometrics suggests that a good benchmarking exercise might just constitute the most fun they ever have without taking their clothes off.

But their fascination with how their institution measures up against others is not merely idle. According to John Green, a life fellow of Queens' College, Cambridge, and the retired chief coordinating officer of Imperial College London, senior managers are all faced with questions like: "Why am I losing income in neuroscience? Is it because there is less money in the system, or because I am losing market share to Cambridge?", "How should I decide whether to invest in photovoltaics or nanoscience?" and "I am looking to collaborate; how do I know who is truly strong in photovoltaics?"

"They need metrics to understand all that," he says.

The problem is that a good benchmarking exercise is not easy to come by. Before they halted the practice over fears that it might be anti-competitive, the "big five" UK universities by research income used to meet to compare their success ratios for funding applications, Green says. "But Oxford never counted a funding application as lost for two years, whereas Imperial did so after six months if they hadn't had an award letter. So we were not comparing apples with apples."
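The discrepancy Green describes is easy to reproduce. Below is a minimal sketch in Python, with invented application data and the two counting conventions reduced to a single cutoff parameter; it illustrates the definitional problem, and is not drawn from any actual Snowball recipe:

```python
from datetime import date

# Hypothetical applications: (submission date, award date or None if no award letter yet)
applications = [
    (date(2013, 1, 10), date(2013, 5, 2)),   # funded
    (date(2013, 3, 15), None),               # pending for ~10 months
    (date(2012, 6, 1), None),                # pending for ~19 months
    (date(2013, 2, 20), date(2013, 9, 30)),  # funded
]

def success_ratio(apps, today, months_until_lost):
    """Count an unawarded application as lost only once the cutoff has passed."""
    cutoff_days = months_until_lost * 30  # rough month length for the sketch
    won = lost = 0
    for submitted, awarded in apps:
        if awarded is not None:
            won += 1
        elif (today - submitted).days > cutoff_days:
            lost += 1
        # otherwise: still pending, excluded from the ratio entirely
    return won / (won + lost) if (won + lost) else float("nan")

today = date(2013, 12, 31)
# An "Imperial-style" six-month cutoff versus an "Oxford-style" two-year cutoff
print(success_ratio(applications, today, months_until_lost=6))   # 0.5
print(success_ratio(applications, today, months_until_lost=24))  # 1.0
```

The same four applications yield a success ratio of 0.5 under a six-month cutoff and 1.0 under a two-year one, which is precisely the apples-and-oranges problem a shared recipe is meant to eliminate.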

Other problems with existing methods of comparison include the various formats in which compliance bodies require data to be prepared, and the tendency of the figures to be outdated by the time they are published.

ÁñÁ«ÊÓƵ

Feeling that it was time he "gave something back", Green decided four years ago to do something about it. His aim was to create a set of "bottom-up", universally agreed research-related metrics, complete with standardised "recipes" for how they should be calculated, including the data sources available for doing so. Their usefulness, he hoped, would lead to their catching on around the world in a snowball effect, hence his choice of name: Snowball Metrics.

Green assembled and chaired a steering group of eight UK research-intensive universities, including Imperial, University College London and the universities of Cambridge and Oxford. He enlisted Elsevier, which owns the Scopus citation database, to manage the project on a pro bono basis and to create pilot tools to test whether the recipes the group came up with were "cookable". The pilot tools also allowed the institutions to "slice and dice" data according to a number of "denominators", such as theme, discipline and department, or to normalise for factors such as number of researchers.
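As a toy illustration of what that slicing and normalisation involves, here is a minimal sketch; the data, the discipline groupings and the choice of researcher FTE as the denominator are all invented for the example rather than taken from the Snowball recipes:

```python
# Hypothetical per-department records: (department, discipline, outputs, researcher FTE)
records = [
    ("Physics",      "Physical Sciences", 420, 60.0),
    ("Chemistry",    "Physical Sciences", 310, 45.5),
    ("Neuroscience", "Life Sciences",     250, 30.0),
]

def normalised_by_discipline(rows):
    """Aggregate raw output counts per discipline, then divide by total researcher FTE."""
    totals = {}
    for dept, discipline, outputs, fte in rows:
        out_sum, fte_sum = totals.get(discipline, (0, 0.0))
        totals[discipline] = (out_sum + outputs, fte_sum + fte)
    return {d: out / fte for d, (out, fte) in totals.items()}

for discipline, per_fte in normalised_by_discipline(records).items():
    print(f"{discipline}: {per_fte:.1f} outputs per researcher FTE")
# Physical Sciences: 6.9 outputs per researcher FTE
# Life Sciences: 8.3 outputs per researcher FTE
```

Swapping the "denominator" (department for discipline, income for FTE, and so on) is just a change of grouping key and divisor, which is what makes an agreed recipe workable across very different institutions.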

A crucial part of the arrangement is that each institution only gets to see others' cooked, rather than raw, data, not least in order to avoid any potential for falling foul of competition law. However, for Green, a key feature of the recipes is that they are free and "supplier agnostic": anyone, not just Elsevier, can cook them, using commercial tools or even just spreadsheets. (He admits that such self-cooking presents an opportunity for dishonesty, but feels that only once the metrics are widely adopted will it be appropriate to consider setting up a standards agency to "police" universities' kitchens for potential rats.)

Elsevier has also pledged to build a free Snowball Metrics Exchange, which will allow universities to form "benchmarking clubs". This "I'll show you mine if you show me yours" ethos is another key aspect of how Green sees the recipes being used.

'Recipe' book gives user-friendly tips

The first Snowball Metrics Recipe Book was published in 2012, with 10 recipes relating to areas such as research funding and output. And so great has been the universities' enthusiasm for the unexpected insights that cooking the recipes has provided that a second edition was published at the end of last month, containing a further 14 recipes relating to factors such as collaboration, societal impact, intellectual property and spin-offs.

Crucially, the new edition incorporates six metrics that have also been adopted by a parallel working group of seven universities in the US, including the University of Michigan, Northwestern University and the University of Illinois at Urbana-Champaign. According to Green, the Americans' original fears that differing nomenclatures and data sources would scupper alignment of US and UK recipes have been overcome by some minor tweaks to their wording.

The sense of a global snowball gathering momentum is reinforced by interest from a group of Australian and New Zealand universities (including the universities of Queensland and Auckland), the Japanese RU11 Group of research-intensive institutions and the Association of Pacific Rim Universities.

Although the US universities will continue to develop their own recipes before comparing them with the UK ones, which Green describes as an "arduous process", he hopes that once they are happy with their formulations, the adoption of the UK-US recipes as global standards will begin apace.

The prospect of global adoption is also boosted by the enthusiasm of funders for a robust way to compare institutional strengths in various disciplines. The "holy grail", according to Green, is to perfect a way to link research inputs with outputs via digital "fingerprinting" of relevant documents and data.

Future role in REF mooted

Of course, all this is also potentially of great relevance for the research excellence framework and, sure enough, Green has made a submission to the Higher Education Funding Council for England's independent review of the role of metrics in research assessment. It says that if Hefce chooses to adopt metrics for the REF, it should take on board what has been achieved with Snowball Metrics and avoid "reinventing the wheel".

Green also suggests Hefce, or the Higher Education Statistics Agency, as the ideal "neutral body" that he would like eventually to take ownership of the metrics and develop them further. However, he believes that metrics should only "play a part" in research evaluation alongside peer review, and he would "hate" to see Snowball Metrics "grabbed" for the 2020 REF.

"The great thing about the REF is that, in a rather subtle way, the academics buy into it, partly because of [their role in] the peer review bit," Green says.

"I hope to grow that same trust in Snowball. I don't ever want to get into the space of saying these metrics would be helpful in the REF: the sector has to work that out for itself."

paul.jump@tsleducation.com

<ÁñÁ«ÊÓƵ class="pane-title"> Reader's comments (1)
Oh dear, here we go again. No doubt universities will manipulate the totally arbitrary weights attached to various inputs to get any answer they want. It would be a good idea if academics were to get on with real science (or whatever) rather than wasting time and money on yet more silly metrics.
<ÁñÁ«ÊÓƵ class="pane-title"> Sponsored
<ÁñÁ«ÊÓƵ class="pane-title"> Featured jobs