If you were offered a 17 per cent pay rise you might assume you were being rewarded for years of ground-breaking research or innovative teaching. But Mitchell Stevens, associate professor and director of the Center for Advanced Research through Online Learning at Stanford University, was given the salary bump while working at a US liberal arts college several years ago for an entirely different reason: so the college could improve its rank in a US university league table.
Speaking as part of a panel discussion on the topic “Rankings uses and abuses: The debate and controversy over measuring excellence” at the Times Higher Education World Academic Summit last week, Stevens said all faculty at the college were given a pay rise because it was an “easy way for the school to move up in the US News rankings”.
The influence of rankings on university strategies and government policies is a controversial topic and the debate threw up the key issue of whether universities should look to directly improve specific areas of their institution that are used as metrics in such league tables.
As Duncan Ross, director of data and analytics at THE, said during the discussion at the University of California, Berkeley: “One of the most frustrating questions I get asked by people at universities is: ‘How can I go up in the rankings?’ which is one of the least interesting questions about the data we have. What I would rather people ask is, ‘What can I do to be better at what I think is important?’”
He said this could include public service, expanding knowledge or increasing enrolment of students from poorer backgrounds.
Lance Kennedy-Phillips, vice-provost for planning and assessment at Pennsylvania State University, who also spoke on the panel, said rankings “help provide context” to what the university does and show how it rates compared with its peers, but they “do not drive operations” at the institution.
He noted that, as a public university, the institution has commitments that are not generally measured by league tables, such as the need to prepare students for the workplace and give them a “well-rounded” educational experience.
He also compared the higher education sector as a whole to the National Collegiate Athletic Association, which he said enabled college sports teams to “co-operate and compete”.
“For the last 25 years, partly due to rankings, [universities have] emphasised the competitive side,” he said. “We have co-operation in our DNA, so I’m looking for ways we might turn on that co-operative aspect.”
But the session also explored how rankings could help promote accountability and transparency in the higher education sector.
Stevens said the “regulatory legacy” of higher education is “not adequate” and if rankings agencies saw themselves as agents of governance and universities were committed to self-regulation, the sector could successfully assess its own value. He said this would prevent universities “thinking reactively towards rankings systems” and both institutions and rankings organisations acting “defensively against government agencies that might undermine our authority”.
“One way to think about rankings is as a peculiar form of international academic governance,” he said. “We don’t have a transnational regulatory body for higher education. We do have rankings. And rankings tell us how we’re doing, how we’re supposed to be doing, what kind of school is like ours, [and] what measures matter.”
It is an idea that is unlikely to be embraced by rankings critics, many of whom argue that league tables already have too much influence over universities. And there is clearly scope to improve the range and quality of metrics that rankings use to assess institutions.
But universities already collect and use a wealth of data to determine their strategies, improve their teaching, plug skills shortages and benchmark themselves against their peers. If rankings are seen as just another tool in their armoury to achieve this – and are properly analysed and understood – then Stevens’ idea makes sense.