I went into the lion’s den last week, accepting an invitation to join a British Academy policy forum titled “League Tables in the Public Sector”.
Among the 30 leading social scientists and policymakers assembled to discuss the value of rankings, I was the only person in the room involved in producing them.
The British Academy’s policy centre had set up the project to examine the often negative effects of league tables and performance indicators when applied to policing, schools and higher education.
Although I had few allies in the room, I welcomed the opportunity for detailed engagement.
The meeting was held under the Chatham House Rule, so I cannot attribute contributions to named individuals (a report will be released later in the year), but the case made against many tables was strong.
In higher education, it was claimed, domestic league tables fail to capture the diversity of the UK’s universities; their measures reflect universities’ recruitment policies and staffing rather than the education they deliver; they employ whatever data happen to be available, relevant or not; and they lack any clear concept of quality.
I agree with much of this criticism. The most damning case against domestic rankings I’ve heard was made by a vice-chancellor whose institution was placed near the bottom of the league tables. He said he could lift it quickly by at least 10 places if he closed access courses, raised entry requirements and awarded more first-class degrees. But, of course, none of these things was remotely in the interests of the university or its mission to serve its community.
The danger of rankings creating perverse incentives is a real one.
To keep our efforts as useful and honest as possible, we are constantly striving to fine-tune our tables.
The Times Higher Education World University Rankings, as their title suggests, are global, not domestic, rankings. They rank only 200 world universities – about 1 per cent of all institutions worldwide – all of which tend to have similar missions: almost every one aims to recruit from a global pool of the best students and staff and to push the boundaries of knowledge with cutting-edge, often cross-disciplinary and international, research.
Times Higher Education spent most of 2010 listening to critics in order to develop an entirely new methodology with a new data supplier, Thomson Reuters (which last week was judged the world’s 39th “best” brand by the consultancy Interbrand).
The 2010-11 rankings were informed by a global opinion survey that asked stakeholders what they valued in existing rankings systems and which indicators they would like to see. The structure and methodology were scrutinised by an advisory group of more than 50 experts from 15 countries, including at least one specialist from every continent, and further shaped by free and open discussions on our website.
But the debate did not stop with the publication of the rankings on 16 September 2010. The tables – the first results of an ambitious new project – proved controversial. We will shortly begin a fresh round of consultation on refinements for the 2011-12 tables, which we will publish later this year. We will keep listening.
I would humbly suggest to my colleagues at the British Academy that this presents a case study in good practice.