The European Commission has finally unveiled its detailed plans for U-Multirank, a university ranking system designed to counter criticism that existing rankings fail to recognise diversity and are “homogenising” institutions.
The system, developed over two years by a consortium of academics and funded by the Commission, is described as “a new, user-driven, multidimensional and multi-level ranking tool in higher education and research”. A final feasibility study, revealed at a conference in Brussels on 9 June, concluded that the system works and is ready to be implemented, depending on future funding and commercial support.
U-Multirank aims not to produce a single league table but to allow its users to choose which institutions to compare, and which criteria to use to compare them. The idea is that the system compares like with like, takes into account the diverse range of university missions, and avoids the focus on a research-driven “reputation race” created by the existing world rankings.
I welcome the initiative. International performance comparisons are here to stay because higher education increasingly operates in a global marketplace, and they fill a crucial information gap. So any initiative to improve the range of performance indicators available to the students, lecturers, university faculty, university administrators, business leaders and policymakers who are already using the well-established global rankings is to be celebrated.
U-Multirank’s attempt to provide multidimensional rankings, instead of a single hierarchical list, is also welcome. It sits comfortably alongside Times Higher Education’s unashamed decision to create a single ranked list of the top 200 (the world’s top 1 per cent) of research-driven, globally competitive universities, with shared global missions and global brands. By stopping at 200, THE’s rankings compare institutions with a similar global outlook, and do not preclude diversity of mission and structure across higher education.
So far, so good. But when the scheme is examined in detail, there is really not much – barring a very interesting global student satisfaction survey – that is particularly innovative.
U-Multirank is described as relying on indicators in five broad areas: teaching and learning; research; knowledge transfer; international orientation; and regional engagement. Of these, “regional engagement” is the only area not explicitly covered within THE’s range of 13 indicators, and the U-Multirank feasibility study accepted that it had struggled to provide reliable indicators in this area.
The project also boasts that it allows users to choose their own criteria and decide how much weight to place on each indicator. But to a significant extent, the THE World University Rankings, available since September 2010, already do this: our website allows users to rank institutions on our official combination of 13 indicators, but also, at the click of a mouse, on any one of five broad categories: teaching; research; citations; industry income; and international mix. In addition, the THE rankings’ iPhone application allows users to change the weightings of these five broad areas to suit their needs.
Another concern is that the pilot project revealed a lack of global engagement with the concept. Data were gathered from 109 European and 50 non-European institutions. But only four British universities – Newcastle, Glasgow, Coventry and Nottingham – took part, despite the UK enjoying a global reputation second only to that of the US.
The academics behind the system also said that they had been disappointed with the responses received from the US – the leading nation for world-class higher education – and from China, which is one of the most important emerging higher education nations.
Without wider engagement, there is a risk that the European-designed U-Multirank will be seen as inward-looking and self-serving, and will not be taken seriously by the rest of the world.