One of the most persistent criticisms of global university rankings is that they are not fair: institutions in rich, mainly English-speaking countries have an inherent advantage.
They have superior financial clout. Their scholars speak the lingua franca of global scholarship and are better integrated into the established professional networks and predominantly English-language publication systems.
We must accept that, in some respects, these issues are simply part of the reality that rankings reflect: when it comes to establishing world-class universities, you need money – and plenty of it – to attract and retain the top talent and to build the laboratories that nurture excellence. Also, a global higher education sector needs a common global language to share ideas – for now, that happens to be English, and many institutions are adjusting their practices accordingly.
But at Times Higher Education, we have always worked hard to ensure a fair appraisal of global higher education, and we have made further improvements this year to ensure that we deliver, on 30 September, the most balanced, inclusive and globally representative rankings yet.
In terms of our income-based performance indicators, we have always applied Purchasing Power Parity adjustments to all our financial data, to level the playing field between different economies. We will continue to do this.
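To make that adjustment concrete, here is a minimal sketch of a PPP-style conversion, assuming a hypothetical set of conversion factors and institutional income figures (none of which reflect the actual data used in the rankings): each institution's reported income in local currency is divided by its country's PPP factor to express it in comparable international dollars.

```python
# Illustrative sketch only: PPP adjustment of institutional income.
# The conversion factors and income figures below are hypothetical,
# not the rankings' actual data.

# PPP conversion factors: local currency units per international dollar
ppp_factors = {
    "USA": 1.00,      # US dollars per international dollar
    "UK": 0.70,       # pounds per international dollar
    "China": 3.50,    # yuan per international dollar
}

# Institutional income reported in local currency (hypothetical values)
income_local = {
    ("Univ A", "USA"): 2_000_000_000,    # USD
    ("Univ B", "UK"): 1_200_000_000,     # GBP
    ("Univ C", "China"): 6_000_000_000,  # CNY
}

# Convert each institution's income into PPP-adjusted international dollars,
# so spending power is compared on a like-for-like basis across economies.
income_ppp = {
    (name, country): amount / ppp_factors[country]
    for (name, country), amount in income_local.items()
}

for (name, country), value in sorted(income_ppp.items()):
    print(f"{name} ({country}): {value / 1e9:.2f}bn international dollars")
```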
We have also ensured that our annual Academic Reputation Survey is representative of global scholarship. We make the survey invitation-only, to ensure we contact only senior published scholars who know what they are talking about, and achieve a representative sample of disciplines and regions.
The survey that will feed into the 2015-16 rankings, to be published next month, closed in January 2015 and achieved the most balanced set of responses yet.
Working with Elsevier, we distributed the survey in 15 languages this time, compared to nine the previous year, and, using United Nations data on the distribution of scholars around the world, we achieved a much better geographical spread.
In particular, we have been able to correct an over-representation of US scholars seen in the past. A total of 18 per cent of responses this year hailed from North America, compared with 25 per cent last year.
Another important step towards a more inclusive ranking is the decision to source our research publication data from Elsevier's Scopus database, rather than Thomson Reuters' Web of Science. At a stroke, this means we are capturing a far wider range of publications – drawing from Scopus' 22,000 indexed journals from 105 countries compared to Thomson Reuters' 12,000.
The analysis for the 2015-16 rankings will draw on more than 11 million journal articles published between 2010 and 2014, against some 6 million articles (published between 2008 and 2012) in last year's ranking, and will cover a wider range of papers in languages other than English.
With the improved coverage of non-English language journals in Scopus (which contains more than 40 languages) and better representation of non-English-speaking nations in the global reputation survey this year, one aspect of the methodology that has previously been used to support non-English-medium universities will now be phased out: the normalisation of citation data by country or region.
In previous rankings, when calculating the citation impact for each university, Thomson Reuters applied what it described as a "regional modification": it calculated the citation impact for the entire country and adjusted universities' scores based on that national context. Thomson Reuters explained:
"The concept of the regional modification is to overcome the differences between publication and citation behaviour between different countries and regions. For example some regions will have English as their primary language and all the publications will be in English, this will give them an advantage over a region that publishes some of its papers in other languages (because non-English publications will have a limited audience of readers and therefore a limited ability to be cited). There are also factors to consider such as the size of the research network in that region, the ability of its researchers and academics to network at conferences and the local research, evaluation and funding policies that may influence publishing practice."
This modification, in effect, gave an automatic boost to any university in an underperforming country, and it had some distorting effects. After consultation with our new bibliometric data supplier, Elsevier, and our external advisers, we have decided that this modification is not justified in the long term.
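For illustration only, the sketch below shows one simple way such a regional modification could work, assuming hypothetical universities, countries and citation scores, and a simplified rule of dividing each university's score by its national average; the precise formula used by Thomson Reuters is not reproduced here.

```python
# Illustrative sketch only: a "regional modification" that rescales each
# university's citation impact relative to its country's average.
# Universities, countries and scores below are hypothetical, and the
# division-by-national-average rule is a simplification for illustration.

from collections import defaultdict

# Hypothetical field-normalised citation impact scores (world average = 1.0)
raw_impact = {
    ("Univ A", "Country X"): 1.8,
    ("Univ B", "Country X"): 1.2,
    ("Univ C", "Country Y"): 0.9,
    ("Univ D", "Country Y"): 0.5,
}

# Step 1: compute each country's average citation impact.
country_scores = defaultdict(list)
for (_, country), score in raw_impact.items():
    country_scores[country].append(score)
country_avg = {c: sum(v) / len(v) for c, v in country_scores.items()}

# Step 2: divide each university's score by its national average, so a
# university in a low-citation country gets an automatic uplift and one in
# a high-citation country is pulled back towards the world average.
modified_impact = {
    key: score / country_avg[key[1]] for key, score in raw_impact.items()
}

for (name, country), score in sorted(modified_impact.items()):
    print(f"{name} ({country}): raw={raw_impact[(name, country)]:.2f}, "
          f"regionally modified={score:.2f}")
```

Under a rule like this, the two universities in the weaker-performing Country Y end up with the same modified scores as their counterparts in Country X, which is exactly the kind of automatic boost described above.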
The World University Rankings are designed to judge truly global universities. Over time, these globally facing institutions are becoming increasingly integrated into global networks and publishing systems and are operating in an increasingly global recruitment market for top talent, and the use of English is on the up. So we will phase out the regional modification over time.
This decision will not be without controversy: some institutions previously boosted by the modification may see a drop in their citation impact score as a more realistic picture emerges.
Our approach to our flagship rankings is based on providing fair, comparable and transparent data to help policymakers, university leaders and faculty, as well as students and their families, make informed decisions. This change supports this mission.
Phil Baty is editor of the THE World University Rankings.