We live in a world of informed choices. The bulk of students seeking a degree still opt for universities in their home countries. Their decisions are shaped by a variety of considerations, such as an institution's academic reputation, its proximity to home, the financial support packages it offers and the prospect of lucrative employment post-graduation.
National university rankings help to shape these decisions. These rankings tend to be produced by organisations that are suitably funded (often by governments) and that are supported by databases built on detailed information that has been systematically collected over many years. Such tables stand up to scrutiny and are trusted by students and their families.
The global comparison of universities, however, is a relatively new phenomenon - in vogue only since 2003. Global rankings receive the most attention from policymakers in the emerging countries of Asia and the wealthy Arab states that aspire to develop world-class universities, as well as from European nations keen to revalidate their leading universities and to ensure value for their investments.
Serendipitously, the worldwide publicity given to global rankings is narrowing the influence gap between universities in developed and emerging countries.
There are now more than 10,000 universities globally. Comparing them is by no means a simple exercise - it requires substantial resources and well-developed databases. But some organisations that publish global rankings are not yet backed by robust business models that would allow them to build extensive databases and mature methodologies able to stand up to intense scrutiny.
Nonetheless, in a few instances, policymakers have used the global tables to effect change at institutions and even across higher education systems.
Ranking organisations use different methodologies and databases that place different emphasis on research, education and reputation. As a result, a university can achieve very different positions in the various rankings.
The availability of reliable databases is central to the success of any ranking. Ranking organisations use information on research publications, such as the number of citations per paper, citations per academic and the number of papers each academic publishes. But the databases used do not always distinguish between different authors bearing similar names, between self-citations by the author and citations by others, or between positive and negative citations. Institutional affiliations abbreviated in multiple ways can also cause problems. More effort is needed to perfect these databases.
With the advent of global rankings, highly cited researchers have become sought after worldwide. Some have multiple institutional affiliations and sometimes the distinction between a deserving university and an opportunistic one seeking to boost its place in the rankings can become blurred. In some cases, the highly cited researchers are no longer research active, yet their host university still benefits from their presence.
Currently, global rankings also fail to capture key aspects of scientific research, such as field-specific peaks of research excellence. Given the limited availability of research funds in many countries, is it possible for any single institution to achieve high levels of global excellence in all research fields?
While the global ranking of universities has value in today's world of informed choices, such exercises are a work in progress.