
THE World University Rankings 2019: in pursuit of more significant figures

Our rankings have grown and matured over a decade and a half, but they are still evolving to try to capture all the elements of excellence. Duncan Ross assesses the progress so far, and the possible routes ahead
September 28, 2018



Over the past 14 years, the Times Higher Education World University Rankings have gone through two major changes. The first, a fundamental change to the methodology in 2010, increased the number of separate performance indicators from just six to 13. The second, a change in scope and data provision in 2015, involved THE bringing its institutional data collection and global reputation survey in-house, as well as using Elsevier's Scopus database for research evaluation.

Over that time, THE's overall rankings portfolio has changed, too. Our rankings have branched out across subjects and into different regions; most importantly, they have expanded to cover new aspects of teaching alongside our more traditional research focus.

There are now 1,258 universities in the World University Rankings (although we have space to list only 1,000 in this supplement), 968 in our US College Rankings, 258 in our Japan University Rankings and 242 in our Europe Teaching Rankings. Even though there is, inevitably and appropriately, overlap between these different measures, this represents about 10 per cent of the higher education sector worldwide.

People sometimes ask why we don't reflect the whole sector better, and they highlight a tension in our approaches.


The initial focus of the World University Rankings was, and remains, the "world-class university" – that ideal of the perfect all-round research university. But, as we explore different areas and components of "excellence", that will inevitably change.

Changes for 2019

World University Rankings

The biggest change this year has been the number of universities included in the analysis, with 155 institutions added to the full online version of the table. We've worked hard with Elsevier to ensure that papers (journal articles, article reviews, conference proceedings, books and book chapters) are correctly attributed to universities, and this, together with the increasing volume of research being done around the world, means that more universities than ever are meeting our 1,000-paper threshold.


Elsevier, of course, continues to curate the Scopus dataset that we use for bibliometric measures, and this has resulted in better matching of institutions to their research and also in an enhanced list of suspended titles – ones that we no longer include in our calculations.

The only other adjustment has been to the entry criteria for the 11 subject rankings, which will be published in stages later this year. After a reasoned objection to our original approach by some universities, for which we are grateful, we've made the entry criteria less strict in a way that will, we hope, be more inclusive. Our goal is to include, rather than exclude.

Teaching rankings

Our series of teaching rankings has been very successful: it now covers 10 countries and more than 1,400 institutions. Over the next year, we aim to deepen our coverage of European institutions, and we plan to adjust our approach in the US and Japan.

For the US rankings, we are exploring new metrics around student debt on graduation, and investigating ways of refining our value-added metrics – which seek to evaluate the additional value that institutions provide above and beyond the quality of their inputs.

For the Europe rankings, we hope to expand the number of universities participating, and intend to provide stronger definitions around some of the metrics that we employ.

For Japan, we will introduce data from a new student survey, bringing the methodology in line with the US and Europe rankings.

Impact Rankings

Our discussions with universities and governments often come back to a single question: what can you say about the impact that higher education institutions have beyond pure research?

Last year, we took steps to gather additional data on institutions' spin-offs and consultancy revenue. The results of this exercise were mixed: although more than 400 universities took part, some had very good data and others did not. In some cases, this was because of collection challenges; in others, because of how intellectual property is legally owned in particular countries.


However, this exercise spurred us to consider examining impact more widely – and, more specifically, innovation.

As a result, we will launch our first global impact ranking in April 2019. This will use the Sustainable Development Goals (SDGs) – the United Nations' set of aspirations for how we can build a more equitable and sustainable world – as a framework for exploring the wider role of universities in society.

Although there are aspects of this that fall back on to research – which is, after all, one of the major ways that universities change the world – we will also be delving into other elements, including measures of university operations.

We look forward to starting the first round of data collection for this exciting new ranking in October.

Changes for 2020 and beyond

This autumn, we will start our data collection process for the 2020 World University Rankings (due to be published in September 2019). We aren't expecting any major changes, but it is likely that this will be the last version of the ranking employing the current methodology.

Although we like our current methodology, with its focus on five key areas – teaching, research, citations, industry income, and international outlook – it has been largely unchanged for almost 10 years. It was designed for a much smaller and less international higher education sector.

We're now in a world where students and faculty increasingly look to travel to study and work, and where the focus isn't exclusively on the research output of universities.

Some of the adjustments that we want to make are already public (see below); others are less well developed. There is also a question: should we make all the changes in a single "big bang", or should we introduce them gradually?

We welcome input into the process. We also acknowledge that there will be a tendency for universities to want to move things in a direction that best reflects their own specific strengths – that is human nature. As well as comments on the ranking methodology, you may also have ideas about the presentation of the rankings.


When considering any change to the rankings, we stick to a few core principles:

  • The change must make the rankings stronger, fairer and more inclusive
  • We should be aware of bias in data – metrics must be applicable across the world for a wide variety of universities
  • There is no one, single model of excellence
  • Beware of unintended consequences of measurement
  • If you want to decrease the impact of one metric, you need to explain which others you want to increase to compensate.

So what changes are we already committed to making?

Citations

We have already announced that we are exploring alternative (or more accurately, enhanced) bibliometric measures for the future.

The citations measure, which represents 30 per cent of the ranking, is currently derived from the Snowball Metrics field-weighted citation impact (FWCI). This measure was developed through a collaboration between a number of leading universities and industry, with the intention of making it possible to compare citation performance across research fields with very different publication traditions.

Although it has served us well, over the years we have made some adjustments to the way we use it – first to account for differences in language, and then to accommodate the challenges of papers with many authors (so-called kilo-author papers).

Neither change has been ideal, so we have sought out other options.

One option that is often suggested is fractional counting. But quite apart from the fact that this tends to promote large universities over small ones, it also has the unintended consequence of penalising collaboration more in research areas where cooperation is less frequent. If you add a third author to a paper that currently has two, each existing author's share of the credit drops from 50 per cent to 33 per cent. But adding one more author to a paper with 100 authors only diminishes each share from 1 per cent to 0.99 per cent.
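The arithmetic is easy to check. Below is a minimal sketch in Python (purely illustrative; the `credit_share` function is our own naming, not part of any ranking methodology) of how credit shares behave under fractional counting:

```python
# Fractional counting: each of n authors receives 1/n of the credit,
# so adding a co-author costs small teams far more than large ones.

def credit_share(n_authors: int) -> float:
    """Each author's share of the credit under fractional counting."""
    return 1.0 / n_authors

# A third author joins a two-author paper: each share falls from 50% to ~33%.
print(f"{credit_share(2):.2%} -> {credit_share(3):.2%}")      # 50.00% -> 33.33%

# One more author joins a 100-author paper: each share barely moves.
print(f"{credit_share(100):.2%} -> {credit_share(101):.2%}")  # 1.00% -> 0.99%
```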

Instead, we are considering using the same basis for calculating the field-weighted citation impact of a paper but, rather than taking the average value of papers authored by a university, looking to take the score at the 75th percentile. We can't take the median because, with so many papers attracting no citations at all, it is normally zero.

We believe that this will provide a more consistent understanding of citation performance, and will certainly diminish some of the edge effects we see when individual articles (the "billionaire" papers) have exceptionally high citation performance.
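To see the effect, consider the sketch below (Python, with invented per-paper FWCI figures for a hypothetical university; per-paper FWCI is, broadly, a paper's actual citations divided by the expected count for similar papers): the mean is dragged upwards by a handful of outliers, the median sits at zero, and the 75th percentile gives a stable reading from the middle of the distribution.

```python
# Invented per-paper FWCI values for a hypothetical university.
# Uncited papers score 0, so citation distributions are heavily
# right-skewed with a long tail of highly cited outliers.

import numpy as np

rng = np.random.default_rng(42)

fwci = np.concatenate([
    np.zeros(600),                                 # uncited papers
    rng.lognormal(mean=0.0, sigma=1.0, size=395),  # typically cited papers
    np.array([40.0, 55.0, 70.0, 90.0, 120.0]),     # a few "billionaire" papers
])

print(f"mean FWCI:            {fwci.mean():.2f}")          # pulled up by outliers
print(f"median FWCI:          {np.median(fwci):.2f}")      # 0.00, hence unusable
print(f"75th-percentile FWCI: {np.percentile(fwci, 75):.2f}")
```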

Of course, there is no reason why we should have only a single citations indicator. We may continue to include the current version of FWCI, but at a lower percentage, alongside the newer measure.

Other adjustments

Looking further ahead, there are other areas of the current rankings where some changes might be appropriate.

Our industry income metric, which measures the amount of money received from industry to support research, has to date been our sole indicator of technology transfer. Although it represents only 2.5 per cent of the World University Rankings, it's an important measure. Should we look to extend it or develop additional measures? How should we treat income associated with university hospitals? We will address this within our Impact Rankings (the UN's ninth Sustainable Development Goal looks at industry, innovation and infrastructure), but can it be strengthened within the World University Rankings?

Our international measures also merit some consideration. At the moment, neither the proportion of international students nor the proportion of international staff has an upper bound: the higher the proportion, the better. Is there a better way of doing this? It's fairly evident that, for small shares of international students and staff, we would expect a relationship between the internationalisation of a university and improved performance. But surely there is a limit. If a university's student or staff cohort were 90 per cent international, would it be serving its own local community?

For our Europe Teaching Rankings, we looked at a variation on this – for gender balance, we took a 50:50 balance as optimum, and scores either side of this decreased. Could we take a similar approach for the international measures? If so, what is the optimum balance?
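As an illustration only (this is a sketch, not our published formula), such a "peaked" metric can award full marks at the chosen optimum and fall away linearly on either side:

```python
# A minimal sketch of a "peaked" balance metric: the score is highest
# at a chosen optimum share and decreases linearly either side of it,
# scaled so that the worst possible share scores 0.

def balance_score(share: float, optimum: float = 0.5) -> float:
    """Score in [0, 1] that peaks when `share` equals `optimum`."""
    worst = max(optimum, 1.0 - optimum)  # furthest possible distance
    return 1.0 - abs(share - optimum) / worst

print(balance_score(0.50))  # 1.0 -- at the 50:50 optimum
print(balance_score(0.25))  # 0.5 -- halfway off on the low side
print(balance_score(0.90))  # 0.2 -- a heavily skewed cohort
```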

And while we are on the topic, is there a role for gender balance in more of our rankings? Can any university that isn't adequately serving half the population be said to be world-leading? Again, this will be dealt with in our Impact Rankings (the fifth Sustainable Development Goal is centred on gender equality), but could it add value in the World University Rankings?

The final aspect to consider is the overall shape of the ranking. At the moment, 30 per cent is related to our teaching metrics, 30 per cent to research, 30 per cent to citations, 7.5 per cent to internationalisation, and 2.5 per cent to technology transfer. Is that the right balance? As we now have dedicated teaching rankings, and will soon have an impact ranking, should we reduce the proportions dedicated to those areas and redistribute them among the other elements?

My feeling is that these proportions aren't unreasonable. They are partly down to a priori assumptions about what universities should be doing to be viewed as excellent, but there is also an element associated with our confidence that the data are reliable and consistent across the world. For those of you less familiar with some of our tables that are based on the World University Rankings, it's worth noting that we already adjust these proportions for some of the geographic rankings, for the Young University Rankings, and for subject rankings.
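For readers who want to see how the pillars combine, here is a minimal sketch of the weighted sum; the weights are the published ones, while the pillar scores (and the example university) are invented for illustration:

```python
# The five published pillar weights of the World University Rankings.
WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "international_outlook": 0.075,
    "industry_income": 0.025,
}

def overall(pillar_scores: dict) -> float:
    """Weighted sum of pillar scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[pillar] * score for pillar, score in pillar_scores.items())

# A hypothetical university: strong research, weaker internationalisation.
print(overall({
    "teaching": 80.0, "research": 90.0, "citations": 85.0,
    "international_outlook": 55.0, "industry_income": 70.0,
}))  # 82.375
```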

Do you have ideas about how we can improve our rankings? Send suggestions and questions to us at profilerankings@timeshighereducation.com

Duncan Ross is data and analytics director at Times Higher Education.
