
This is why we publish the World University Rankings

<ÁñÁ«ÊÓƵ class="standfirst">°Õ±á·¡¡¯s rankings editor Phil Baty sets out why the World University Rankings are here to stay ¨C and why that's a good thing
January 16, 2018

"There is no world department of education," says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: "They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding."

This is the true purpose and the enduring legacy of the THE World University Rankings.

Of course, the rankings bring great insights into the strengths and shifting fortunes of individual research-led universities. We assess universities' performance with the most comprehensive and balanced ranking in the world, using 13 performance indicators that cover all of their key missions: teaching, research, knowledge transfer and international outlook.

The results are a vital resource for students and their families, as well as for academics, university administrators and governments across the world. They help to attract almost 30 million people to our website each year, and as they make headlines around the world they reach hundreds of millions more.


But amid the annual media circus around who is up and who is down, and beneath the often tedious, tortured hand-wringing about the methodological limitations of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE's global rankings is often lost: we are building the world's largest, richest database of the world's very best universities.

Let me be clear: there is no such thing as a perfect university ranking. There is no "correct" outcome, because there is no single model of excellence in higher education; every ranking is based on the available, comparable data and is built on the subjective judgement of its compilers over indicators and weightings. THE developed its current methodology from more than a decade of experience in rankings, after more than a year of open consultation and with the detailed expert input of more than 50 leading figures across the world, and we will continue to refine and improve it.
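To see how much the compilers' choices matter, consider a minimal sketch of a composite score as a weighted sum of pillar scores. The pillar names echo the missions listed above; the weights and scores are purely illustrative assumptions, not THE's published methodology.

```python
# A minimal sketch of a composite ranking score: a weighted sum of
# per-pillar scores. Pillar names follow the missions named in the
# article; all weights and scores below are illustrative only.

def composite_score(pillars: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of 0-100 pillar scores; weights assumed to sum to 1."""
    return sum(weights[p] * pillars[p] for p in weights)

# Two hypothetical universities with different strength profiles.
uni_a = {"teaching": 90.0, "research": 70.0, "knowledge_transfer": 60.0, "international_outlook": 80.0}
uni_b = {"teaching": 70.0, "research": 90.0, "knowledge_transfer": 80.0, "international_outlook": 60.0}

# Two equally defensible weightings flip the ordering: the compilers'
# choice of weights, not the data alone, decides who "wins".
teaching_heavy = {"teaching": 0.4, "research": 0.2, "knowledge_transfer": 0.2, "international_outlook": 0.2}
research_heavy = {"teaching": 0.2, "research": 0.4, "knowledge_transfer": 0.2, "international_outlook": 0.2}

for name, w in [("teaching-heavy", teaching_heavy), ("research-heavy", research_heavy)]:
    print(name, round(composite_score(uni_a, w), 1), round(composite_score(uni_b, w), 1))
# teaching-heavy: A = 78.0, B = 74.0; research-heavy: A = 74.0, B = 78.0
```

Under one weighting the first institution leads; under the other, the second does. That reversal, with identical underlying data, is exactly why no ranking outcome can claim to be the "correct" one.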


But while some in the sector continue to get excited about the latest supposed revelations about the limitations of global rankings, Times Higher Education is quietly getting on with a hugely ambitious project to build an extraordinary and truly unique global resource.

THE is now in the third year of an annual process to collect comprehensive data, under bespoke, clear and globally harmonised definitions, on an ever-widening range of universities across the world – in direct partnership with the institutions themselves.

Last year, working individually with each institution, our data team gathered comprehensive data from 1,313 research-led institutions – many tens of thousands of individual data points covering staff and student numbers and profiles (including gender and national/international status) and financial data (including total income, research income and industry income), all broken down as far as possible into eight broad subject areas.

The data were combined with about 250,000 data points from more than 20,000 responses to two rounds of our annual Academic Reputation Survey, and an analysis (by Elsevier) of 56 million citations to 11.9 million research publications, including more than half a million books and book chapters, to develop the 2016-17 THE World University Rankings and derivatives including the THE Young University Rankings, the Asia University Rankings and the Emerging Economies University Rankings.

Data collection for the 2019 World University Rankings portfolio is currently under way – as is the 2018 Academic Reputation Survey – and we are confident of expanding the range and depth of our data yet further.


But the database does not just fuel THE's range of published rankings. It is now the basis of a range of online analytical tools – DataPoints – which more than 130 universities around the world (including MIT) are using to benchmark their performance against a group of peers across a wide range of performance metrics, including those used to create the global rankings.

Institutions and academics could continue the endless, backward-looking debate about the rankers' choice of metrics and metric weightings – or they could move forward and choose their own. They can tailor the underlying rankings data to suit their own needs and missions, and to inform their own strategic priorities.

Since their foundation in 2004, the THE World University Rankings have evolved far beyond the simple, controversial and monolithic ranked lists of universities. Online, universities can be ranked separately against five pillars of activity, and they are profiled against a range of additional contextual data. And in our DataPoints tools – where the focus is on profiling and benchmarking, not ranking – deeper, richer comparisons are available.


THE has moved well beyond the inherent limitations of rankings to offer, as MIT's Lydia Snover says, new, data-led insights that deepen our collective understanding of the dynamic world of global higher education and research.

Phil Baty is editor of the THE World University Rankings.

To learn more about how DataPoints can help you harness the power of the rankings data, get in touch with us at data@timeshighereducation.com.

Data collection for the 2019 Rankings is underway now. Please note:

We can only include you in THE's global rankings (the THE World University Rankings, Asia University Rankings, Latin America University Rankings and Emerging Economies University Rankings) if you submit and sign off data through our secure online data portal.

If you would like to submit your institution to the database and be considered for inclusion in THE's range of global rankings, please email: profilerankings@timeshighereducation.com


Universities are eligible for inclusion in the 2019 THE World University Rankings if they teach undergraduates; publish more than 1,000 research papers (indexed by Scopus) over a five-year period (2013 to 2017); and have a broad range of activity (no more than 80 per cent of activity in any single subject area).
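For concreteness, here is a minimal sketch, in Python, of how these three criteria could be checked for a single institution. The function and field names are hypothetical, and reading the 80 per cent breadth rule as a share of published output is an assumption on our part; this is not THE's actual submission system.

```python
# A minimal sketch of the stated 2019 eligibility test. Thresholds come
# from the criteria above; names are illustrative, not any THE system.

def is_eligible(teaches_undergraduates: bool,
                papers_2013_to_2017: int,
                papers_by_subject: dict[str, int]) -> bool:
    """Apply the three published criteria to one institution."""
    if not teaches_undergraduates:
        return False
    if papers_2013_to_2017 <= 1000:  # needs more than 1,000 Scopus-indexed papers
        return False
    # Breadth test (assumed reading): no single subject area may account
    # for more than 80 per cent of output.
    total = sum(papers_by_subject.values())
    if total and max(papers_by_subject.values()) / total > 0.8:
        return False
    return True

# Example: a specialist institute with 90% of output in one subject fails
# the breadth test even though its volume clears the threshold.
print(is_eligible(True, 2500, {"engineering": 2250, "physical_sciences": 250}))  # False
```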

Data collection for 2019 ends on 30 March 2018.

<ÁñÁ«ÊÓƵ class="pane-title"> POSTSCRIPT:

This article was originally published on 8 February 2017, but was re-published on 16 January 2018 with the copy amended to reflect the most recent rankings cycle.

<ÁñÁ«ÊÓƵ class="pane-title"> Related articles
<ÁñÁ«ÊÓƵ class="pane-title"> Related universities
<ÁñÁ«ÊÓƵ class="pane-title"> Reader's comments (12)
Phil Baty may regard concerns about the methodology and pernicious impacts of university ranking as "tedious … hand-wringing" and "backward-looking" but I hope he will forgive me for begging to differ. These are serious issues that demand constant attention. And while he may be right to emphasise that the utility of their data-gathering is in providing a spectrum of benchmark indicators rather than overall ranking, every year when these tables are released, the headline stories are about the "overall performance", an arbitrary aggregate of scores and opinions of individual institutions presented at a level of precision (three significant figures) for which no meaningful justification has ever been articulated. I propose to continue to look broadly – backwards, forwards and all around – at the implications of the activities of university rankers, and to continue this debate. I'm sure Phil is up for it, despite what he writes above! It is the least I can do for colleagues beset by the pressure that numbers endlessly exert to supplant rather than inform judgement.
Good points by scurry. The false precision of these rankings undermines their credibility. Estimates of uncertainties would be expected.
Thanks for your thoughtful contribution (as ever) Stephen. My intention, of course, is not to shut down debate around the role of rankings and their uses and abuses, but to seek to move the conversations forward to explore the powerful, helpful contributions they can make by creating unique new globally comparable data sets and by putting the data in the hands of the user to allow bespoke analyses. I personally will be attending at least 20 detailed face-to-face data "masterclasses" across the world this calendar year with our university stakeholders, to ensure we continue to open our activities and our data to the scrutiny of the university sector and to ensure that what we do continues to add value.
I actually don't have a problem with numerical data being gathered and compiled with regard to every dimension relevant to the academic condition, a set of vital indicators, if you will. And I understand that the data gathering techniques are never perfect and are always subject to improvement. However, what is more disturbing is the endless packaging and repackaging of these data into an ever greater number of league tables that salami-slice the data in various ways. I imagine that financial considerations drive the THE in this direction. But that's open to an entirely different set of ideologically based objections about which indicators are combined to produce some composite score, etc. One suspects that the hands of the potential clients for these tables are all over the process. Moreover, if you concoct enough of these league tables, everyone will be able to consider themselves a winner at some point!
Steve, let me be absolutely clear: your suggestion that "the hands of the potential clients for these tables are all over the process" is absolutely false and without any foundation. Our integrity is everything and you will have seen, no doubt, that we brought in PwC last year to carry out a full independent audit of our data handling and our calculations for the rankings – precisely because we recognise how much weight is placed on ranking results by government and university governing bodies around the world. Their full report is available via the methodology section of our rankings website. In terms of the "endless packaging and repackaging", I'd suggest that's a good thing. One of the major criticisms of global rankings is that they promote the idea of a single model of excellence and encourage uniformity towards one, predominantly US model of the research university. The proliferation of rankings helps celebrate diversity and to recognise context. For example, our new US College Ranking with the Wall Street Journal is focussed on teaching – using a unique student engagement survey of over 100,000 current US students and focussing heavily on graduate outcomes, it presents a very different picture of US universities compared to the research and prestige-focused World University Rankings. We're proud to reflect more of the diversity of global higher education with a growing range of different rankings.
Hmm. When I read justifications like "to seek to move the conversations forward to explore the powerful, helpful contributions they can make", I see the hallmark of management-speak. Indeed, performance tables are now pervasive in managers' treatment of staff: low positions equal failing staff. So now instead of pursuing academic quality we strive to climb the tables. As for PwC ensuring integrity, a company that has been condemned for its aggressive tax avoidance schemes does not instil any confidence in a dubious ranking exercise.
Phil Baty is being disingenuous: the ratings help the sale of the Higher, and serve as a tool for universities engaged in self-promotion. PwC is a company whose record speaks for itself; just look at any copy of Private Eye. I urge Mr Baty to explain what role zero-hours contracts, the closing of departments and the imposition of 'voluntary redundancies' play in these ratings. If not, why not?
With apologies to the late, great Tom Lehrer:
Gather round and I'll publicise Ratings UK
A tool made to justify
A manager's long goodbye.
It may not be accurate but it's here to stay.
My ratings have weightings, they won't go away.
Don't say they are defective
Vice chancellors know they're effective.
Once the ratings go up far more lolly for me
Sing Glynnis and David and Nicola D.
Some have harsh words for this terrible scheme.
But we know our attitude
Is crushed into gratitude.
Sacking the staff is a manager's dream.
If you take the dregs then I'll get all the cream.
Phil TKS likes to call us all mates
He follows the money that wins the debates
Although Education is far far away
They're priceless in China, Phil's Ratings UK.
He's not dead
The use of 'World' in the title is undermined by the overwhelming dominance of English-medium publication in its research and knowledge transfer sectors. 'Most like us' would be a better term to use – and even that is fraught with difficulty. How does it attempt to measure high-quality, locally relevant research that is published in local languages for local consumption? Simply, it doesn't.
I don't have an opinion either way; I haven't read into it. But would anyone expect a leader at the 5th ranked university to do anything other than praise the ranking system that put them at the top of the pile?
The "reason" that THE publishes its ranking? There are two: 1. To generate revenue for the publisher; 2. To promote institutions of higher ed based in the Commonwealth countries. For a serious ranking of world universities managed by an objective group of international educators, there is no substitute for the Shanghai ARWU list. It is objective and honest, as its purpose was to guide the government of China in identifying the world's top institutions. Its methodology favors large research institutions – and that is by design. The other honest, objective ranking is Webometrics, a ranking system based in Spain and run by their National Research Council. Although its approach is entirely different from that of Shanghai, it is interesting that their results consistently mirror those of ARWU. The correlation tends to validate the integrity of both. The least useful of the major ranking systems is QS. THE falls somewhere between Shanghai and QS.
<ÁñÁ«ÊÓƵ class="pane-title"> Sponsored
<ÁñÁ«ÊÓƵ class="pane-title"> Featured jobs