
In-house renovation

More data, more institutions and elimination of kilo-author papers: full control has enabled THE to make world rankings even better, writes Duncan Ross
September 30, 2015

View the full World University Rankings 2015-2016 results


This is a very significant year in the 12-year history of the Times Higher Education World University Rankings.

When we took production of the rankings in-house in November 2014, we aimed to reproduce the methodology from previous years as closely as possible. This approach is well established and understood, both by universities and by the rankings' other users, such as funders, prospective students and the public. We also announced that the data would be made available to universities for further analysis, so that they can understand their performance and gain insights that could improve their effectiveness.

But we also made three major decisions that inevitably have had an impact on the results – changes we believe are for the better. We have increased the number of institutions ranked from 400 to 800; changed our bibliometric data supplier; and rebalanced our Academic Reputation Survey.

The result is a broader and more powerful ranking, one that addresses the top 4.5 per cent of global institutions, those with a significant research focus across a range of subjects.


We were well aware of the large number of excellent research-intensive universities that were not represented in the THE tables. As a result, we reached out to more institutions than ever, finally getting support (and data) from more than 1,100 universities.

This increase has changed the baselines from which we generate the tables' normalised metrics, a change felt particularly by the institutions rated lower in the 2014-15 rankings. Some excellent universities are placed lower in the 2015-16 tables than in the 2014-15 ones simply because of the inclusion of institutions that were not part of last year's exercise.


So how much further should we go?

Given their focus on the world's top research universities, the overall global rankings are unlikely ever to extend beyond 1,000 institutions, as our requirement for a minimum number of published papers would disproportionately exclude institutions below the current intake.

We have replaced Thomson Reuters' Web of Science with Elsevier's Scopus as the source for the 2015-16 rankings' bibliometric measures. We think that Scopus' wider reach – the greater range of journals it includes, plus papers written in languages other than English – will strengthen the international nature of the process. Bibliometrics make up 38.5 per cent of the ranking results.

Adequately accounting in bibliometric terms for countries and institutions where English is not the first language is a challenge. In previous versions of the rankings, this was addressed through country-based normalisation: universities had their score adjusted by the norm for their country (which was also intended to account for the varying financial status of their higher education sectors).

However, we do not think that this position is justified in the long term: economic differences are already adjusted for by using purchasing-power parity in financial metrics, and the reality is that English is the lingua franca of international research.

In the short term, however, we have decided to maintain an element of country normalisation by generating a balanced metric that reflects both country-normalised and non-country-normalised citation impact.
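To make this balance concrete, here is a minimal sketch of how a blended citation metric might be computed. The column names, the normalisation scheme and the 50/50 weighting are assumptions chosen for illustration; the article does not publish the exact calculation.

```python
# Illustrative sketch (not THE's actual pipeline): blending a globally
# normalised citation score with a country-normalised one.
import pandas as pd


def blended_citation_score(df: pd.DataFrame, weight_country: float = 0.5) -> pd.Series:
    """Return a citation metric mixing country-normalised and global impact.

    Assumed columns (hypothetical):
      'country' - country of the institution
      'fwci'    - field-weighted citation impact, already field-normalised
    """
    # Country-normalised view: impact relative to the average for
    # institutions in the same country.
    country_mean = df.groupby("country")["fwci"].transform("mean")
    country_normalised = df["fwci"] / country_mean

    # Global view: impact relative to the world average.
    global_normalised = df["fwci"] / df["fwci"].mean()

    # Balanced metric: a weighted mix of the two views.
    return weight_country * country_normalised + (1 - weight_country) * global_normalised
```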

One bibliometric problem we have long been aware of is the existence of "kilo-author papers", those with hundreds, even thousands, of authors. For example, "Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC" has thousands of authors and is extremely highly cited.


Kilo-author papers cause strange effects and there is no obvious or consistent way of identifying their key authors. As a result, this year we have decided to exclude papers with more than 1,000 authors – 649 of the 11 million-plus papers under scrutiny. Consequently, a small number of institutions whose scholars have participated in a large number of kilo-author papers have been affected (see a further discussion of this issue).
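The exclusion rule itself is simple to express. The sketch below applies the 1,000-author threshold described above to an illustrative paper record; the data structure and field names are assumptions, not THE's actual schema.

```python
# Minimal sketch of the kilo-author exclusion, under assumed field names.
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    author_count: int
    citations: int


MAX_AUTHORS = 1000  # papers above this threshold are dropped from the pool


def filter_kilo_author_papers(papers: list[Paper]) -> list[Paper]:
    """Remove papers with more than MAX_AUTHORS authors before citation analysis."""
    return [p for p in papers if p.author_count <= MAX_AUTHORS]
```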

The Academic Reputation Survey has a key place in the rankings, making up 33 per cent of the final scores. Although some have criticised it because it is an inherently subjective measure, it does an excellent job of revealing scholars' institutional sentiments.


This year we took extra care to make the survey more geographically balanced, ensuring better representation outside Europe and the US – a move that matches the sector's growing internationalisation. However, this has also increased the rankings' volatility. As a result of these changes, we believe it is not valid to make direct comparisons with last year's results.

After all of these improvements, it is likely that we will not make many major methodological changes to the rankings over the next few years. However, we are always looking for opportunities to fine-tune our analysis.

Now that the creation of the rankings has moved directly under THE's control, we have a greater opportunity to explore the impact of potential changes to them in advance and to share our approaches and ideas with the academy. This will allow us to publish new views on universities alongside the main rankings, and we hope that you will find these insights helpful.

We are also interested in finding ways to reflect the existence of multi-author papers more fairly in future. Simply excluding kilo-author papers is a reasonable approach this year, but it is not ideal. In the absence of an agreed academic framework for identifying the key authors, we will explore the possibility of fractional attribution (ie, for a paper with n authors, each author counts as 1/n in the calculation of its overall impact). This seems a better approach to the problem, although it makes interpreting the results more challenging and raises other issues about the nature of knowledge.
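To make the fractional-attribution idea concrete, the sketch below credits each author's institution with 1/n of a paper's citations, following the formula above. The input format is a simplifying assumption chosen for brevity.

```python
# Fractional attribution sketch: a paper with n authors contributes 1/n of
# its citations per author to that author's institution.
from collections import defaultdict


def fractional_citation_counts(papers):
    """papers: iterable of (citations, [institution, ...]) tuples, with one
    institution entry per author (duplicates allowed for co-authors at the
    same institution)."""
    totals = defaultdict(float)
    for citations, author_institutions in papers:
        n = len(author_institutions)
        if n == 0:
            continue
        share = citations / n  # each author carries 1/n of the paper's impact
        for institution in author_institutions:
            totals[institution] += share
    return dict(totals)


# Example: a four-author paper with 100 citations gives each author's
# institution 25 citations' worth of credit.
print(fractional_citation_counts([(100, ["A", "A", "B", "C"])]))
# {'A': 50.0, 'B': 25.0, 'C': 25.0}
```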

The Academic Reputation Survey will continue to be developed (not least to make it easier to fill in). Despite requests, we have no intention of allowing universities to nominate the respondents.

Another area that is clearly of interest is the growing impact of massive open online courses within the sector, both as an extension of universities' core teaching missions and as a mechanism by which they can extend their reach and reputation. Moocs also raise questions about ranking measures such as student numbers and staff-to-student ratios. We will be thinking hard about how future rankings will reflect these issues. We welcome your views.


Duncan Ross
Data and analytics director, Times Higher Education
duncan.ross@tesglobal.com
Twitter:
