
THE Latin America University Rankings 2018: careful calibration

Alternative measures could lead to refinements and a more stable dataset, writes Duncan Ross
July 18, 2018

Browse the full Times Higher Education Latin America University Rankings 2018 results

One of the ways we evaluate universities is by looking at their published output.

This type of bibliometric measure has a long history, and there are various things we could choose to explore, from simple measures such as a count of papers or citations to more complex ones such as the h-index.

There are also decisions to be made about the scope of this exploration. What types of publications and sources should we use?

In the Times Higher Education World University Rankings, we’ve chosen to use a “snowball metric”: field-weighted citation impact (FWCI). This is quite complex, measuring the relative impact of a paper within its subject area, year of publication and publication type. In practice, this gives us a matrix of 8,600 cells into which a publication may fall.


For each cell, we calculate the average number of citations received by a paper, and then the ratio of the citations of a specific paper to this average.

The final step is to calculate the average value of publications associated with a particular university.
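
To make that concrete, here is a minimal sketch in Python of how such a calculation could work. It is illustrative only: the cell labels, citation counts and university names are invented, and this is not THE's production pipeline.

from collections import defaultdict
from statistics import mean

# Each paper: (university, cell, citations). A "cell" is the combination of
# subject area, publication year and publication type (labels are invented).
papers = [
    ("Uni A", ("oncology", 2016, "article"), 12),
    ("Uni A", ("oncology", 2016, "article"), 0),
    ("Uni B", ("oncology", 2016, "article"), 3),
    ("Uni A", ("history", 2017, "review"), 0),
    ("Uni B", ("history", 2017, "review"), 1),
]

# 1. Average citations per cell: the baseline for that subject/year/type.
cell_citations = defaultdict(list)
for _, cell, cites in papers:
    cell_citations[cell].append(cites)
cell_average = {cell: mean(c) for cell, c in cell_citations.items()}

# 2. Each paper's FWCI is its citations divided by its cell's average.
# 3. A university's score is the average FWCI of its papers.
uni_scores = defaultdict(list)
for uni, cell, cites in papers:
    uni_scores[uni].append(cites / cell_average[cell])

for uni, scores in uni_scores.items():
    print(uni, round(mean(scores), 2))   # Uni A 0.8, Uni B 1.3

Because each paper is scored against the average for its own cell, disciplinary differences in citation behaviour are largely normalised away before the university-level average is taken.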


This seems to be a strong measure: it is “objective”, the calculation minimises subject-specific issues (such as the relatively low citation count for papers in the arts and humanities compared with the sciences), and it is available across all research-intensive universities. So why would we want to change it?

Well, as we explore the data, we see some oddities, both at the publication level and at the university level.

In terms of papers, the first, and somewhat depressing, insight is that most papers are never cited. And of those that are, a small percentage have a huge number of citations. This causes problems when we use the average FWCI of a university.

A few, very highly cited papers can raise the score of a university significantly without being typical of that university’s output. If those papers drop out of the time period we analyse (each edition of the ranking analyses publications indexed over the previous five years), a university may see significant changes to its score from one year to the next.

So what could we do? Well, instead of employing the average, we could use the median. This is a preferred approach in statistics when looking at this type of skewed distribution, and it is why statistical agencies tend to report median rather than mean salaries.


But we can’t do that. Our median (for most universities) would be zero.

However, if we’re looking at a measure based on a percentile, there is no specific reason to choose the 50th percentile (the median), so instead we’re exploring the 75th percentile (see graph below).
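
As a rough illustration of the difference (the numbers below are invented, not real ranking data), a handful of uncited papers plus one blockbuster shows why the mean is volatile, the median is uninformative and the 75th percentile sits usefully in between:

from statistics import mean, median, quantiles

# Invented FWCI values for one institution: most papers uncited, one outlier.
fwci_scores = [0, 0, 0, 0, 0, 0, 0.4, 0.9, 1.2, 35.0]

print(mean(fwci_scores))               # 3.75: dominated by the single outlier
print(median(fwci_scores))             # 0.0: most papers are uncited
print(quantiles(fwci_scores, n=4)[2])  # ~0.98: the 75th percentile is barely moved by the outlier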

This alternative approach would produce much greater stability in our Latin America University Rankings in particular; universities in this ranking are even more susceptible to year-on-year changes in citation count, as the table’s lower eligibility criteria mean that they need to publish only 200 papers in a five-year period (down from 1,000 in the World University Rankings).


Of course there will be winners and losers (we can see some of the potential impact in the graph below).

Another benefit of this alternative approach is that it would allow us to fully reincorporate “kilo-author papers”: big science papers with thousands of authors. We currently use a fractional counting approach to deal with these articles, so they do not have a disproportionate impact on the citation scores of a small number of universities, but moving to a single approach across all publications seems the right thing to do.
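
As a rough sketch of the fractional counting idea (invented numbers, and not necessarily the exact weighting THE applies), an institution contributing only a handful of a paper's thousands of authors receives a correspondingly small share of the credit:

# Hypothetical "kilo-author" paper: 3,000 authors in total and a very high impact score.
total_authors = 3000
paper_fwci = 40.0

# Fractional counting gives each institution a share proportional to its author count,
# so a single paper cannot dominate a small university's citation score on its own.
for uni, n_authors in {"Uni A": 6, "Uni B": 150}.items():
    share = n_authors / total_authors
    print(uni, share, round(share * paper_fwci, 2))
# Uni A: 0.002 share, 0.08 credited impact; Uni B: 0.05 share, 2.0 credited impact.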

And one final positive: we could open our rankings more widely. If this approach provides stability to the citation count of universities with fewer than 1,000 papers, then there is no reason for us to exclude universities that are en route to research excellence.

To share your views, email profilerankings@timeshighereducation.com


Duncan Ross is data and analytics director, Times Higher Education

Reader's comments (2)
Hi Duncan, I see a tension here between 1) the desire to have greater stability in your indicator in order to include more universities, and 2) the need for a fairer indicator that would correct the bias caused by rare, highly cited papers. The second objective should definitely have priority over the first. If so, then I believe the 75th percentile is too low. Suppose two institutions have exactly the same research output: 1,000 articles. 750 of their articles have exactly the same impact, but for University A the remaining 250 papers have a very high impact, while for University B these 250 have the same impact as the paper located at the 75th percentile. If you measure scientific impact using the impact of the article located at the 75th percentile of the distribution, these two universities will have the same score. This would be clearly unfair to University A. It would be interesting to compare the distributions you get using the 75th percentile and the 95th percentile. Alex
Hi Alex, that is a tension, but one we don't see much in the real data. The same worry could be expressed about median values, of course! I think (although I haven't checked) that the problem is far smaller than 5 percentile points; it's much more akin to the challenges caused by billionaires on salary data.