Phil Baty describes the methodological refinements that have made the 2011-12 rankings even more accurate and comprehensive
No project that seeks to reduce the amazing variety of university activity into a single ranked list can ever be perfect, but Times Higher Education can make bullish claims for the sophistication and utility of its annual World University Rankings.
These rankings:
- Examine all core missions of the modern global university - research, teaching, knowledge transfer and international activity
- Employ the world's largest reputation survey, drawing on the expert views of more than 17,500 experienced academics, collected in 2011 from 137 countries
- Reflect the unique subject mix of every institution across the full range of performance indicators
- Are based on unprecedented levels of partnership with the world's universities
- Give parity to excellence in the arts and humanities, social sciences, science, technology, engineering, mathematics and medicine.
Although this is the eighth year THE has published a global ranking, a new approach was developed during 10 months of open consultation in 2010, involving expert input from more than 50 leading figures from 15 countries. Last year's tables set a new standard, underpinned by a new methodology that quickly earned widespread acceptance and support.
This year's tables are based on the same fundamentals. As with last year, the rankings use 13 performance indicators, grouped into five areas:
- Teaching - the learning environment (worth 30 per cent of the overall ranking score)
- Research - volume, income and reputation (worth 30 per cent)
- Citations - research influence (worth 30 per cent)
- Industry income - innovation (worth 2.5 per cent)
- International outlook - staff, students and research (worth 7.5 per cent).
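Taken together, these weights determine the overall score. As a minimal sketch of the arithmetic, assuming hypothetical category scores on a 0-100 scale (the real calculation combines the standardised indicator scores described later in this piece):

```python
# Illustrative only: combine category scores (0-100) into an overall score
# using the 2011-12 weights. The example scores are invented.
WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.025,
    "international_outlook": 0.075,
}

def overall_score(category_scores: dict) -> float:
    """Weighted sum of category scores; the weights sum to 1.0."""
    return sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS)

example = {
    "teaching": 72.0,
    "research": 81.5,
    "citations": 90.0,
    "industry_income": 55.0,
    "international_outlook": 68.0,
}
print(round(overall_score(example), 1))  # 79.5
```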
But 2010-11 was the first year of a highly ambitious project with a wide range of innovations, and we have had 12 months to reflect on the work and to consult widely with the sector on further refinements.
We have made some methodological improvements this year that we are confident make the tables an even more faithful reflection of university performance, helping them towards long-term stability.
Disciplinary mix
The most significant improvement has been made possible by the extraordinary level of engagement from the academic community, with institutions providing rich data at the subject level this year. This information allows us to properly reflect every university's unique disciplinary mix.
Last year, our "research influence" indicator (based on the number of citations a university's research papers earn) was normalised to take account of the variations in citation volumes between disciplines.
But thanks to the extra data captured this year, we have been able to normalise our research productivity indicator, which looks at the total number of papers a university publishes, scaled against its size. This is an important change: researchers in the life sciences and medicine, for example, typically publish two or three papers a year, whereas those in the arts, humanities and social sciences publish fewer than half a paper a year on average.
We have also normalised the indicator that looks at the number of PhDs awarded by each institution and the one that measures university research income. It is fair to recognise that a typical grant awarded in the humanities, for example, will not be as big as one in the hard sciences.
These changes give parity to arts, humanities and social science subjects, and help to explain why the London School of Economics, to note a striking example, has risen up the table this year.
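As a rough illustration of what this subject normalisation involves, here is a minimal sketch, assuming each value is simply divided by a world average for the same subject and then weighted by the institution's own subject mix; the figures and names below are invented, not the rankings' actual data or code:

```python
# Illustrative subject normalisation: an institution's papers per staff
# member in each subject is divided by the world average for that subject,
# then combined according to the institution's own subject mix.
world_avg_papers_per_staff = {"medicine": 2.5, "humanities": 0.4}

def normalised_productivity(inst_values: dict, subject_mix: dict) -> float:
    """inst_values: papers per staff member, by subject.
    subject_mix: share of the institution's staff in each subject (sums to 1)."""
    score = 0.0
    for subject, share in subject_mix.items():
        score += share * (inst_values[subject] / world_avg_papers_per_staff[subject])
    return score

# A humanities-heavy institution is no longer punished for low raw paper counts:
print(normalised_productivity({"medicine": 2.5, "humanities": 0.4},
                              {"medicine": 0.2, "humanities": 0.8}))  # 1.0
```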
Citation counts
Another sensible step forward is that we have dampened down the effect of rare highly cited research papers on smaller institutions' standings.
Our "Citations - research influence" indicator looks at 6 million journal articles with some 50 million citations published over five years, but we consider citations per paper, not per staff member, to reward high-quality, not high-quantity, research.
Last year, it became clear that one or two extremely highly cited papers could disproportionately boost the overall score awarded to relatively small universities. This year we have increased the minimum publication threshold for inclusion from 50 papers a year to 200. We have also lengthened the period over which we collect citations from five years to six, reducing the impact of papers that buck the global benchmark for their year of publication with a lot of early citations (most papers take more time to accumulate references).
Nevertheless, this indicator remains independent of a university's size and allows smaller institutions to score as highly as much larger ones.
The citations indicator has also been modified to help recognise strong performances by institutions in nations where there are less-established research networks and lower innate citation rates.
In a further improvement, this regional normalisation also takes account of each country's subject mix.
New indicator
Perhaps the most visible change this year is the introduction of an entirely new indicator.
The category that considers international outlook, last year based only on each institution's proportions of international staff and students, has been enhanced with an indicator that examines the proportion of research papers each institution publishes with at least one international co-author.
To accommodate the change, we have dropped one indicator used in 2010-11, "Public research income/total research income", which suffered from the lack of readily comparable data between countries.
Each of the changes has been made only after careful consideration and detailed expert feedback: our goal is to allow the methodology to settle down quickly in order to produce stable annual comparisons.
While the changes have caused some instability between this year's and last year's results, we are confident that they have enhanced the picture of global higher education that the rankings paint.
A close look at the elements of a top scorecard
We detail the criteria for including institutions in the rankings and the individual components of the different indicators we use to compare universities
Teaching: the learning environment (30%)
This category employs five separate performance indicators designed to provide a clear sense of the teaching and learning environment of each institution from both the student and the academic perspective.
The dominant indicator here uses the results of the world's largest academic reputation survey.
Thomson Reuters carried out its Academic Reputation Survey - a worldwide poll of experienced scholars - in spring 2011. It examined the perceived prestige of institutions in both research and teaching. There were 17,554 responses (30 per cent more than in 2010), statistically representative of global higher education's geographical and subject mix.
The results of the survey with regard to teaching make up 15 per cent of the overall rankings score.
Our teaching and learning category also employs a staff-to-student ratio as a simple proxy for teaching quality, on the assumption that where the ratio of students to staff is low, students will get the personal attention they require from the institution's faculty. Last year's staff-to-student ratio was based on the number of undergraduate first-year students, but after further consultation with our expert advisory group, this has been changed to an institution's total student numbers, as this was considered fairer.
As this measure serves as only a crude proxy - after all, you cannot judge the quality of the food in a restaurant by the number of waiters employed to serve it - it receives a relatively low weighting: it is worth just 4.5 per cent of the overall ranking scores.
The teaching category also examines the ratio of PhD to bachelor's degrees awarded by each institution. We believe that institutions with a high density of research students are more knowledge-intensive, and that the presence of an active postgraduate community is a marker of a research-led teaching environment valued by undergraduates and postgraduates alike.
The PhD-to-bachelor's ratio is worth 2.25 per cent of the overall ranking scores.
The teaching category also uses data on the number of PhDs awarded by an institution, scaled against its size as measured by the number of academic staff.
As well as indicating how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students suggests the provision of teaching at the highest level, teaching that is attractive to graduates and effective at developing them.
Undergraduates also tend to value working in a rich environment that includes postgraduates. In an improvement from last year, this indicator is now normalised to take account of a university's unique subject mix, reflecting the different volume of PhD awards in different disciplines.
The indicator makes up 6 per cent of the overall scores.
The final indicator in the teaching category is a simple measure of institutional income scaled against academic staff numbers.
This figure, adjusted for purchasing-power parity so that all nations compete on a level playing field, indicates the general status of an institution and gives a broad sense of the infrastructure and facilities available to students and staff. This measure is worth 2.25 per cent overall.
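The ratio-based indicators in this category are simple to compute. As a minimal sketch, using invented figures and reducing the purchasing-power-parity adjustment to a single divisor (the function and variable names are ours, not part of the rankings' actual process):

```python
# Illustrative calculation of the ratio-based teaching indicators.
# All figures are invented; the PPP adjustment is simplified to a single
# division by a purchasing-power-parity factor.
def teaching_ratios(students, academic_staff, phds_awarded,
                    bachelors_awarded, income, ppp_factor):
    return {
        "staff_to_student": academic_staff / students,
        "phd_to_bachelor": phds_awarded / bachelors_awarded,
        "phds_per_staff": phds_awarded / academic_staff,
        "income_per_staff_ppp": (income / ppp_factor) / academic_staff,
    }

print(teaching_ratios(students=20_000, academic_staff=2_000,
                      phds_awarded=400, bachelors_awarded=4_000,
                      income=600_000_000, ppp_factor=1.2))
```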
Research: volume, income, reputation (30%)
This category is made up of three separate indicators. The most prominent looks at a university's reputation for research excellence among its peers, based on the 17,000-plus responses to the annual Academic Reputation Survey.
Consultation with our expert advisers suggested that confidence in this indicator was higher than in the teaching reputational survey because academics are likely to be more knowledgeable about the reputation of research departments in their specialist fields. For this reason, it is given a higher weighting: it is worth 18 per cent of the overall score, reduced slightly from last year's figure of 19.5 per cent as part of our commitment to reduce the overall impact of subjective measures.
This category also looks at a university's research income, scaled against staff numbers and normalised for purchasing-power parity.
This is a controversial indicator because it can be influenced by national policy and economic circumstances. But research income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure.
In an improvement on 2010-11, this indicator is also fully normalised to take account of each university's distinct subject profile. This reflects the fact that research grants in science subjects are often bigger than those for the highest-quality social science, arts and humanities research.
To reflect the increased rigour of this indicator, its weighting has been increased slightly from 5.25 per cent to 6 per cent.
The research environment category also includes a simple measure of research output scaled against staff numbers.
In another indicator newly normalised for the 2011-12 rankings, we count the number of papers published in the academic journals indexed by Thomson Reuters per academic staff member, scaled for a university's total size. This gives an idea of an institution's ability to get papers published in quality peer-reviewed journals. The indicator is worth 6 per cent overall, increased from last year's figure of 4.5 per cent to better recognise the importance of research productivity.
Citations: research influence (30%)
We examine a university's research influence by capturing the number of times all of its published work is cited by scholars around the world.
Worth 30 per cent of the overall score, this single indicator carries the greatest weight of the 13 employed to create the rankings, although we have reduced its weighting from last year's 32.5 per cent to accommodate an additional indicator in the "International outlook - staff, students and research" category, and to give it parity with "Teaching - the learning environment".
This generous weighting reflects the relatively high level of confidence the global academic community has in the indicator as a proxy for research quality.
The use of citations to indicate quality is controversial, but there is clear evidence of a strong correlation between citation counts and research performance.
The data are drawn from the 12,000 academic journals indexed by Thomson Reuters' Web of Science database and include papers published in those journals in the five years from 2005 to 2009. Citations to these papers made in the six years from 2005 to 2010 are collected - extending the window by an additional year compared with 2010-11, thus improving the stability of the results and decreasing the impact of exceptionally highly cited papers on institutional scores.
The findings are fully normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.
For institutions with relatively few papers, citation impact can be significantly inflated by a small number of highly cited papers. We have addressed such distortions by excluding from the rankings any institution that publishes fewer than 200 papers a year.
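A minimal sketch of the logic behind this indicator, assuming a simplified normalisation in which each paper's citation count is divided by the world average for papers in the same subject and year (the real calculation is carried out by Thomson Reuters and is more involved):

```python
# Illustrative citations-per-paper calculation with the 200-papers-a-year
# exclusion threshold. Each paper's citations are divided by the expected
# (world-average) citations for its subject and year of publication.
MIN_PAPERS_PER_YEAR = 200
YEARS_COUNTED = 5  # papers published 2005-2009

def citation_impact(papers):
    """papers: list of (citations, expected_citations) tuples.
    Returns None if the institution falls below the publication threshold."""
    if len(papers) < MIN_PAPERS_PER_YEAR * YEARS_COUNTED:
        return None  # excluded from the rankings
    normalised = [cites / expected for cites, expected in papers]
    return sum(normalised) / len(normalised)  # average normalised impact

# 1,200 papers, each cited exactly at the world average for its field and year:
print(citation_impact([(10.0, 10.0)] * 1200))  # 1.0
```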
International outlook: people, research (7.5%)
Our international category looks at both diversity on campus and how much each university's academics collaborate with international colleagues on research projects - all signs of how global an institution is in its outlook.
A university's ability to attract undergraduates and postgraduates in a competitive global market is key to its success on the world stage; this factor is measured here by the ratio of international to domestic students. It is worth 2.5 per cent of the overall score.
As with competition for students, the top universities also operate in a tough market for the best faculty. So in this category we give a 2.5 per cent weighting to the ratio of international to domestic staff.
A third indicator has been introduced this year in this category. We calculate the proportion of a university's total research journal publications with at least one international co-author and reward the higher volumes.
This indicator, which is also worth 2.5 per cent, is normalised to account for a university's subject mix and uses the same five-year window that is employed in the "Citations - research influence" category.
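As a minimal sketch of the new co-authorship measure, using invented data and omitting the subject normalisation:

```python
# Illustrative: share of an institution's papers with at least one
# co-author based in another country. Subject normalisation omitted.
def international_coauthorship_share(papers, home_country="GB"):
    """papers: one list of author country codes per paper."""
    international = sum(
        1 for authors in papers
        if any(country != home_country for country in authors)
    )
    return international / len(papers)

papers = [["GB", "US"], ["GB"], ["GB", "DE", "GB"], ["GB"]]
print(international_coauthorship_share(papers))  # 0.5
```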
Industry income: innovation (2.5%)
A university's ability to help industry with innovations, inventions and consultancy has become such an important activity that it is often known as its "third mission", alongside teaching and research.
This category seeks to capture such knowledge transfer by looking at how much research income an institution earns from industry, scaled against the number of its academic staff.
It suggests the extent to which businesses are willing to pay for research and a university's ability to attract funding in the competitive commercial marketplace - key indicators of quality.
However, because the figures provided by institutions for this indicator are relatively patchy, we have given the category a low weighting: it is worth 2.5 per cent of the overall ranking score.
Exclusions
Universities are excluded from the Times Higher Education World University Rankings if they do not teach undergraduates; if they teach only a single narrow subject; or if their research output amounted to fewer than 1,000 articles between 2005 and 2009 (an average of 200 a year).
In some exceptional cases, institutions below the 200-paper threshold are included if they have a particular focus on disciplines with generally low publication volumes, such as engineering or the arts and humanities.
Further exceptions to the threshold are made for the six specialist subject tables.
Scores
To calculate the overall rankings, "Z-scores" were created for all datasets except for the results of the reputation survey.
The calculation of Z-scores standardises different data types on a common scale, allowing fair comparisons between them - essential when combining diverse information into a single ranking.
Each data point is given a score based on its distance from the mean average of the entire dataset, where the scale is the standard deviation of the dataset. The Z-score is then turned into a "cumulative probability score" to arrive at the final totals.
If University X has a cumulative probability score of 98, then an institution drawn at random from the same distribution will score below University X 98 per cent of the time.
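A minimal sketch of this standardisation, using the standard normal cumulative distribution function to convert a Z-score into a cumulative probability score (the numbers are invented for illustration):

```python
import math

# Illustrative Z-score standardisation: each raw value is expressed as its
# distance from the dataset mean in units of standard deviation, then
# converted to a cumulative probability score via the normal CDF.
def z_score(value, mean, std_dev):
    return (value - mean) / std_dev

def cumulative_probability(z):
    """Standard normal CDF, scaled to 0-100."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A university two standard deviations above the mean on an indicator:
z = z_score(value=84.0, mean=50.0, std_dev=17.0)
print(round(cumulative_probability(z), 1))  # roughly 97.7
```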
For the results of the reputation survey, the data are highly skewed in favour of a small number of institutions at the top of the rankings, so this year we have added an exponential component to increase differentiation between institutions lower down the scale.
Data collection
Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point is missing - which affects only low-weighted indicators such as industrial income - we enter a conservative low estimate: the 25th percentile of the values reported by other institutions for that indicator, a figure that sits between the lowest value reported and the average.
By doing this, we avoid penalising an institution too harshly with a "zero" value for data that it overlooks or does not provide, but nor do we reward it for withholding them.
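As a rough sketch of this imputation, assuming "the 25th percentile" refers to the lower quartile of the values reported by other institutions for that indicator (the interpolation method is our choice):

```python
import statistics

# Illustrative missing-data handling: a missing value for a low-weighted
# indicator is replaced with the 25th percentile of the values reported
# by other institutions, rather than with zero.
def impute_missing(value, reported_values):
    if value is not None:
        return value
    quartiles = statistics.quantiles(reported_values, n=4)  # [Q1, Q2, Q3]
    return quartiles[0]  # 25th percentile

reported = [10, 20, 30, 40, 50, 60, 70, 80]
print(impute_missing(None, reported))  # 22.5
```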
For Seoul National University and Nanjing University, staff, student and funding data from last year were combined with this year's reputation survey and publications information.
Otherwise, those institutions that declined or were unable to provide sufficient data were excluded.
Phil Baty is editor, Times Higher Education World University Rankings