Peer review is key to identifying top quality, but a fairer overall picture emerges from changes to our analysis. The extensive discussion of The Times Higher-QS World University Rankings that has taken place worldwide since their first appearance in 2004 has strengthened our belief in our general approach, and we have made no fundamental changes to our methodology in that time.
Like the first three editions, this ranking is a composite indicator integrating peer review and opinion with quantitative data. The data-gathering for the rankings has grown in quality and quantity during their lifetime.
The core of our methodology is the belief that expert opinion is a valid way to assess the standing of top universities. Our rankings contain two strands of peer review. The more important is academic opinion, worth 40 per cent of the total score available in the rankings. The opinions are gathered, like the rest of the rankings data, by our partners QS Quacquarelli Symonds, which has built up a database of e-mail addresses of active academics across the world. They are invited to tell QS which area of academic life they come from, choosing from science, biomedicine, technology, social science or the arts and humanities. They are then asked to list up to 30 universities that they regard as the leaders in the field they know best, and in 2007 we have strengthened our measures to prevent anyone voting for his or her own institution.
This year we have the opinions of 5,101 experts, of whom 41 per cent are in Europe, the Middle East and Africa, 30 per cent in the Americas, and 29 per cent in the Asia-Pacific region. This includes respondents from 2005 and 2006 whose data have been aggregated with new responses from this year. No data more than three years old is used, and only the most recent data is taken from anyone who has responded more than once.
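As a rough sketch of that aggregation rule, the snippet below uses hypothetical records (this is not QS's actual pipeline): it keeps only each respondent's most recent answer and drops anything outside the three-year window.

```python
# Hypothetical respondent records: (email, year of response, universities listed).
responses = [
    ("a@uni.edu", 2005, ["Harvard", "MIT"]),
    ("a@uni.edu", 2007, ["Harvard", "Caltech"]),  # supersedes this respondent's 2005 entry
    ("b@uni.edu", 2003, ["Oxford"]),              # more than three years old: dropped
    ("c@uni.edu", 2006, ["Cambridge"]),
]

CURRENT_YEAR = 2007
latest = {}
for email, year, picks in responses:
    if CURRENT_YEAR - year > 2:          # keep only 2005-2007: three years of data
        continue
    if email not in latest or year > latest[email][0]:
        latest[email] = (year, picks)    # the most recent response wins

print({email: picks for email, (year, picks) in latest.items()})
```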
A further 10 per cent of the possible score in these rankings is derived from active recruiters of graduates. QS asks major global and national employers across the public and private sectors which universities they like to hire from. This year's sample includes 1,471 people, with 43 per cent in the Americas, 32 per cent in Europe and 25 per cent in Asia-Pacific.
The first major change to this year's rankings is in the way that these responses, and the quantitative data that make up the rest of the table, are processed. In the past, the top institution on any measure received the maximum score of 100, and every other institution was given a percentage of that score in proportion to its own raw result.
This approach has the drawback that an exceptional institution distorts the results. In 2006, our measure of citations per staff member gave the top score of 100 to the California Institute of Technology, while Harvard University, in second place, scored only 55. So almost half the variation on this measure was between the first and second-place universities.
We have solved this problem by switching from this proportional measure to a Z-score, which expresses each institution's result as the number of standard deviations it lies from the mean. Some universities suffer as a result, such as CalTech on citations and the London School of Economics on overseas students. But this approach gives fairer results and is used by other rankings organisations.
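To make the change concrete, here is a minimal sketch contrasting the two normalisations. The raw scores are invented for illustration; only the shape of the distribution, with one extreme outlier at the top, matters.

```python
import statistics

# Invented raw citation scores for illustration, not QS's actual data.
raw = {"Caltech": 98.0, "Harvard": 54.0, "Stanford": 50.0,
       "MIT": 48.0, "Cambridge": 45.0}

# Old method: the leader gets 100, everyone else a proportional fraction,
# so a single outlier compresses the rest of the field.
top = max(raw.values())
proportional = {u: 100 * v / top for u, v in raw.items()}

# New method: a Z-score expresses each result as the number of
# standard deviations it lies from the mean.
mean = statistics.mean(raw.values())
sd = statistics.stdev(raw.values())
zscores = {u: (v - mean) / sd for u, v in raw.items()}

for u in raw:
    print(f"{u:10s} proportional={proportional[u]:6.1f}  z={zscores[u]:+.2f}")
```

On these invented figures the second-placed institution scores about 55 under the old method, echoing the CalTech-Harvard gap described above, while its Z-score sits close to the mean of the group.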
Our quantitative measures are designed to capture key components of academic success. QS gathers the underlying data from national bodies where possible, but much of it is collected directly from universities themselves. Of these measures, staff-to-student ratio is a classic gauge of an institution's commitment to teaching. This year we have improved its rigour by obtaining full-time and part-time numbers for both staff and students, and using full-time equivalents throughout as far as possible. This measure is worth 20 per cent of the total possible score.
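A minimal sketch of the full-time-equivalent arithmetic follows; the 0.5 weighting for part-timers is our illustrative assumption, not a figure from the rankings, and the head counts are invented.

```python
# The part-time weighting of 0.5 is an illustrative assumption,
# not the rankings' actual conversion factor.
PART_TIME_WEIGHT = 0.5

def fte(full_time: int, part_time: int) -> float:
    """Convert head counts to full-time equivalents."""
    return full_time + PART_TIME_WEIGHT * part_time

staff_fte = fte(full_time=1_200, part_time=400)       # 1,400 FTE staff
student_fte = fte(full_time=18_000, part_time=4_000)  # 20,000 FTE students

print(f"students per staff member: {student_fte / staff_fte:.1f}")  # ~14.3
```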
A further 20 per cent of the possible score is designed to reward research excellence. Citations of an institution's published papers by other researchers are the accepted measure of research quality. We have used five years of citations, from 2002 to 2006, as indexed by Scopus, a leading supplier of such data. Scopus has replaced Thomson Scientific as our supplier of citations data, and we are confident that its coverage is at least as thorough as Thomson's, especially in non-English-language journals. We divide the number of citations by the number of full-time equivalent staff to give an indication of the density of research firepower on each university campus.
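The citation-density measure itself is a simple division, sketched below with invented figures.

```python
# Invented figures for illustration: five years of indexed citations
# divided by full-time-equivalent staff gives the research-density measure.
citations_2002_2006 = 180_000
staff_fte = 1_400

citations_per_staff = citations_2002_2006 / staff_fte
print(f"{citations_per_staff:.1f} citations per FTE staff member")  # ~128.6
```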
The final part of our score is designed to measure universities' attractiveness to staff and students. It allots five percentage points for the proportion of staff who come from other countries, and a further five for the proportion of overseas students. It shows us which institutions are serious about globalisation, and points to the places where ambitious and mobile academics and students want to be.
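Taken together, the weights described above sum to 100 per cent. Below is a minimal sketch of the composite calculation, assuming each indicator has already been normalised to a Z-score; the example values for a single institution are invented.

```python
# Weights as described above: 40% academic peer review, 10% employer
# review, 20% staff-to-student ratio, 20% citations per staff,
# 5% international staff, 5% international students.
WEIGHTS = {
    "academic_review": 0.40,
    "employer_review": 0.10,
    "staff_student":   0.20,
    "citations":       0.20,
    "intl_staff":      0.05,
    "intl_students":   0.05,
}

def composite(z: dict) -> float:
    """Weighted sum of per-indicator Z-scores."""
    return sum(WEIGHTS[k] * z[k] for k in WEIGHTS)

# Invented Z-scores for one institution, for illustration only.
example = {"academic_review": 2.1, "employer_review": 1.4,
           "staff_student": 0.8, "citations": 1.9,
           "intl_staff": 0.5, "intl_students": 0.7}

print(f"composite score: {composite(example):.2f}")
```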