This week's Times league tables have been changed to meet some universities' concerns over the picture they paint of the sector. In an introduction to the data used to compile the tables, Tom Cannon, their principal architect, explains how and why the approach was modified.
The year since the publication of the last set of Times rankings has been marked by extensive discussions about the data, the nature and form of the information used and its presentation. This has prompted a number of changes in the mix of information and the form of the rankings.
The clearest changes lie in the number of indicators and the relative weight given to specific measures. Critics have suggested that there were one or two examples of double counting which gave too much prominence to certain variables. The best example of this lay in research which was measured twice - first, on an income per capita basis, then in a form derived from the research assessment exercise. The strength of these sentiments persuaded us to drop the research income per capita measure.
This decision was made easier by the problems some institutions seem to have in distinguishing some forms of research income from other sources of revenue, for example development or project funds. Concern about the data convinced us that several measures pose major problems for universities. This is especially true of staff qualifications. Although most universities have little difficulty identifying those staff with PhDs, many found it hard to identify the percentage with professional qualifications. Some solved the problem by defining all qualifications, including first degrees, as professional qualifications. The concerns expressed about this persuaded us that we should eschew using staff qualifications as a measure - at least in the short term.
Reduction in the number of indicators allowed us to tackle another long-standing worry. This misgiving centred on the different weightings applied to the variables used. This year The Times has adopted the approach used in The THES of giving each factor equal weight. Overall the effect has been limited. It seems to stretch the range rather more than last year and to reduce the clustering effect slightly. A small number of universities change their position, but estimates of the outcome under "the old rules" suggest that other factors are at play.
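By way of illustration only - the indicator names and scores below are invented, and this is not the calculation actually used to compile the tables - equal weighting simply rescales each measure to a common range and then averages them, so that no single indicator dominates the total:

    # A minimal sketch of equal weighting across indicators.
    # Indicator names and scores are invented for illustration;
    # this is not the method used to compile the tables.

    def scale(values):
        """Rescale raw scores to a 0-100 range."""
        lo, hi = min(values), max(values)
        return [100 * (v - lo) / (hi - lo) for v in values]

    universities = ["A", "B", "C"]
    indicators = {
        "entry_standards": [22.0, 18.5, 25.0],
        "completion_rate": [88.0, 92.0, 79.0],
        "staff_student":   [14.0, 17.5, 12.0],  # lower is better
    }

    # Invert "lower is better" measures before scaling.
    indicators["staff_student"] = [-v for v in indicators["staff_student"]]

    scaled = {name: scale(vals) for name, vals in indicators.items()}

    # Equal weighting: each indicator contributes the same share.
    for i, uni in enumerate(universities):
        total = sum(scaled[name][i] for name in scaled) / len(scaled)
        print(f"{uni}: {total:.1f}")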
A major feature of this year's exercise was the greater involvement of universities in discussions about the form and content of the material. The series of articles on quality and diversity in The THES has helped to shape the debate on the future pattern of higher education in the United Kingdom.
This was especially noticeable in responses to questions about the nature of the student populations and their implications for the university mission. Virtually all respondents could provide usable information on gender mix, mature students, ethnic minorities and recruits through access courses or without traditional entry qualifications. There is far less material on the socio-economic background of students. Diversity for a university goes far beyond these input measures. Notions of diversity must incorporate the individual and qualitative issues presented so coherently in the series in The THES last year. There is, however, increasing recognition that measures like the numbers from non-traditional groups can give useful insights into the way diverse missions are delivered.
The term university encompasses an increasingly broad church. This is not only because of the increased number of diverse institutions following different missions but also because of the different populations they reach. It is not clear, however, that policymakers inside or outside the sector fully appreciate the implications of this shift. Currently most universities seem preoccupied with the proposed research assessment exercise. I find it hard to agree with the priorities of the dean of a major faculty in the city-centre university of a deprived northern city who asserts: "our top priority over the next few years is to get a three in the next assessment exercise".
The precedence given to research contrasts with the slow progress made in teaching assessment. The lack of any system-wide view of teaching standards is the biggest gap both in The Times analysis and in the picture of the work of universities provided to the wider community.
Much of the comment on the tables was very constructive and helpful. Sam Moore of Manchester University, for example, highlighted a potential problem with definitions of completion rates. Our aim was to measure the proportion of all students entering the university who graduated in the normal year of completion. He pointed out that our definition could be read instead as the proportion of final-year students, or of those taking final examinations, who successfully completed their course. It took two days of telephone calls to identify and correct this misinterpretation.
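To see why the two readings matter, consider a minimal worked sketch; the figures below are invented purely for illustration and are not drawn from the tables:

    # Illustrative figures only, to show how the two readings of
    # "completion rate" diverge; not real university data.

    entrants            = 100   # students who entered the course
    final_year_students = 80    # still enrolled in the normal final year
    graduates           = 76    # completed successfully that year

    # The intended measure: graduates as a share of all entrants.
    intended = graduates / entrants                 # 0.76 -> 76%

    # The misreading: graduates as a share of final-year students.
    misread = graduates / final_year_students       # 0.95 -> 95%

    print(f"intended definition: {intended:.0%}")
    print(f"misread definition:  {misread:.0%}")

The same institution can thus report a completion rate of 76 per cent or 95 per cent depending on which reading it adopts, which is why the misinterpretation had to be chased down and corrected.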
The emergence of this problem at this stage was a timely reminder of Finagle's first rule of research: in any collection of data, the figure most obviously correct beyond all need of checking is the mistake. The rule has two corollaries: first, no one you ask for help will spot it; second, anyone who stops by with unsought help will spot it.

The overall pattern of data gathering was broadly similar to previous years. Roughly the same number of institutions were visited to gather local information. The greatest shift lies in the amount of secondary data available, which has increased significantly. Last year, a small number of universities did not collaborate with the data gathering exercise. This year every institution participated. This greater involvement may reflect progress made by the Higher Education Funding Council for England and the Higher Education Statistics Agency in agreeing standard data sets to be gathered and disseminated about all universities. Peter Toyne of Liverpool John Moores University pointed out that the statistics agency will soon provide a comparable and comprehensive set of data.
This progress in making available to the wider community information on entry standards, staff/student ratios, completion rates, expenditure profiles and employment prospects is welcome. It will be even more helpful if the data set is expanded to give a better profile of the university system. It would help, for example, if a fuller profile were available of students' successes - that is, beyond firsts to all classes of degree.
I hope that universities will disseminate much of this information locally through their annual reports. These remain a very disappointing source of information about those institutions. The apparent progress made last year in deciding to publish top salaries seems to have encountered major problems of implementation. University annual reports do not even compare well with those of public limited companies.
Tom Cannon is professor of corporate responsibility at Manchester University.