
TEF-REF ranking marks rise of 'new elite' in UK higher education

Lancaster University's vice-chancellor Mark E. Smith and Nicola Owen argue that a new composite ranking offers a more nuanced view of institutional excellence
March 15, 2018
Gold, silver and bronze TEF ratings

Something is stirring in the perceived hierarchy of higher education institutions within the UK.

New perspectives on performance are emerging as more metrics become available, complementing those that have been around for some time.

The introduction of the teaching excellence framework is already achieving some of its aims by creating a new dynamic that is disrupting perceptions about "quality" and "elites". The forthcoming changes to the teaching excellence framework and the research excellence framework, soon to be supplemented by the introduction of a knowledge exchange framework, should provide further insight into consistency of performance across the traditional core missions of a university.

To illustrate the potential impact of these new measures on traditional notions of quality within the higher education sector, we have constructed a new university league table that combines the quantitative elements of the TEF and the REF.



Combined TEF/REF ranking

Columns: Institution | Average performance against TEF benchmarks (expressed as Z-score) | TEF rank | REF GPA (adjusted for % of all academic staff submitted and expressed as Z-score) | REF rank | Overall score (TEF + REF) | Overall rank

Source: Lancaster University. See bottom of blog for methodology notes.

It is an attempt to show how universities perform from the combined perspective of education and research activities. This analysis clearly demonstrates that the common shorthand of ascribing performance to mission groups and historical reputation is at best outdated in several cases. We would go so far as to suggest that it signals the emergence of a "new elite" of universities.


In keeping with the penchant of observers of the sector to measure "performance" by creating league table rankings, and perhaps playing to the stereotype that senior managers like simplification, our league table straightforwardly combines the most recent REF and TEF metrics.

Despite well-known concerns about the robustness of TEF data, and a recognition by the Department for Education (DfE) that further improvements to the TEF can be made, the data underlying the REF and TEF are arguably much more robust than the brand references and historical reputations often used as sloppy shorthand for high quality.

Our approach follows that commonly used in other national league tables, with our only "free" choice being the weighting of the REF grade point average for intensity on the basis of the total number of staff on academic staff contracts. This choice aligns with our belief that the research intensity of a university should really reflect all staff engaged in the academic endeavour.

This approach has the advantage of better reflecting the engagement students have with staff who are developing tomorrow's thinking in their discipline. It also counters game-playing through contract changes and prevents distortion from universities whose research excellence is confined to niches representing only a very small part of their overall activity.
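For concreteness, one plausible reading of this intensity adjustment (a sketch only; the precise formula is given solely in the methodology notes below, and its exact form is open to interpretation) is to scale the REF grade point average by the share of all academic staff submitted:

adjusted GPA = REF GPA × (staff submitted ÷ total academic staff)

On that reading, a university with a REF GPA of 3.2 that submitted 800 of its 2,000 academic staff would score 3.2 × 0.4 = 1.28, whereas one submitting nearly all of its staff would retain close to its full GPA.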

The league table provides some calibration points: the three UK institutions genuinely in the world's "Champions League" occupy the top three places. Yet the methodology also challenges what might be perceived as the conventional order of things. The universities of Keele and Coventry find themselves among the top 20, while some recognised world-class institutions have depressed positions for well-rehearsed reasons around weaker-than-average TEF performance (the LSE, for example, appears at 64).

To answer the obvious suspicion that we might have constructed a league table that deliberately favours our own institution, we should point out that Lancaster has little to gain: we already perform well in all three conventional UK league tables, currently sitting inside the top 10 of each. The table we have constructed places us 8th, below our most recent Good University Guide position of 6th. Institutions performing very strongly in the TEF (such as Coventry) are rewarded with high placings.


Perhaps more interesting than the rankings of particular universities are the trends that this new league table suggests. It identifies those universities that can genuinely combine high quality education and teaching with research intensity. There is an interesting cadre of universities in the top 20 that includes Loughborough, Bath, Surrey and Dundee alongside Lancaster. These medium-sized, campus-based, genuinely research intensive universities are now clearly a key component of the emerging new elite.

Does higher education really need another league table? Perhaps not, but we believe there are good reasons for examining what institutions, stakeholders and students can learn from the new metrics and what they say about quality and performance within the sector.


The changes in the higher education regulatory system since 2015 were designed to help students make well-informed choices; to drive improvements in teaching quality by giving teaching the same significance as research; and to enable disruption in the higher education market by reducing the emphasis on "a long-established track record" and promoting competition through greater transparency and a more level, open regulatory system.

The aim of placing 'students at the heart of the system' has since become even more important in political and policy terms, with an increased focus on value for money. The HE sector has also just endured a summer of almost unprecedented criticism, focused on value for money, accusations of cartel behaviour, and the quality of teaching and standards. There is a clear thirst and need for better information on what universities have to offer.

The government is now consulting on the tricky challenge of developing TEF measures at subject-level, as well as taking a more critical look at graduate outcomes, teaching intensity and degree standards. While these developments are undoubtedly important we would urge caution.

Detailed benchmarks on disciplinary and geographical grounds will be needed to prevent students from being misled. At the same time, there is a real opportunity to inform students' choices beyond institutional brands and to highlight the specific experiences students most value. Changes to the REF should also shine a light on the true 'intensity' of research-intensive universities and highlight real excellence, wherever it is found.

Despite the breadth of information available, students still have to navigate what a 'leading' university is through misinformation and shorthand from the press, politicians or school advisers relying on outdated notions of excellence and elites. Given the DfE's commitment to the TEF, it is surprising that the department still uses these old-fashioned notions in policy documents and statements, as well as in ranking secondary school destination data.

It is time to use the new metrics to give students and parents a fresh perspective on what an 'elite' university is, and to create a more level playing field than one based on shorthand and historical artefact. The disruption this brings to the current HE ecosystem should ultimately translate into higher-quality teaching, an understanding of genuine research intensity, and clarity about which universities deliver on both.


Mark E. Smith is vice-chancellor of Lancaster University, where Nicola Owen is chief administrative officer

Notes on methodology for table:
All six TEF metrics were combined by taking the numerical difference between the indicator and the benchmark for each. The REF grade point average for research intensity was calculated by normalising on the basis of the total number of staff on academic staff contracts. This latter choice aligns with our belief that the research intensity of a university should really reflect all staff engaged in the academic endeavour. A Z-score methodology was then used, which allows completely different measures to be combined by looking at how an individual institution's indicators vary from the mean, normalised by the standard deviation of each indicator. We weighted the teaching (TEF) and research (REF) measures equally; institutions that did not take part in the TEF therefore do not appear.
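For readers who want to see the construction end to end, the following is a minimal sketch of the methodology described above, written in Python with pandas. The column names are illustrative inventions, and the exact form of the intensity adjustment (REF GPA scaled by the share of all academic staff submitted) is an assumption on one reading of the note above, not the published calculation.

```python
# Minimal sketch of the composite TEF + REF ranking described above.
# Assumed input columns (illustrative names, not from the published table):
#   tef_diff_1 .. tef_diff_6 : indicator minus benchmark for each TEF metric
#   ref_gpa                  : REF grade point average
#   staff_submitted          : staff submitted to the REF
#   total_academic_staff     : all staff on academic contracts
import pandas as pd


def composite_ranking(df: pd.DataFrame) -> pd.DataFrame:
    tef_cols = [f"tef_diff_{i}" for i in range(1, 7)]
    # Combine the six TEF metrics: mean difference from benchmark.
    tef_score = df[tef_cols].mean(axis=1)
    # Intensity-adjust the REF GPA by the share of all academic staff
    # submitted (one plausible reading of the normalisation described above).
    ref_adj = df["ref_gpa"] * df["staff_submitted"] / df["total_academic_staff"]
    # Z-score each measure so two very different scales can be combined:
    # distance from the mean, normalised by the standard deviation.
    def z(s: pd.Series) -> pd.Series:
        return (s - s.mean()) / s.std()
    out = df.assign(tef_z=z(tef_score), ref_z=z(ref_adj))
    # Equal weighting: the overall score is the sum of the two Z-scores.
    out["overall"] = out["tef_z"] + out["ref_z"]
    out["overall_rank"] = out["overall"].rank(ascending=False, method="min").astype(int)
    return out.sort_values("overall_rank")


# Toy example with three hypothetical institutions (illustrative numbers only):
toy = pd.DataFrame({
    "institution": ["A", "B", "C"],
    **{f"tef_diff_{i}": [0.5, -0.2, 0.1] for i in range(1, 7)},
    "ref_gpa": [3.3, 3.0, 2.7],
    "staff_submitted": [900, 400, 600],
    "total_academic_staff": [1000, 1200, 800],
})
print(composite_ranking(toy)[["institution", "tef_z", "ref_z", "overall", "overall_rank"]])
```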

Reader's comments (2)
Any attempt to engineer new rankings needs to avoid distortion caused by inconsistent data definitions and subject differences. How research intensity is calculated is really critical here, as different approaches will give very different answers. The article states in the footnotes that "The REF grade point average for research intensity was calculated by normalising on the basis of the total number of staff on academic staff contracts. This latter choice aligns with our belief that the research intensity of a university should really reflect all staff engaged in the academic endeavour". Yet the table heading says something different: that the "REF GPA adjusted for the % of staff submitted" is used. Which is it? We think it is the former, and would highlight that there are significant flaws with this approach of using all academic staff in a measure of research intensity. The problems with using staff data in the public domain include:
  • Universities classify different types of non-standard staff who teach in different ways - some categorise graduate teaching assistants as academic staff while some use a different category. Where teaching staff are included, it depresses the research intensity.
  • Universities vary substantially in their discipline base, and consequently in the proportion of academic staff with teaching-focused contracts (e.g. languages, nursing, conservatoire subjects, foundation studies). In these subjects, even in research intensives, there is a higher number of teachers compared with researchers, so subject mix has a distorting effect.
As mentioned above, we need to be very careful and transparent about the way we use figures to compare universities, or there is a real risk of distortion due to inconsistent data definitions and subject differences. The better option is to use a reliable measure of research intensity that homes in on academic researchers, instead of one based on a poorly defined and subject-dependent categorisation of academic staff.
Posted on behalf of Dr Sonia Virdee, Director of Strategic Planning and Change, University of Essex
Also be careful: some of these universities have lots of Mickey Mouse degrees - excellence in sports science or gender studies is maybe not so important as excellence in engineering or physics.