
‘New elite’ emerges as UK ranking combines TEF and REF

Architects of merged league table say results show that old hierarchies are outdated, but pre-92 institutions still dominate
March 15, 2018
Loughborough University hot air balloon (Source: Alamy)

A new league table that attempts to combine the results of the teaching and research excellence frameworks demonstrates that a “new elite” of universities is emerging in UK higher education.

That is the claim of two senior university leaders who created the ranking by putting together grade point averages from the 2014 REF, weighted for the number of staff submitted, and the average score across the six metrics underpinning the 2017 TEF.

The table – which for final scores gives equal weighting to both exercises – is still headed by the three UK research universities that tend to rank highest in international league tables (the universities of Cambridge and Oxford and Imperial College London).

However, several smaller research-led institutions and some modern universities achieve relatively high placings in the list thanks to strong TEF scores. They include Loughborough University (5th), the universities of Surrey and Bath (6th and 7th respectively), Coventry University (18th) and Liverpool Hope University (37th).
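As a rough illustration of how such a combination might work, the sketch below blends a staff-weighted REF grade point average with the mean of the six TEF metrics at equal weight. The min-max normalisation step and all figures are assumptions for demonstration, not the Lancaster team's actual method or data.

```python
# Illustrative sketch of a combined TEF/REF score along the lines described
# above. The normalisation and all numbers are assumptions for demonstration,
# not the Lancaster team's actual method or data.

def ref_component(ref_gpa, pct_staff_submitted):
    """REF grade point average weighted by the share of staff submitted."""
    return ref_gpa * pct_staff_submitted

def tef_component(metric_scores):
    """Average of the six core TEF metrics."""
    return sum(metric_scores) / len(metric_scores)

def combined_ranking(institutions):
    """Rank institutions by a 50/50 blend of normalised REF and TEF components."""
    refs = [inst["ref"] for inst in institutions]
    tefs = [inst["tef"] for inst in institutions]

    def norm(x, xs):
        # Rescale to [0, 1] so the two exercises are on a comparable footing.
        return (x - min(xs)) / (max(xs) - min(xs))

    scored = [
        (inst["name"], 0.5 * norm(inst["ref"], refs) + 0.5 * norm(inst["tef"], tefs))
        for inst in institutions
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Toy example with invented figures for three hypothetical institutions.
unis = [
    {"name": "A", "ref": ref_component(3.3, 0.9), "tef": tef_component([3, 3, 3, 2, 3, 3])},
    {"name": "B", "ref": ref_component(3.1, 0.6), "tef": tef_component([3, 3, 3, 3, 3, 3])},
    {"name": "C", "ref": ref_component(3.0, 0.3), "tef": tef_component([2, 2, 3, 2, 2, 3])},
]
print(combined_ranking(unis))  # A tops the list despite B's stronger TEF average
```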


TEF/REF ranking top 20

See here for full table, scoring and methodology

Institution | TEF rank | REF rank | Overall rank
University of Cambridge | 13 | 1 | 1
University of Oxford | 10 | 3 | 2
Imperial College London | 28 | 2 | 3
University of St Andrews | 8 | 6 | 4
Loughborough University | 6 | 9 | 5
University of Surrey | 2 | 29 | 6
University of Bath | 5 | 24 | 7
Lancaster University | 22 | 8 | 8
University of Birmingham | 16 | 13 | 9
Keele University | 3 | 37 | 10
University of Dundee | 4 | 32 | 11
University of Exeter | 16 | 21 | 12
University of Leeds | 14 | 26 | 13
Newcastle University | 32 | 10 | 14
Durham University | 33 | 11 | 15
Royal Holloway, University of London | 31 | 14 | 16
University of Bristol | 54 | 4 | 17
Coventry University | 1 | 95 | 18
University of York | 30 | 23 | 19
University of East Anglia | 9 | 39 | 20
Source: Lancaster University.

Writing online for Times Higher Education, Mark Smith, vice-chancellor of Lancaster University, and Nicola Owen, the institution’s chief administrative officer, who worked on the ranking with the institution’s data analytics unit, say that combining TEF and REF metrics was worthwhile “despite well-known concerns about the robustness of TEF data”.

This was because “the data underlying REF and TEF are arguably much more robust than using brand references or historical reputations which are often used as sloppy shorthand for high quality”.

The pair add that the list produces “an interesting cadre of universities in the top 20” that are “medium-sized, campus-based, genuinely research-intensive universities” that in their opinion “are now clearly a key component of the emerging new elite”.

Addressing the “obvious suspicion” that they constructed the table to favour Lancaster – which is 8th – they point out that the institution “has little to gain, as we perform well in all three conventional UK league tables, being currently inside the top 10 of all of them”.

However, they accept that “some recognised world-class institutions have depressed positions because of well-rehearsed reasons around weaker TEF performance than the average”, highlighting the London School of Economics’ placing (64th).

This is likely to be one of the criticisms of combining the exercises as, apart from Imperial, London universities – which by and large performed badly in the TEF – all appear relatively low in the list.

The table is also still dominated by pre-92 universities. This could arguably be because of the method used to weight REF scores, which reflects the percentage of all academics – including teaching-only staff – submitted to the exercise. Such an approach could have amplified REF scores for research-intensives and depressed them for institutions with more of a teaching focus.
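A hedged toy calculation (invented figures, not real institutional data) makes the effect concrete: two universities with identical raw REF quality end up far apart once the score is scaled by the proportion of all academic staff submitted.

```python
# Invented figures showing how weighting a REF GPA by the share of ALL
# academic staff submitted (teaching-only staff included) depresses the
# score for teaching-focused institutions. Not real institutional data.

ref_gpa = 3.2  # identical raw REF grade point average for both

research_intensive = ref_gpa * 0.85  # 85% of academic staff submitted -> 2.72
teaching_focused = ref_gpa * 0.35    # 35% submitted -> 1.12

print(research_intensive, teaching_focused)  # 2.72 vs 1.12, from the same GPA
```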

Professor Smith told Times Higher Education that combining the data did inevitably “lend bias” towards universities “whose missions are both teaching and research-focused”.

“We are transparent about this and it comes from our belief that having excellence in both research and teaching is an important factor in defining leading universities internationally,” he added.

Alan Palmer, head of policy and research at MillionPlus, which represents a group of post-92 universities, warned that rather than alter existing hierarchies, “blunt combinations” of the REF and TEF risked reinforcing them.

“The intent behind the TEF was to identify and recognise excellence in teaching and so raise its status. Dovetailing TEF results to REF league tables does little to achieve this and will do nothing to help students make informed decisions about the courses that are right for them,” he said.

simon.baker@timeshighereducation.com


Reader's comments (3)
Multiply a relatively meaningless number (REF) by a totally useless number (TEF) and obtain new insights? No.
Any attempt to engineer new rankings needs to avoid distortion caused by inconsistent data definitions and subject differences. How research intensity is calculated is really critical here, as different approaches will give very different answers.

The article states in the footnotes that “The REF grade point average for research intensity was calculated by normalising on the basis of the total number of staff on academic staff contracts. This latter choice aligns with our belief that the research intensity of a university should really reflect all staff engaged in the academic endeavour”. Yet the table heading says something different: that the “REF GPA adjusted for the % of staff submitted” is used. Which is it? We think it is the former, and would highlight that there are significant flaws with this approach of using all academic staff in a measure of research intensity. The problems with using staff data in the public domain include:

• Universities classify different types of non-standard staff who teach in different ways: some categorise graduate teaching assistants as academic staff while some use a different category. Where teaching staff are included, this depresses the research intensity.
• Universities vary substantially in their discipline base, and consequently in the proportion of academic staff with teaching-focused contracts, e.g. languages, nursing, conservatoire subjects, foundation studies. In these subjects, even in research intensives, there is a higher number of teachers compared with researchers, so subject mix has a distorting effect.

As mentioned above, we need to be very careful and transparent about the way we use figures to compare universities, or there is a real risk of distortion due to inconsistent data definitions and subject differences. The better option is to use a reliable measure of research intensity that homes in on academic researchers, instead of one based on a poorly defined and subject-dependent categorisation of academic staff.

Posted on behalf of Dr Sonia Virdee, Director of Strategic Planning and Change
Hi, thanks for your comment. Sorry if the column heading in the main table on the blog isn't clear, but it's my understanding that the REF scores were indeed calculated on the basis of the percentage of all academic staff (including teaching-only staff, not just those REF-eligible) who were submitted to the REF. I do discuss in the news article how this could have depressed the REF score for unis with a lot of teaching-only staff, as you point out. Simon.