
How might a teaching excellence framework be built?

As a vague policy commitment moves towards reality, Jack Grove assesses the potential ways and means
July 23, 2015

When he was working on the Conservative Party’s 2015 manifesto earlier this year, Jo Johnson – then head of the No 10 Policy Unit – probably didn’t give much thought to a short and colourless statement on page 35.

The commitment to “introduce a framework to recognise universities offering the highest teaching quality” didn’t even merit its own sentence. Instead, it was bound up in a checklist of aspirations for higher education, including more two-year degrees and better data for university applicants.

But scroll forward a few months and Johnson, the new minister for universities and science, finds himself charged with turning this vaguest of policy promises into reality.

And the policy took on a new significance on 8 July when the chancellor, George Osborne, announced that the government would “link the student fee cap to inflation for those institutions that can show they offer high-quality teaching”, in what was widely interpreted as a reference to the forthcoming “teaching excellence framework”.

So what might the TEF look like? And what could it deliver for students, lecturers and universities?

Supporters view the idea as a long overdue corrective to a system that measures and rewards only research quality, via the research excellence framework, on the basis of which more than £1 billion in annual funding rides.

Delivering his first major policy speech on 1 July, Johnson said the TEF would “root out bad teaching” and provide “incentives to make good teaching even better”.

The framework should be informed by “a clear set of outcome-focused criteria and metrics”, he said, and he invited universities to submit their comments on such measures over the summer.

For David Palfreyman, director of the Oxford Centre for Higher Education Policy Studies, “the big problem is that there is so much fuzziness around defining ‘quality’ in higher education, let alone measuring it”.

However, many sector experts believe that the feat is not impossible. If it can be done for research, why not for teaching? Several metrics already provide good indicators of teaching quality and, knitted together with new assessment mechanisms under development, a credible system for ascertaining and ultimately ranking quality could be achieved, they say.

David Willetts, who held the higher education brief for most of the last Parliament, certainly believes that there are already some indicators that could inform a future framework.

In a recent interview with Times Higher Education, the former universities and science minister described teaching as “by far the weakest aspect of English higher education” (“I thought – and still believe – that what I was doing was in the interests of young people. But having so many of them so angry…”, Features, 18 June). He went on to mention the engagement metrics championed by Graham Gibbs, former director of the Oxford Learning Institute at the University of Oxford.

Writing in Dimensions of Quality, his 2010 report for the Higher Education Academy, Gibbs maintains that the quality of university education can be assessed on the basis of various measures of class size, teaching staff, the effort students make and the quality of feedback they receive.

Student engagement is already successfully measured in the US by the National Survey of Student Engagement, Gibbs writes. That survey – in which students are asked how frequently they contribute to class discussions, how much they talk over ideas outside class with faculty members and how often they come to class unprepared – has informed a UK endeavour, the HEA’s UK Engagement Survey, which was piloted in 2013 and had a larger trial in 2014.

ÁñÁ«ÊÓƵ

ADVERTISEMENT

Last year, more than 25,000 students at 32 UK institutions took part in UKES, although this represented a response rate of only 13 per cent. The survey included 39 questions drawn from the NSSE, as well as 11 unique to UKES.

The trial was launched amid concerns that the UK’s National Student Survey has outlived its initial usefulness as a method of targeting areas of weakness in order to improve the student experience. With 86 per cent of students responding to last year’s NSS stating that they were satisfied overall with their course and just 7 per cent of respondents dissatisfied, many wonder if scores can go much higher. (However, there remains some room for improvement in the area of assessment and feedback: 72 per cent said that they were satisfied with this aspect of the teaching they received.)

A 2014 review of the NSS commissioned by the Higher Education Funding Council for England found that many institutions felt that the survey did not take sufficient account of student engagement with learning, and it recommended introducing 11 new NSSE-style questions by 2017.

But the report on the UKES pilot says that finding the right questions to allow meaningful comparisons between subjects could be difficult, with “pronounced variations” in engagement by discipline. The skills acquired also differed by subject, with 64 per cent of language students participating in the UKES pilot reporting little improvement in numerical skills compared with 3 per cent of engineers.

In his speech, Johnson said that he was pleased to see the piloting of NSS questions on engagement, as “this was shown in the US to be a good proxy for the value add of a university in terms of ‘learning gain’”.

Alexander McCormick, the director of the NSSE, says he would be wary of adapting the survey – used principally in the US to drive institutional improvements internally – as the basis for a national ranking. “We see the information produced as valuable for internal diagnosis of problems, rather than accountability,” explains McCormick, who is based at Indiana University.

Linking financial rewards to NSSE-type indicators would be perilous, according to McCormick: “If students feel this is a high-stakes measure, it will corrupt the data, which is already happening a bit to the National Student Survey data.” He doubts that teaching quality can be “simply measured by a survey” because it is almost impossible to tease out the “fine divisions” that a ranking of universities would require.

Indeed, concerns about complexity appear to have put paid to a highly controversial plan announced in late 2013 by US President Barack Obama to rank universities on the basis of the value they offered in terms of cost, debt, graduation rates and earnings on graduation. Jamienne Studley, the US deputy undersecretary of education, wrote earlier this month that the government now intends merely to release “new, easy-to-use tools” that will give students “more data than ever before” to compare tuition fees and outcomes.

So what about Gibbs’ other suggested indicators? Student-to-staff ratios might be viewed as a simple and reliable measure of institutions’ investment in teaching. But McCormick is not a fan. Such a ratio “simply measures the wealth of an institution” rather than its determination to incentivise and encourage good teaching, he argues.

Furthermore, low student-to-staff ratios can give a distorted impression of the number of lecturers available to learners if staff are spending most of their time doing research, the University and College Union has warned. Many lecturers counted as full-time in the ratios used by newspaper league tables actually taught only one day a week on average, a 2012 study by the union found.
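
To make the union’s point concrete, here is a minimal sketch in Python of how a headline student-to-staff ratio can understate the effective teaching ratio when staff time is dominated by research. All figures are assumptions for illustration, not numbers from the UCU study:

```python
# Illustrative only: headline vs effective student-to-staff ratio.
# The institution size and staffing figures below are assumed.

students = 15_000
staff_fte = 1_000                 # staff counted as full-time in league tables
headline_ratio = students / staff_fte             # looks like 15:1 on paper

teaching_fraction = 1 / 5         # UCU finding: ~one teaching day a week
teaching_fte = staff_fte * teaching_fraction
effective_ratio = students / teaching_fte         # closer to 75:1 in practice

print(f"headline {headline_ratio:.0f}:1, effective {effective_ratio:.0f}:1")
```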

This leaves open the question of whether new data could be collected that give more accurate information on seminar sizes, for example.

What about the NSS, which some institutions already value highly as an indicator of their teaching quality? Could that provide a sound basis for a TEF?

Paul Ramsden, former chief executive of the HEA, who helped to develop a direct forerunner of the NSS in Australia in the 1990s, does not think the idea should be discounted. He argues that the annual survey “already acts as a proxy for learning gain” because students who report more positive experiences gain better degrees, even after controlling for entry scores.
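
The kind of check Ramsden describes can be sketched in a few lines. This is a hypothetical illustration only – the dataset, column names and simple linear model are assumptions, not the method of the underlying studies:

```python
# Hypothetical sketch: does survey satisfaction still predict degree
# outcomes once entry scores are controlled for? If so, satisfaction is
# acting as a proxy for learning gain in the sense Ramsden describes.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")   # assumed file, one row per student
# assumed columns:
#   degree_points – degree classification on a numeric scale
#   satisfaction  – NSS-style overall satisfaction (1-5)
#   entry_tariff  – UCAS tariff points on entry

model = smf.ols("degree_points ~ satisfaction + entry_tariff", data=df).fit()
print(model.summary())   # check the sign and significance of 'satisfaction'
```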

But Quintin McKellar, vice-chancellor of the University of Hertfordshire, believes that the inclusion of the NSS in a TEF would be problematic.

“Students are as good a measure of educational quality as you can get, but there are a number of frailties in using NSS scores,” he says. “Students will often assess lecturers on the basis of how entertained they felt in class, rather than the substance of what they were taught, so that may drive the way people teach.”

McKellar identifies grade inflation as another risk, pointing out that studies have shown that students will reward lenient markers with higher personal ratings. Such grade creep, he says, could be especially serious in any system that rewards value-added metrics, in which points are awarded for the number of students who enter university with low entry grades but go on to achieve a first or a 2:1 in their degree. “Universities are not naive to the fact that good scores lead to improvements in league table rankings,” he says – although he adds that grade inflation brings with it “real reputational risks”.
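
The article does not define any value-added formula, but a minimal sketch of the kind of calculation McKellar is worried about might look like this (the tariff threshold and field names are hypothetical):

```python
# Hypothetical value-added metric: the share of low-entry-tariff students
# who achieve a first or 2:1. McKellar's concern is the feedback loop:
# lenient marking inflates degree classes, which inflates this number,
# which improves league-table position.
from dataclasses import dataclass

@dataclass
class Graduate:
    entry_tariff: int      # UCAS points on entry (assumed field)
    degree_class: str      # "first", "2:1", "2:2" or "third"

def value_added_share(grads: list[Graduate], low_entry: int = 200) -> float:
    """Proportion of low-entry students achieving a first or 2:1."""
    low = [g for g in grads if g.entry_tariff < low_entry]
    if not low:
        return 0.0
    good = sum(g.degree_class in ("first", "2:1") for g in low)
    return good / len(low)
```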


Other academics have expressed concerns about the NSS’s focus on “satisfaction” and the pressure on students to report good scores.

How Johnson intends to deliver on his promise that the TEF will provide “incentives for the sector to tackle degree inflation” will be the subject of much interest when his Green Paper is published this autumn.

Johnson also called for the TEF to be “underpinned by an external assessment process undertaken by an independent quality body from within the existing landscape”, raising questions about how this will be achieved.

McKellar has previously asked how students, “who have little previous experience of university teaching and few reliable benchmarks, [can] make a balanced judgement” about teaching quality (“A jury of their peers”, 24 May 2012). He believes that academics seeking to be promoted on the basis of their teaching should instead be required to undergo class observations. Does this peer review approach – as used, of course, in the REF and other parts of academia – offer a potential way forward?

“Peer review for a TEF would be much more complex than for a REF because there would be competing views on how to teach from individuals at competitor universities,” McKellar says. “It could only be done if you had a large enough cohort of independent peers, otherwise the game-playing between competing universities would render the exercise worse than useless.”

Even if an Ofsted-style body for observing teaching in higher education were created, he wonders who and what it would assess. “Our university has more than 1,000 academics; how many of these people would you need to peer review to gain a representative sample? You would also need to assess them in different environments – the lecture theatre, laboratories, seminars, computer-assisted learning – to get a reasonable insight into how they taught.”

Sir Tim O’Shea, principal of the University of Edinburgh, has warned of the danger of creating a teaching assessment exercise that becomes “a bureaucratic industry where we spend our time, rather than devising assessments and supporting learning, filling in forms and feeding tuna fish sandwiches to visiting assessors”.

But metrics and a system of peer observation are just two options. Many TEF advocates believe the process should also be informed by emerging methods of quantifying learning gain. Some of the standardised tests used in the US, such as the Collegiate Learning Assessment, or the Assessment of Higher Education Learning Outcomes being developed by the Organisation for Economic Cooperation and Development, might provide useful data on how well different universities develop graduates’ critical thinking skills, according to Willetts.

However, educators might also be loath to ask their students to sit a test when they have had no hand in creating it, or have little idea about what it might contain. Lecturers in the US have questioned the validity of such tests, which are normally based on a business scenario, claiming that they reward shallow and specious arguments and stifle creative thinking.

Making sure that enough university students sit the exams – which do not contribute to their final degree – is another problem. Critics claim that the low participation rates for CLA tests in the US have skewed the results. In Brazil, where universities already ask students to sit entry and exit exams similar to the proposed Ahelo system, many students skipped the tests or did not take them seriously, meaning that many of Brazil’s most highly regarded universities were adjudged to be the worst for learning gain.

The impact of learning gain assessments on the development of new university courses would also be “stultifying”, warns McKellar. “Instead of having unique and creative courses, you would have educational programmes to improve results in this test,” he says.

And last week it emerged that the Department for Business, Innovation and Skills has declined the opportunity to take part in the OECD’s Ahelo project to measure learning outcomes of university graduates around the world.

Other methods of determining teaching quality have also been suggested.

Could an independent assessment of a university’s teaching and learning strategy – how it rewards and enables outstanding teaching – be coupled with indicators on dropout rates, student experience and graduate destinations? One possibility would be a REF-style system, involving star-graded, peer-assessed judgement of excellence by academics, students and even industry experts.

Data indicating how hard an institution’s students work outside class (in which UK universities perform particularly poorly by international standards) could also be relevant to a TEF. Information showing the prevalence of formal teaching qualifications among an institution’s academic staff, which is already collected by the Higher Education Statistics Agency, might also provide a useful insight into the value placed on teaching, some believe.

And the government may consider including a metric on the proportion of disadvantaged students admitted by each institution; Johnson said that he wanted the TEF “to recognise those institutions that do most to welcome students from a range of backgrounds and support their retention and progression to further study or a graduate job” (see ‘Carat and stick: can an education’s worth be weighed in gold?’ box, below).

“There are enough building blocks in place and others under development to make [a TEF] happen,” claims Chris Rust, professor of higher education at Oxford Brookes University. He believes that the threat of grade inflation could be countered by the “degree calibration” groups mentioned in proposals by Hefce for its new quality assurance regime, in which subject experts would blind-mark final-year dissertations.

However, any indicator is likely to be open to game-playing and distortion by academics, argues Todd Huffman, lecturer in physics at the University of Oxford. Academics are “carefully trained optimisers” who “will take any metric and then distort the playing field in ridiculous and clever ways to obtain the funding we think we need to carry out our core mission. If the government [creates] levers, we will pull them until they break, simply because there is never enough money to do what we all know is the right thing by our students and our research.”

What of the government’s plans to link TEF results to the future level of tuition fees? The move has been welcomed by the Russell Group of research-intensive universities, and University Alliance called the decision “reasonable”.

But is the approach fair? When Labour proposed a link between university funding and course quality in its 2009 framework on the future of higher education, the sector voiced concerns. At the time, Bahram Bekhradnia, who was then director of the Higher Education Policy Institute, argued that “in research, you are quite happy to have some universities do more research than others and be better funded – but in teaching it is completely different and highly doubtful that you would want to penalise those students who attend universities that are already…less ‘good’”.

Speaking before Osborne’s announcement, Hepi’s current director, Nick Hillman, said that universities’ thirst for success in the REF is “as much to do with reputation and league table placing as money”, but he added that it might be “a bit odd to say: ‘You are really good at teaching: here is some more money.’ What do you do with it? Do some more teaching?”

And allowing some institutions to charge more for their teaching ignores the variable quality found across and within all institutions, the NSSE’s McCormick believes. “Politicians want one number for [each] university, so they distribute rewards on that [basis]. But some students at ‘teaching excellent universities’ have less than excellent experiences in their classroom,” he says.

So might a department-level approach work better, with top-ranked departments permitted to charge more than their rivals, even if that meant arts students paying more than their peers in the sciences? That would definitely require legislation and would throw up new perils for the Treasury, says Palfreyman, co-author of The Law of Higher Education. “Teaching in an astrology degree might be rated highly, allowing, say, £12,000 to be charged, yet the employability and hence loan repayment record of astrology grads may be revealed as poor,” he says.

Despite all the concerns, many believe that a workable TEF is achievable and vital to the improvement of teaching in universities.

“As long as we avoid the pitfalls of the REF, a TEF has more going for it than against it,” Rust believes. “Anything that attempts to put teaching on a par with research is a good thing. Students have had a raw deal for too long as teaching and learning budgets are plundered to support research.”

Count the right things in: a v-c calls for ‘evidenced reflection’ of the impact of teaching

With students making a substantial investment in their university education, it is easy to understand the government¡¯s desire to quantify the value of the teaching they benefit from.

We can all get behind a mechanism that demonstrates the effectiveness of universities as vehicles through which to invest in the country’s future talent – particularly as we approach what is bound to be a bruising period of public spending cuts.

But there are a few hurdles that a teaching excellence framework must clear if it is to prove effective. First, it needs to measure the right things. It cannot be a superficial extension of the data provided through the Key Information Set, a variant on the Quality Assurance Agency’s higher education review or some rehash of the subject league tables that drive universities to offer higher and higher proportions of firsts and 2:1s.

It must be an evidenced reflection of the impact the university experience has on students, examining how a university supports students to push their academic limits and helps to set them on ambitious career paths, not just how well already successful students maintain their success.

At Aston University, we pride ourselves on supporting students in developing independent learning skills and on feeding research and enterprise engagement activity back into teaching. We are by no means alone in delivering such added value, but it is not something that is easily captured through a tally of contact hours, staff-to-student ratios and percentages of top degree grades. Teaching, research and employability go hand in hand, each supporting the other; the best universities are the ones that understand and act on this.

Second, the TEF needs to make life easier rather than harder for university staff. It would be ironic if the time and resource needed to comply with a system actually detracted from the quality of the process it is designed to assess. One of the criticisms that can be made of the research excellence framework is that it does just this. The TEF needs to be the result of an overarching reform of the ways in which the quality of the entire student experience – not just classroom etiquette – is assessed.

This means that it is important to consider where the TEF would fit in relation to (or in place of) Key Information Sets, the National Student Survey and the Destination of Leavers from Higher Education survey. A tightly focused new assessment framework could reduce the current expensive, overlapping bureaucracy that institutions have to deal with. Could we adopt the Cabinet Office’s phrase and refer to this as our own “red tape challenge”?

Finally, the TEF needs to be meaningful. It must give recognition to the universities that offer students the most value and provide strong motivation for those that need to improve. The Budget announced plans to allow universities with high-quality teaching to raise fees in line with inflation. That is certainly an idea with some merit that warrants further investigation.

The “value” equation has two sides. It’s about the benefits and experience that students get, as well as about how much they pay. The expectation, when the Browne Review recommended the lifting of the fee cap, was that only universities offering the best-quality education would be permitted by the market to charge the highest rate.

But without a properly evidenced measure of that quality, students have not had all the information they need when making their decision. How can they know the right price to pay for something if its value is unclear? So we have been left with a system in which research reputations, marketing spend and a handful of “malleable measures” count for more than real “value add”.

Aligning fees with a quality assessment based on research evidence would build recognition of the long-term value for students. It could change the way universities are ranked and, more importantly, it could give more of our graduates a richer, more tailored experience – and a better start in their careers.

It wouldn’t be an easy option, but it would be an opportunity for Jo Johnson, the new minister for universities and science, to complete the task his predecessors left unfinished and truly put students at the heart of the system.

Dame Julia King is vice-chancellor of Aston University.


Carat and stick: can an education’s worth be weighed in gold?

Metrics on graduate earnings may be the most controversial of any student outcome dataset under consideration for the teaching excellence framework – and academics have described them as “dangerous”.

During his first major policy speech on 1 July, Jo Johnson spoke approvingly about work already under way on new methods of measuring graduates¡¯ success in the labour market, which could include the use of Department for Work and Pensions data on unemployment rates by institution. Ministers are thought to be keen to include measures of employment success in the TEF.

Long-awaited research by Anna Vignoles and Neil Shephard into graduate earnings by institutions, using hitherto unseen Treasury data, will also pique the interest of policymakers when it is finally published. Many expect that this rich data seam will provide a more accurate and comprehensive picture of graduate earnings than that captured by the Destination of Leavers from Higher Education survey, which reports results of a sample of graduates at six months and at two years after graduation.

But vice-chancellors and academics at many universities are likely to have deep concerns about any approach that makes graduate employment rates and earnings central to an exercise to measure teaching quality, with some arguing that graduates’ earnings will have little or no relationship to the quality of pedagogy they experienced at university.

A 2010 study by the educationalist Graham Gibbs for the Higher Education Academy drew together research spanning three decades and found that information about graduate earnings and employment rates tells applicants little about the quality of education they can expect to receive. This is because the data are strongly influenced by factors such as institutional reputation, “invalid” league tables, students’ entry grades and social class, according to Gibbs’ HEA report, Dimensions of Quality. Instead, the best indicators of a good-quality education are measures of “educational process”, including class size, teaching staff, the effort students make and the quantity and quality of feedback they receive, says Gibbs.

Redbrick universities situated in northern cities where incomes are generally lower than in London will fear that they could be penalised because many of their graduates stay local. Those with a high proportion of students destined for nursing, teaching and other parts of the public sector, traditionally not high-salary jobs, may also feel hard done by if they are marked down on such a metric.

Universities with a high number of students from deprived backgrounds will also wonder if they will suffer, given that social class remains a major factor in earning patterns over a graduate’s lifetime. The same argument may also be made for those with a high proportion of ethnic minority students, older students and women.

Staff at those universities that might expect to do best from a graduate earnings metric would also be uncomfortable with the indicator, believes Gordon Chesterman, director of the careers service at the University of Cambridge.

“A lot of departments at Cambridge and other research-led universities judge their success on the number of students who progress to a PhD, not how many start jobs on £55,000 a year in investment banks,” Chesterman says. “As a careers service, we would never dream of saying you should become a corporate lawyer simply because it’s well paid. Students would see through it quickly and view us as agents of the university.”

Many would baulk at the idea of promoting certain careers, he adds. “I was hearing from a colleague how one of his students had gained a first in chemistry, but was now teaching yoga, which isn’t a very well-paid job – he was very proud of that student,” says Chesterman, adding that the use of DLHE information in any future assessment would be particularly misguided as many careers took a number of years to yield a good salary. “It’s not just careers in the arts and creative industries – I’m thinking about High Court judges, Cabinet ministers and partners in law firms,” he says.

Roger Brown is emeritus professor of higher education policy at Liverpool Hope University and former head of the Quality Assurance Agency’s predecessor, the Higher Education Quality Council. He describes the idea of including a metric on graduate earnings in the TEF as “completely ludicrous – like the whole idea of the framework – because no allowance is made for the inputs, not only teaching quality but also socio-economic background and the school attended”.

That view is echoed by Geoffrey Alderman, professor of history and politics at the University of Buckingham.

“Graduate earnings is a very dangerous metric,” he says. “The salary derived from a liberal arts degree does not reflect in any way the quality of teaching received by students. I teach history because I hope to change the intellectual state of my students – their future earnings is not something that enters into my mind.”

Jack Grove

POSTSCRIPT:

Print headline: Elements of framework construction
