What is a ‘low-quality’ course?

The UK government is determined to make good on its manifesto pledge to crack down on substandard university programmes. But what is quality? Is it best measured by graduate earnings, learning gain, a national university curriculum – or something else entirely? Anna McKie ponders the options
March 19, 2020
Dog race. Source: Getty

The UK Conservative Party’s platform for December’s general election may have been dominated by Brexit, but leaving the European Union was not the only thing that the party’s manifesto pledged to get done. Also listed was a commitment to tackling what the party deems “low-quality courses” in UK universities.

Concerns about standards in higher education are not new, but they have intensified since the Conservatives, in coalition with the Liberal Democrats, tripled England’s tuition fee cap in 2012 and subsequently removed restrictions on how many students each institution can recruit. This, in turn, led to an explosion in unconditional offers and to accusations that some universities are pursuing a “bums on seats” approach to recruitment, without due care for the educational value of what their students receive.

In February, for instance, it was reported that some students at the universities of Manchester, Nottingham and Lancaster were being turned away from overcrowded lecture theatres and asked to watch live streams of the proceedings from nearby coffee shops. But, inevitably, most of the concern is focused on post-92 universities. That attitude is encapsulated in comments by Nick Timothy, Theresa May’s chief adviser during the first year of her premiership. In his inaugural newspaper column after being forced to quit in the wake of the 2017 general election, he recalled that on a recent trip to the barber, his “hair was cut by a young man who told me he had graduated from Southampton Solent University with a degree in football studies…I doubted whether he thought his qualification was worth the debt he will carry as a millstone around his neck for 30 years.” Timothy’s conclusion? That English higher education is a “pointless Ponzi scheme” that is “blighting young people’s futures”.

Universities’ defence of their standards is not helped by reports of rampant grade inflation. Between 2010-11 and 2016-17, for instance, the proportion of graduates awarded first-class degrees increased by 11.6 percentage points, and a major report in 2018, commissioned by Universities UK, found that most of the rise in the proportion of firsts awarded between 2008 and 2017 was “unexplained” by factors such as higher entry standards.

All of this has culminated in England’s regulatory body, the Office for Students, being charged by the government with using its powers to prevent universities from receiving student loan funding for supposedly “poor-quality” courses – and, ultimately, with shutting such courses down altogether.

The larger problem, of course, is to define what counts as a “quality” course. The UK already has a Quality Code for Higher Education, overseen by the Quality Assurance Agency, which enshrines “a series of expectations, which clearly and succinctly express the outcomes providers should achieve in setting and maintaining the standards of their awards, and for managing the quality of their provision”. However, it is evident that not everyone is convinced that the standards are being met – or, if they are, that they are high enough. Hence the recent speculation about the potential establishment of a school-style “Ofsted for universities”, with the power to send inspection teams into institutions.

Nor does the external examining system – which is supposed to guarantee rigorous assessment standards – command universal confidence. Camille Kandiko Howson, associate professor of education at Imperial College London, says that one main problem with the system is that all the feedback is for the consumption of other academics: “Academics feel really busy and overwhelmed but students don’t feel like they are getting enough support and feedback,” she says. More fundamentally, however, she doesn’t see “how having someone who you may know, who could be your friend, coming to review a course offers the necessary amount of rigour. I know there is work going on to train external examiners but I would argue that part of the problem is that the system might no longer be fit for purpose.”

Indeed, for Iain Mansfield, the former head of education, skills, science and innovation at the Policy Exchange thinktank, who was recently appointed as special adviser to UK education secretary Gavin Williamson, grade inflation is proof that the external examining system has “clearly not worked”.

Greyhound. Source: Getty

Quite apart from issues of baseline institutional quality, what many employers, students and parents want is a way to assess universities’ relative quality. Traditionally, people have had little to go on beyond institutional prestige and league tables. More recently, the teaching excellence framework (TEF) was introduced by former universities minister Jo Johnson as a means by which university teaching could be directly measured and compared. This has revealed that there is “gold”-standard teaching across the sector and, conversely, that some Russell Group universities’ teaching offerings fall into the lowest “bronze” category. A similar exercise is planned to begin shortly in Australia.

However, the TEF currently provides information only at the institutional level, and its extension to subject level is proving controversial. Moreover, its metrics – primarily student retention, student satisfaction and graduate earnings – have been criticised as poor proxies for teaching quality.

When the government itself talks about quality, it appears to be most interested in graduate earnings. The longitudinal education outcomes (LEO) project, initiated by former universities minister David Willetts, draws on tax data to reveal average earnings for each university course, and it has equivalents in other countries, such as Canada and New Zealand. This chimes with Timothy’s view that a low graduate salary is an indictment of the course, but that perception is highly contested. Kandiko Howson laments the “really unhelpful discourse” about graduate salaries, given that they are so dependent on employment sector and geographical region: “You’re really just measuring the value of doctors’ versus nurses’ salaries, or whether graduates live in the north or south,” she says.

Moreover, she warns, judging institutions on graduate earnings could have detrimental unintended consequences: “We’ve already seen institutions strategically close programmes like social work because they tend to bring in students with lower qualifications who don’t get high salaries when they graduate. And yet we are also in a country with a social care crisis.”

Even Willetts thinks that the use of LEO data to determine which courses are worthy of student loan funding would be “problematic”. Writing in Times Higher Education last year, the now chair of the Resolution Foundation pointed to problems such as the data’s failure to distinguish between part-time and full-time work, or between different regions or employment sectors. Hence, “a university that provides nurse and teacher training will inevitably appear to perform less well than one focused on financial services and City law firms.”

Last month, Julia Buckingham, president of Universities UK, announced that the umbrella body has developed new indicators to assess the value of a university degree, in order to help the government “broaden its current narrow definition of success based on salaries alone” in ways that “better reflect what is important to students, parents, employers and society”. The measures include the proportion of a university’s graduates who go on to set up their own businesses or to work in essential public services or in sectors or regions with skills shortages.

It is not only in England that graduate outcomes are the focus of discussions about university quality. According to Anthony Carnevale, research professor and director of Georgetown University’s Center on Education and the Workforce, “the appetite for a quality measure is growing and growing” in the US, too.

Currently, quality is assured by a system that involves a number of accreditation bodies operating at the programme level, according to Alex McCormick, associate professor of educational leadership and policy studies at Indiana University Bloomington. States are also “major actors” in quality assurance, but they tend to focus on simpler outcome measures, McCormick says, such as degree production in particular fields, graduation rates and success rates for traditionally under-represented demographics. But a push to see quality through the lens of graduate employment and earnings is “emerging fairly fast and aggressively”, according to Carnevale.

One reason is that these factors are measurable. But another is that such measures are “what the public wants”, Carnevale says. “The first thing on the parents’ mind after paying out $40,000 a year is whether or not their kid is going to get a job,” he notes – although “if you ask the kids, their first concern is having a career – they are more likely to use that word – and then their next is that they want to study things that interest them”.

He laments that graduate earnings are highly dependent on each graduate’s socio-economic status, so measuring university quality on that basis would be “another nail in the coffin of race and class justice”. But while he acknowledges that it would be better to measure what knowledge or skills students have acquired at university, doing so is fraught with difficulty.

The US has the Graduate Record Examinations (GRE), which aims to do something along those lines. However, it is sat only by those seeking admission to graduate school. Moreover, its efficacy as a measure of graduate quality is far from undisputed. “The notion is that general education will give you a whole set of problem-solving, critical thinking skills – all the stuff that everyone talks about these days – that will make you adaptable to the labour market. [But] we don’t truly know how to measure that,” Carnevale says.

Imperial’s Kandiko Howson sees some promise in England’s recent National Mixed Methodology Learning Gain project, which focused on standardised tests. Although it ultimately concluded that a national cross-disciplinary measure of learning gain is not viable, Kandiko Howson thinks the project points to the viability of subject-based standards that take account of each discipline’s teaching and validation approaches. She also thinks that the complexity of what students gain from higher education demands multiple measures at multiple points in time, “as students go on trajectories that are non-linear”.

The Organisation for Economic Cooperation and Development launched its own learning gain project, known as the Assessment of Higher Education Learning Outcomes (Ahelo), in 2015. That was ultimately undermined by resistance from elite institutions unwilling to put their reputations on the line. But, for Andreas Schleicher, the OECD’s director for education and skills, “some form of validation of degrees is really important”. And one useful approach, he says, is to involve “the users of credentials, most notably employers”.

In Europe, the “Tuning” project set out in the early 2000s to identify the knowledge and skills necessary for university graduates to be employable at European level in seven fields: mathematics, geology, business, history, educational sciences, chemistry and physics. However, according to Imperial’s Kandiko Howson, it proved much easier to write descriptors of what a student should be doing than to measure whether they were actually doing it. “So a lot of these projects get as far as writing the descriptors but not designing the test to ensure students have the skills,” she says.

Greyhounds racing. Source: Getty

Gavin Moodie, adjunct professor in the department of leadership, higher, and adult education at the University of Toronto, believes that a lot of the concern around standards in the UK stems from the big differences, in terms of reputations and resources, between traditional and newer universities.

“I expect this tension to intensify until either England reduces the relatively steep stratification of its universities or discards the traditional expectation of [institutional] homogeneity,” he says. “The issue arises less sharply, if at all, in Canada, not because of its monitoring of degree standards but because its universities have more similar resources and status.”

However, James Côté, professor of sociology at the University of Western Ontario, says that there is also an issue of declining standards in Canada, particularly as community colleges have been allowed to become universities. But it is an issue that “people do not want to face head on”.

Côté is in favour of ensuring degree standards via common exams. In his 2010 book, Lowering Higher Education, he writes that university programme exit exams, modelled on the exams used by professional institutions, should be implemented in each academic discipline to ensure that a minimum level of proficiency is attained by students at different universities “ostensibly acquiring the same degree”. However, he believes that this idea is a non-starter in Canada’s current educational climate, in which people “naively believe in universal higher education”.

Yet it is not just in Canada that the idea of common exams has been raised. The idea of comparatively assessing institutions not so much on learning gain (which takes into account students’ starting points) as on the absolute standard of their graduates is something of an old chestnut in the UK, too. The idea is uncontroversial for secondary schools, of course, which teach to national curricula and sit common GCSE and A-level exams (albeit that the various exam boards set different papers). Yet the idea of national exams – and the direct comparability of graduates that they would establish – is not something that universities have ever been willing to countenance. In 2009, for instance, Janet Beer, then vice-chancellor of Oxford Brookes University, and John Hood, then vice-chancellor of the University of Oxford, were asked by a parliamentary select committee whether a 2:1 in history from their respective universities was worth the same. Neither leader was able to come up with an answer beyond Hood’s “it’s a different student experience”: a response that one MP suggested would not pass muster in “a GCSE essay”.

But if English universities are to be treated more like schools, and if the government is interested in finding a way to objectively distinguish between course outcomes without falling foul of the objections to using earnings data, might the idea’s time have come?

It certainly isn’t short of cautious support among educationalists. Gervas Huxley, a lecturer in the School of Economics, Finance and Management at the University of Bristol, is “sympathetic to the idea”, since employers “should be able to – but currently cannot – compare students from different universities”. It is also “particularly important that a hard-working bright student graduating from a low-ranked university should [be able to] compete on equal terms with her equally hard-working bright peer graduating from Oxford or the LSE”. Currently, institutional reputation is such a significant factor that “talented students graduating from low-prestige universities face barriers in the graduate labour market that mediocre graduates from the prestigious universities do not encounter”.

Ben Styles, head of the Education Trials Unit at the National Foundation for Educational Research (NFER), agrees that the current means of comparing UK graduates is “crying out for some improvement”. Employers “have a vague notion about Oxbridge and the Russell Group, then the ‘rest’, and it’s not very helpful. There could be courses within that rest that are really high performing compared to a course that has fallen by the wayside in a Russell Group university.” He also points out that schools benefit from the national exams system because it allows them to test the effectiveness of different teaching interventions. By contrast, it is very difficult to assess “whether one undergraduate programme is better than another, or whether any type of [university] teaching is better”. Adopting national standards would give university programme leads hard evidence on their performance, and potentially prompt them to conclude: “Well, actually, we’re not performing as well as we should. What can we do to improve that?”

Nick Hillman, director of the Higher Education Policy Institute, notes the argument that the reason the universities of Oxford and Cambridge devote so many resources to their tutorial teaching system is that their constituent colleges compete for the best results in university-wide subject exams. However, while he thinks there is “a strong case for exploring the idea [of national exams] in the early stages of a degree”, he is nervous about adopting the idea for final exams because it could lead to a homogenisation of university curricula that is “contrary to the idea of academic autonomy.

“Exams are based on the curricula and I think curricula should be living things, especially in the universities where knowledge is being pushed forward all the time,” he says. “It’s completely valid for a university to focus, let’s say in their economics degree, on one set of issues or one set of theories, and for another to focus on others.”

The issue of autonomy would also prevent any standardisation of university exams in the US, according to Indiana’s McCormick. “Most observers feel that autonomy has had a net benefit where quality is concerned,” he says, “so the notion of national degree standards as the likely impetus for national curricula would not go far.”

But the NFER¡¯s Styles suggests that a workable compromise might see students sit a core paper common to many universities while also being examined on various other topics unique to their particular institution.

“You could even do it in more subtle ways than that,” he suggests. “You could have common items – individual questions – that were peppered throughout papers. This could then be translated into a common paper score and used to standardise the whole degree outcome.” However, he admits that this would be easier in some subjects than in others.
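To make the arithmetic behind that suggestion concrete, here is a minimal sketch in Python, with invented universities, invented marks and a deliberately crude linear equating rule, none of which comes from NFER. It shows how performance on shared anchor questions could, in principle, be used to flag an institution whose awarded marks sit higher than the common items would predict.

```python
"""Toy illustration of the 'common items' idea described above.

Everything here is hypothetical: the data, the university names and the
linear equating rule are invented for illustration, not NFER methodology.
Each student record pairs a score on shared anchor questions with the overall
degree mark their university awarded; the anchor questions provide a
cross-institution baseline against which awarded marks can be sanity-checked.
"""
from statistics import mean

# Hypothetical per-student records: (anchor score out of 20, degree mark out of 100)
results = {
    "University A": [(16, 68), (18, 74), (12, 60), (15, 66)],
    "University B": [(9, 70), (11, 75), (8, 64), (10, 72)],
}

# Pool all students to estimate how degree marks track anchor scores nationally.
pooled = [record for cohort in results.values() for record in cohort]
anchor_mean = mean(a for a, _ in pooled)
mark_mean = mean(m for _, m in pooled)
slope = sum((a - anchor_mean) * (m - mark_mean) for a, m in pooled) / sum(
    (a - anchor_mean) ** 2 for a, _ in pooled
)

for uni, cohort in results.items():
    cohort_anchor = mean(a for a, _ in cohort)
    cohort_mark = mean(m for _, m in cohort)
    # Mark we would expect given this cohort's performance on the shared items.
    expected = mark_mean + slope * (cohort_anchor - anchor_mean)
    print(f"{uni}: awarded {cohort_mark:.1f}, anchor-implied {expected:.1f}, "
          f"gap {cohort_mark - expected:+.1f}")
```

On these made-up numbers, University B awards marks a couple of points above what its students’ performance on the shared questions implies, which is precisely the kind of discrepancy Mansfield goes on to describe.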

Mansfield – who, in a previous role as a civil servant, was the principal architect of the TEF – also believes that autonomy is “the real strength of the system” in the UK, but he is open to the idea of national comparisons on “core elements” of particular subjects. “You would have universities teach different options, but you might be looking at a core baseline to calibrate where the universities set their 2:1s, firsts, etc,” he suggests, speaking before his appointment as a government special adviser. “So, if at one university 95 per cent of people were passing this core test and at another university 65 per cent were, yet they were giving out the same proportion of 2:1s and firsts, you would ask: ‘Is that right?’”

If the conclusion was that it wasn’t right, “there could be a case for regulatory intervention. The big thing in all of this is there is no point bringing in a test if you are not going to be tough on interventions.” However, he still believes that standards would be better improved by focusing on shutting courses with high dropout rates or poor graduate employment outcomes.

Meanwhile, Bristol’s Huxley believes that universities could reasonably be asked to cede a little of their autonomy over curricula and exams to professional bodies or learned societies, such as the Royal Economic Society or the Royal Geographical Society. He points out that there are already certain subjects in which professional bodies have a “limited but meaningful” ability to monitor standards. An example is medicine, where, although exams are set by the individual universities, the General Medical Council has considerable power to oversee syllabi and some power to oversee how the material is examined.

However, all discussions of how to uphold degree standards inevitably come back to the question of what degrees are for. The problem, according to Natasha Jankowski, director of the National Institute for Learning Outcomes Assessment at the University of Illinois, is that educators don’t agree on what the answer is.

“It differs between employers, accreditors and even our faculty when we talk about it,” she says. Hence, courses are designed without a clear sense of what they aim to achieve: “There is a lot of ‘I’ll teach this course because I think the topic is interesting’, without first thinking how it fits into the larger ends we’re trying to get to. Students don’t have a measure or a profile to say: ‘This is what my degree is about: this is what I will know when I leave.’”

One solution, she says, is offered by the Valid Assessment of Learning in Undergraduate Education (Value) rubrics, launched by the Association of American Colleges and Universities in 2009. These aim to assess “whether and how well students are meeting graduation level achievement in learning outcomes that both employers and faculty consider essential”. The outcomes include critical thinking, written communication and ethical reasoning.

“They look at work, evaluate it with the rubrics, and judge whether it demonstrates those common values,” Jankowski explains. “It is actually really useful information, which is wonderful, but it is incredibly time-intensive and it costs a lot. If you want really meaningful data you have to put in the time, or you get something really general that isn’t helpful. You have to pick your trade-off. The search for one measure is hopeless.”

Imperial’s Kandiko Howson also accepts that any measure of degree standards has to involve multiple metrics – not least to make it difficult for universities to game. “But that’s the beauty of higher education: that we don’t just have a simple bottom line,” she concludes. “It’s a never-ending battle, but I think you can find a happy medium in the end.”

anna.mckie@timeshighereducation.com


Reader's comments (6)
Some kind of qualification in animal studies might have discouraged your correspondent from using images of greyhound racing to illustrate this article.
A really good overview of the current position on this important subject, with the conclusion being "There is more than one way of looking at / measuring this", plus the usual conclusion "More research is needed". Perhaps what is needed is more "agreement" on what is "appropriate" rather than "needed". It's funny how, internally for each subject they offer, universities are able to award grades from 3 to 1 but are not able to compare the quality of the same degree (be it history or maths) between universities.
If “the proof of the pudding is in the eating”, how good the pudding is at the finish should be measured by how many want to eat it and how much they are prepared to pay for the experience; it comes back to market worth and demand. As the article shows, there are drawbacks to this measure, but it is perhaps the only one ... the most arithmetically measurable. Basil Jide Fadipe.
Well, at least the guy with a degree in football studies was on the ball with a well-rounded teamwork-related qualification, pitched at the right point for meeting his goals at a grass-roots level http://fooddeserts.org/images/000SportsFootball.htm
The three major indicators of low-quality teaching in any context are: low-quality teaching, very little learning and understanding on the part of students, and very little or no applicability of course content to issues at the individual, group, community, national or international level/s, or any combination of these. At the university level there are lecturers rather than teachers, despite all the claims about teaching and teaching universities. This immediately weakens teaching quality. In those universities where students are really learning, it is usually because they have had some previous exposure to the subject matter of the course in their workplaces or by reading before starting their course of studies. In some instances, some lecturers may be explaining the subject matter properly. There are also instances where lecturers may give good real-life examples of concepts and ideas that constitute the essence of the course. Each of these dimensions must therefore be evaluated with a survey questionnaire administered to all students when they attend their first class in specific courses. These same questionnaires must be re-administered to the same students at the end of the course, before they write end-of-semester examinations. Responses to both questionnaires must be analyzed and compared with other forms of assessment given for the particular course. Teaching quality has nothing to do with the number of students passing the course or the number of students graduating from a specific program. Other important factors are course structure, that is, the extent to which successive course topics build on previous course topics, and whether or not course objectives are realistic and achievable. Real teachers will always remember the golden rule of teaching: if students, or any student, did not learn, then I did not teach. Real teachers will never blame students for not learning; they reflect on themselves and use a different teaching method on the next occasion.
“This has revealed that there is ‘gold’-standard teaching across the sector.” No, it hasn’t. Anyone who’s read more than a handful of TEF returns would know this isn’t true. In my experience no employer cares about grades – they look at the person, not the qualification. If they filter by grade, I’d advise my best graduates to steer well clear as it’s unlikely they’d be happy at such a place.