Are UK degree standards comparable?

Without a system for sharing assessment practice, it's nonsense to assume that a 2:1 is the same everywhere, says Chris Rust
November 13, 2014

The QAA admitted in 2007 that 'it cannot be assumed that similar standards have been achieved'. Amazingly, this received little public attention

"Is a 2:1 in history at Oxford Brookes worth the same as a 2:1 in history at Oxford?"

Five years ago, this question was posed by a parliamentary select committee to the vice-chancellors of both of those universities. Their rambling and convoluted responses were considered so unsatisfactory by MPs on the Innovation, Universities, Science and Skills Committee – which was conducting investigations for its report Students and Universities – that they were accused of "obfuscation", and of giving an answer that "would not pass a GCSE essay". And the committee's final report included the damning conclusion: "It is unacceptable for the sector to be in receipt of departmental spending of £15 billion but be unable to answer a straightforward question about the relative standards of the degrees of students, which the taxpayer has paid for."

The correct answer to the committee's question was, in fact, a very simple one: we just don't know. We do not have the necessary systems in place to tell us. The traditional reliance on the external examiner system to mediate standards within the system is misplaced, as a number of studies have shown. However experienced an individual examiner may be, their experience across the sector can only be limited and they have no opportunity to calibrate their standards within their disciplinary community. This was emphatically recognised by the Higher Education Academy's 2012 document, A Handbook for External Examining: "The idea that a single external examiner could make a comparative judgment on the national, and indeed international, standard of a programme has always been flawed."

The naive outsider might think that assuring comparability of standards is surely the role of the Quality Assurance Agency for Higher Education – the independent body set up to monitor standards across the UK sector – and something addressed as a matter of course within its institutional review processes. But in 2007, two years before the select committee hearing, the QAA made the brave public admission, in a Quality Matters briefing paper, that: "Focusing on the fairness of present degree classification arrangements and the extent to which they enable students' performance to be classified consistently within institutions and from institution to institution … The class of an honours degree awarded … does not only reflect the academic achievements of that student. It also reflects the marking practices inherent in the subject or subjects studied, and the rule or rules authorised by that institution for determining the classification of an honours degree."

In other words, local and contextual assessment practices make it impossible to make objective comparisons. This should not, in fact, have come as a surprise: certainly not to anyone up to date with the research literature. For at least the previous 10 years, especially through the work of the Student Assessment and Classification Working Group, an informal body of academics and administrators who share an interest in assessment, a series of papers and studies had demonstrated the distorting effects of central university systems that treat all marks the same regardless of the nature of the assessment task or the subject discipline.

It had been shown, for instance, that students consistently score better on coursework tasks than in examinations, and better in the more numerate disciplines than in the arts and humanities or social sciences. Research had also shown that, given exactly the same set of assessment results, students at different institutions could end up with awards that vary by up to a degree classification simply because of the idiosyncrasies of the different institutions' algorithms.
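
To see how classification algorithms alone can move a result across a boundary, here is a minimal, purely hypothetical sketch in Python. Neither rule belongs to any real university: the thresholds, the "preponderance" condition and the module marks are all invented for illustration.

```python
# Hypothetical example: the same set of module marks is classified
# differently by two invented institutional algorithms.

def classify_by_mean(marks):
    """Institution A (hypothetical): straight credit-weighted mean."""
    total_credits = sum(credits for _, credits in marks)
    mean = sum(mark * credits for mark, credits in marks) / total_credits
    if mean >= 70:
        return "First"
    if mean >= 60:
        return "2:1"
    if mean >= 50:
        return "2:2"
    return "Third"

def classify_by_profile(marks):
    """Institution B (hypothetical): a 'preponderance' rule - award a First
    if at least half the credits are at 70+ and the mean is 65 or above;
    otherwise fall back to the mean rule."""
    total_credits = sum(credits for _, credits in marks)
    first_credits = sum(credits for mark, credits in marks if mark >= 70)
    mean = sum(mark * credits for mark, credits in marks) / total_credits
    if first_credits * 2 >= total_credits and mean >= 65:
        return "First"
    return classify_by_mean(marks)

# Six 20-credit modules with identical marks submitted to both institutions.
marks = [(72, 20), (74, 20), (71, 20), (58, 20), (62, 20), (65, 20)]

print(classify_by_mean(marks))     # Institution A: "2:1" (mean is 67.0)
print(classify_by_profile(marks))  # Institution B: "First"
```

Real institutional rules (discounting the weakest credits, borderline zones, year weightings) differ in far more ways than this toy pair, which is precisely the point made in the research literature.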

Much of this had also been reflected in reports produced in 2004 and 2007 by a government-sponsored Universities UK working group chaired by Sir Bob Burgess, then vice-chancellor of the University of Leicester, that examined how student achievement should be measured. But sadly, the major recommendation of these reports – the introduction of the Higher Education Achievement Report transcript, providing a more detailed account of what students have achieved during their studies – is hardly the solution. Nor will moving to a US-style grade point average system (currently being piloted at a group of universities in concert with the Higher Education Academy) do anything, on its own, to bring about greater comparability of standards.

The QAA's 2007 paper explicitly spelled out what all this variation in local and contextual factors meant in terms of comparability of standards across the sector: that it "cannot be assumed that similar standards have been achieved" by students graduating with the same degree classification from different institutions, the same classification in different subjects from a particular institution or the same classification in the same subject from different institutions. Amazingly, however, this startling honesty received relatively little public attention and no obvious action was taken, either by the QAA or government, to address this major shortcoming.

And when the problem was again highlighted by the select committee in 2009, it was greeted with a rather muted and defensive (some might even say complacent) response, as if the respondents actually resented being challenged. The director-general of the Russell Group, Wendy Piatt, said in response to the committee's critical report that she was "rather dismayed and surprised by this outburst", while the government was "disappointed that the committee has not reflected in its report the very strong and positive evidence about the UK higher education sector which was given during the inquiry". So the prospects of any action being taken were already looking scant before the 2010 general election brought a change of government and ensured the issue would be largely forgotten by politicians – if not by the press (and by The Daily Telegraph in particular, which has continued to regularly raise the question of degree standards, especially in relation to grade inflation).

It should be acknowledged that, since 2009, the QAA has been developing a UK Quality Code for Higher Education, which is much more demanding in its expectations of providers and in the lengthy lists of indicators that reviewers are required to look for in attempting to establish that "threshold standards" are met. But at this year's QAA conference I heard serious doubts expressed over whether the still predominantly audit-style approach to review would provide sufficient appropriate data to make reliable judgements against many of the indicators. And even if it did, the judgements are still focused on an individual institution in isolation; the QAA does not appear to have given any consideration to how the indicators could be used to make comparisons between different institutions.

Yet it is not as if we don't know what we would have to do to address comparable standards. In fact, we have known for some time. Back in 1997, the Higher Education Quality Council, the forerunner of the QAA, recognised, in a document called Graduate Standards Programme: Assessment in Higher Education and the Role of "Graduateness", that "consistent assessment decisions among assessors are the product of interactions over time, the internalisation of exemplars, and of inclusive networks. Written instructions, mark schemes and criteria, even when used with scrupulous care, cannot substitute for these."

And it recommended that subject groups and professional networks should encourage the building of "common understandings and approaches among academic peer groups" – by maintaining "expert" panels for validation, accreditation, external examining and assessing, for example. It also called for "mechanisms to monitor changes in standards at other educational or occupational levels [as well as] internationally". But when the QAA took over the council's functions in 1997, these excellent recommendations were apparently lost or forgotten.

A decade later, in 2008, Paul Ramsden, who was then chief executive of the HEA, tried to resurrect the thrust of what the council had proposed. In a report on university teaching submitted to John Denham, the Secretary of State for Innovation, Universities and Skills at the time, he called for "colleges of peers" to be set up to help establish common standards. As I argue in Higher Education in the UK and the US: Converging University Models in a Global Academic World? (2014), these groups of academics would work by "looking at real examples of student work, and discussing each other's assessment decisions. Without the cultivation of such communities of assessment practice, discussions about standards can only be limited to conjecture and opinion." But, once again, the call fell on deaf ears.

It doesn't have to be like this. Australia, for example, seems to be taking the issue of comparability of standards very seriously. Commissioned by the Australian government in 2009-10, the Australian Learning and Teaching Council's Learning and Teaching Academic Standards project sought to establish national standards, starting with six broad discipline groups.

The discipline of accounting, further funded by a partnership between the professional accounting bodies and the Australian Business Deans Council, decided to continue to use a "cultivated community approach" in establishing shared meanings of their standards. A follow-on project in 2011, Achievement Matters: External Peer Review of Accounting Learning Standards, brought together subject reviewers from 10 universities, along with a number of professional accountants. Independently, they sampled student work and submitted their judgement regarding which students met a benchmark standard. Consensus was then achieved through small and whole group discussion of the samples and checked by participants individually reviewing two new samples. In addition, reviewers considered the ability of the assessment task itself to allow students to demonstrate their attainment of the standards.

The academic participants also submitted assessment data for their own degrees so that, immediately following the workshop, two external, experienced academics double-blind peer reviewed the validity of the assessment task (the extent to which it measures what it was designed to measure) and a small random sample of actual student work, with individual results returned only to each participating university. Participating universities could use the results to satisfy external agencies about their standards and, more importantly, to improve their learning and assessment processes to ensure that students achieved the requisite standards.

This "cultivated community" approach to setting discipline standards has also been extended into other disciplines aligned with business and accounting, and plans are afoot to continue it beyond this year's scheduled end of the project. It is also due to be discussed this week at the first national conference of Australia's newly established Peer Review of Assessment Network.

Why is it that the issue of standards is being seriously, and apparently successfully, addressed in Australia, while, despite all the evidence of a problem, the UK government, funding councils, UUK and the QAA are all still dragging their feet? Last month it emerged that quality assurance was being put out to tender ("Watchdog 'no match' for a sector in flux", News, 9 October), yet it seems highly unlikely that any of the bodies that might win the contract will address this issue any more seriously.

Simple inertia is one possible explanation. Another somewhat more sinister (and plausible) one is that for some – maybe all – in the sector, it is simply not in their interest to establish transparent relative standards. The government has a vested interest, especially when it comes to the lucrative overseas student market, in rejecting anything that might bring the standards of UK higher education into question.

The Russell Group, which is happy to make general, rather empty, sweeping statements such as "the world class reputation of Russell Group universities depends on maintaining excellence", benefits from sustaining the unsupported but commonly held belief among employers, parents and students that a 2:1 from one of its members is better than a 2:1 from others. Even institutions lower down the league tables, with more diverse intakes and greater numbers of less academically qualified entrants, arguably benefit from the status quo: if a rigorous system were developed that could establish common standards across the sector, they might have to accept going for years in some subjects without any of their students getting a first – with all the negative consequences for their reputation and recruitment that implies.

But this conspiracy of silence surely can't go on. As ever greater numbers of £9,000 fee-paying undergraduates come out of ever larger numbers of universities with first-class degrees, it won't only be The Daily Telegraph asking ever more loudly what those certificates are really worth. Won't students, parents and employers also start to question their value? Or can it be that no one really does care, or that no one cares enough?

Reader's comments (6)
The QAA certainly can't do the job (as usual) because it can't judge the one thing that matters above all others: the quality of the content of degrees. It is pie in the sky to imagine that any regulator or code of practice can achieve the aim of making all degrees equivalent. All it would do is create yet another set of ineffective box-tickers.
Yes, making comparisons on a macro scale (national/global) is evidently preposterous. However, in the wake of the select committee, I was involved in the QAA/UUK review of external examining, where this question of comparability and standards came up again, but this time in the context of a particular discipline. This is from the report of that research and work from that period: "Nevertheless, this unease is a result, in part, of the seemingly conflicting duties of an External Examiner to offer both an overview of the application of assessment practice locally, and consider that practice within broader, national perspectives. Given that, as previously stated, the 'focus group considered subject benchmarks of no considerable value in judging the standard of the award', we felt that it was difficult to define (and consequently apply) national standards outside of our participation in the various communities of practice that work together to evolve the definition of 'good practice' within the discipline. A seemingly in-built assumption within the External Examining system, rendered visible by the types of question articulated on the various permutations of the External Examiner's statement form, is that all Institutions operate on something of a 'level playing field', or perhaps a series of differential 'level playing fields'. We did not think this assumption tenable, given the variation in levels of provision, cohort size, student constituencies, and other factors, across the subject."
UK's socialist bent: lol
How right Chris Rust is to say that 'nor will moving to a US-style grade point average system (currently being piloted at a group of universities in concert with the Higher Education Academy) do anything, on its own, to bring about greater comparability of standards'. However, if a common GPA system were adopted across the sector, one of the current 'distorting effects' he identifies in his timely and cogent piece - 'the idiosyncrasies of the different institutions' algorithms' - would be eliminated. Harvey Woolf
Hi Robert. We're glad you're out of here too. Hugs. Incy
A timely article; however, 'be careful what you wish for' was our first reaction. Second, we are minded to offer an alternative view on the merits of the QAA approach, and we were curious to revisit the Oxford v Oxford Brookes episode. Expanding on each in turn:

This article clearly offers a valuable contribution to the key debate on quality assurance in the UK higher education (HE) sector. However, whilst we are not experts in standards, our work as market strategists specialising in assessment across education does lead us to some different conclusions, particularly in relation to the customer and market implications. Directly 'upstream' of HE, for example, general qualifications such as A levels illustrate the potential downsides of comparable 'standards' as a dominant organising principle. Many, such as Alison Wolf, have questioned the feasibility: "We currently maintain the polite fiction that all A levels are equivalent (and so a given grade gets the same UCAS points whatever the subject). No-one believes this, but it doesn't matter because offers for most degrees are tied to specific grades in specific subjects" (Wolf, A. (2003) An English Baccalaureate. Exactly what do we want it to do? Oxford Magazine, Eighth week, Trinity Term).

However, despite this, comparability of standards across subjects, time and institutions clearly remains a beguiling prospect for some, and with A levels the system can be argued to have painted itself into a corner as a result. Whilst the IUSS Select Committee may have expressed frustrations over comparability of standards in higher education, the Education Committee's work around A levels has seen opposing concerns surfacing – 'exam tail wagging the education dog', 'exam factories', 'teaching to the test' and so on – largely without compensatory changes in perceptions about A-level grade inflation. Current issues range from an inability to incorporate practical assessment into student grading, a regulatory order to mitigate 'harsher' grading of Modern Foreign Languages (MFL) and a continuing upward trend in re-marking volumes to, most significantly perhaps, no published plan for digital assessments.

In contrast, it is in the context of digital developments that the more liberal approach of the QAA appears to us to be paying dividends. At a recent conference on online learning and MOOCs, for example, it was clear that many institutions had valued and taken significant advantage of the opportunity to trial and learn fresh approaches, largely unencumbered by standards concerns in the near term. Online developments are indicative of a broader change taking place across the full spectrum of education, and in our view there are a range of fundamental questions to be asked about the nature of 'quality' in the new era before fingers are pointed at particular bodies.

It seems to us in particular that consumers and users of qualifications are an essential part of the mix but somewhat under-represented, both in terms of their needs and also what they can contribute to the system. For example, leading graduate employers would be more than capable of differentiating between an Oxford 2:1 and an Oxford Brookes 2:1, both in terms of their perceptions and their own assessment approaches. At the IUSS Select Committee, when the respective vice-chancellors were asked whether upper seconds in history from their respective universities were equivalent, clearly smart people were caught out in the moment.
Given the rapidly changing market scenario and the impending review of quality assurance, it would be unforgivable if serious thought were not now given to the question of quality and value, and how the value proposition for HE is best articulated. Perhaps on another occasion their answer might have been that what is clearly important is continuing to deliver what key stakeholders value, as well as investing in a sustainable platform for the future? 2:1s being the same is not the answer, but neither, probably, is them being too far apart. Undoubtedly there is a need for a balanced approach. Failure to do this could mean standardised HE coming to an institution close to you sooner than you think. Within general qualifications, the system's pressure valve for 'standards' means each year many thousands of students fail to make the grade. Be careful what you wish for!