When the UK government talks – as it often does – about its desire to crack down on what it terms “low quality courses”, it is really talking about courses with low graduate earnings. But a recent article in Times Higher Education suggested that a better alternative might be to introduce some form of national university curriculum, which would allow exam results to be used as a proxy for course quality, as in schools.
That would, in theory, allow us finally to rank all institutions and courses from good to bad, in the same way as we rank secondary schools. The Office for Students could then – if it so wished – simply regulate standards by defunding anything that fell below the arbitrary quality line.
In essence, those who would have a national curriculum for higher learning believe that providers cannot be trusted to gauge the quality of their output – indeed, that they are not even capable of knowing what a qualification in any given subject should consist of. They point to the rising proportion of degrees awarded a first or 2:1 and conclude that university staff are responding to market pressure to massage their figures.
They also believe that officials appointed by the market regulator (and hopefully advised by subject specialists) are best placed to determine what goes into curricula and assessments.
There are two main standards processes currently used in UK higher education, both of which already draw on subject expertise. The first is peer review, via the system of external examining. The other, used for some vocational qualifications – law, medicine, engineering and some of the physical sciences – relies on competences and standards set by professional bodies, which form part of the content of the course. These bodies set minimum criteria for a validated course and often require external examinations to be passed before professional accreditation is awarded to the graduate.
Neither of these processes is perfect. Professional body recognition, for instance, can be a barrier to innovation and can be used to protect existing accredited providers from competition. It can also act to limit the number of professionals licensed to practise, maintaining high salaries but often at the expense of societal need, such as for medical practitioners during a pandemic. Meanwhile, external review can be perfunctory, and relationships between reviewers and the teams that host them can become cosy. However, as with peer review of research, it does at least satisfy the need to check that assessment processes and the actual marks given to students are broadly similar across the sector, on the premise that it is in nobody’s interest for the standard of degrees or degree grades to be unequal.
The current arrangements are preferable because higher education differs in so many ways from secondary education. Higher-level study is infinitely specialised; you need only peruse the Ucas website to see how many different approaches there are to any given subject, and those approaches are informed by developments in both knowledge and current affairs.
Take the coronavirus. Numerous subjects will soon see significant churn in their curricula as we reach new understandings of everything from epidemiology and mental health to the sociology of place and space. It would be facile at best to go through the time-consuming process of establishing generic “bullet point” lists of approved content that must appear in every syllabus. Haven’t we got better things to do?
Even without the pandemic, a national curriculum makes no educational sense. While there is always a set “canon” of established notions in any given field, behind each of them lie myriad ways of studying the phenomena concerned. The First World War, for example, can be studied from economic, diplomatic, theoretical and social history perspectives. Different national and ethnic perspectives can shed important light on questions as disparate as why the war started and why one side “won” and the other “lost”. New historical perspectives, often based on recently available archives or caches of letters, regularly lead to revisions of the long-accepted position. The main lesson undergraduate history provides is critical thinking, and you can’t demonstrate that with a list of facts.
So what is really going on here? We all know that the actual reason graduates of some courses do better than others is that employers are less likely to recruit from universities outside the pre-existing “elite”. Moreover, there are large regional and sectoral factors that affect graduate employment outcomes, regardless of course quality. That is why data drawn from graduates’ tax records is such a bad metric of quality.
But a national curriculum would not make things any better. Layering on another set of standards, one that would have to be constantly updated and that takes no account of the complexity of the real world, would add nothing to our understanding of the value of higher education, however that is defined.
Colin McCaig is professor of higher education policy at Sheffield Hallam University.