Deans at UK universities follow National Student Survey results much as investment managers follow the stock market. After all, the NSS not only affects universities’ share of the student market, but it can also result in an intervention from the Office for Students when it deems that standards – as it understands them – are unacceptable.
Universities set benchmarks for NSS scores within disciplines and, when there is a dip below the benchmarks, deans develop an intervention to raise the score the following year. Yet, disregarding the sampling error, they sometimes rely on a mere handful of comments left by students in the free response section of the survey, typically focusing on negative views.
The comments are assumed to be representative of something causal, without any further reflection or evidence-gathering. Evidence is used, but only the NSS score itself, set against whatever intuitive intervention the dean has developed – and, more precisely, to motivate the intervention rather than to test it properly.
Those intuitions are born from conversations with key people in a faculty, but deans do not inspect the statistical structure of the data, nor do they formally link the intervention to the outcome via a causal model. The next NSS result, derived from a new population of students, is the only marker of the intervention’s success or failure. This is in blatant defiance of the kinds of practice academics employ in their own research.
When they apply their own skills to NSS analysis, academics can expose assumptions and develop models that do not support the proposed interventions. Deans rarely thank them for it. Their responses vary from ignoring an analysis completely to challenging its legitimacy or even asserting implausibly that it supports the original plan.
Why would well-educated deans proceed so irrationally? The answer is that their rationality falls away when they are inducted into the mysteries of senior management. A different register is inculcated, with a technical vocabulary of terms such as “student experience”, “student satisfaction”, “authentic assessment”, “employability” and “communities of practice”. Even in meetings with rank-and-file academics, such terms are incanted, undefined and unexamined. This incantation is often named a “strategy” but, more accurately, it is a “belief map”: a credo to cement senior managers as a group and to create a unity of purpose.
Those academics with designs on attaining such a rank will willingly adopt the concepts and embrace the map, even though they too know that it is irrational. As in religious movements, senior management uses extraordinary beliefs as commitment devices.
The orthodoxy is presided over by the university executive and the various mission groups, which represent sect-level differences in belief. The relative positions of institutions are deemed important, and where you begin to see differences in register and concepts between universities, you begin to see the fault lines that form sects. The activities and actions of deans, executives and mission groups are mostly internal to them, sealed off from the outside world: not in a secretive manner, but functionally. There is no method to connect them to the practices of academic colleagues.
In principle, deans should welcome diversity of thought because it will only improve the academic product. But the development of a closed language game around the NSS and related university activities has narrowed the world lived in by deans. Their purpose is to coordinate people to do their bidding in relation to arbitrarily understood performance indicators that affect their institutions’ league table positions and their own salaries.
Societies close when risks proliferate and there is insufficient time to develop specialisms to manage them. Deans have many pressures and must be seen to act – perhaps on a timescale that precludes better analysis. But the transition to credo can be countered. Just as universities have acted as a public good to buffer society against stochastic risk, so faculties can develop similar structures to buffer universities.
One solution to NSS performance is to develop academic research capability in this area, at faculty level, and hand policy generation to such dedicated groups. The items on the NSS all positively correlate with one another, making the scale hard to interpret at a level suitable for intervention. However, techniques such as exploratory factor analysis could be deployed to uncover latent variables. This would help to clarify thinking around possible causal accounts, but it would not confirm one. Covariance structures will not always remain stable across disciplinary cohorts, institutions or years.
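As an illustration of what such a faculty-level group might do, the sketch below fits a simple factor model to respondent-level NSS item scores. It is a minimal sketch only: the file name, column layout and the choice of two factors are assumptions made for the example, not features of the data deans actually receive.

```python
# Minimal sketch: exploratory factor analysis of NSS item responses.
# Assumes anonymised respondent-level item scores in a CSV whose name and
# columns are hypothetical (one row per respondent, one column per item).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

items = pd.read_csv("nss_item_responses.csv")

# Standardise items so loadings are comparable across questions.
X = StandardScaler().fit_transform(items)

# Two latent factors is illustrative, not a claim about the survey's
# true dimensionality; in practice the number would be chosen by inspection.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings show how strongly each item is associated with each latent factor.
loadings = pd.DataFrame(
    fa.components_.T,
    index=items.columns,
    columns=["factor_1", "factor_2"],
)
print(loadings.round(2))
```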
Given this, different research strategies could be explored, in which student satisfaction is assessed through alternative means and, perhaps, manipulated in properly designed experiments. Or other data could be added to the analysis to increase predictive leverage – ideally individual rather than cohort-level data. Simulations of the NSS could also be run within modelled populations to support this work.
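To make the earlier point about sampling error concrete, the toy simulation below generates ten years of scores for a single NSS item from a cohort whose underlying satisfaction never changes. The cohort size and agreement rate are assumed values chosen purely for illustration.

```python
# Toy simulation: year-to-year variation in a cohort-level NSS score when
# the underlying satisfaction is constant (all figures are illustrative).
import numpy as np

rng = np.random.default_rng(0)
true_agreement = 0.80   # fixed underlying probability of agreeing with an item
cohort_size = 60        # a plausible discipline-level respondent count
years = 10

# Each year's published score is simply the observed proportion agreeing.
observed = rng.binomial(cohort_size, true_agreement, size=years) / cohort_size
print(np.round(observed, 2))
# Scores can swing by several percentage points through sampling alone,
# which is easily mistaken for the effect (or failure) of an intervention.
```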
None of these ideas are novel. They just require deans to reconnect with their academic training and direct their energies towards producing fallible, epistemically virtuous models and interventions that actually work.
The author is a UK academic.