Gordon Kirk suggests a strategy for improving the effectiveness and credibility of the external examining system
In the recent report by the Higher Education Quality Council, Learning from Audit, the external examining system comes in for some sharp criticism. The experience of 69 quality audits of higher education institutions reveals serious shortcomings.
The report notes: "There are no generally agreed or used standards, criteria or procedures to nominate, select, and appoint external examiners across universities"; "briefings to external examiners on their role vary from comprehensive to nugatory"; the impact of external examiners on the operation of programmes within universities "often varies considerably"; and some reports are "uninformative and unhelpful for assessing many aspects of the programme of study".
Not surprisingly, HEQC has commissioned a review of the external examining system and a major consultative exercise has been initiated. This article is a contribution to that public discussion. It attempts to highlight the central features of the external examiner role in the contemporary higher education context and ventures some suggestions on how the system might be made a more effective and credible mechanism of quality control.
The external examining system is widely acknowledged to be the means by which the higher education system satisfies itself that the work of individual institutions is in line with United Kingdom standards. It thereby reassures students, funding councils, the academic community itself, as well as the wider public, that degrees and other awards of higher education institutions attest to comparable standards of academic and professional achievement. It is the key quality control mechanism in higher education, performing a clearly distinguishable function from that of quality assessment and quality audit.
The central assumption underlying the external examining system in the UK is that standards are best assessed through a process of peer review. Communities of academic specialists develop shared understandings about the conduct of an academic activity and the criteria to be used in its evaluation. The external examining system applies to the scrutiny of students' work the same standards and criteria that academic specialists bring to the analysis of each other's work.
The role of the external examiner is complicated in a system of higher education characterised by diversity. We need to establish whether diversity of mission entails diversity of standards. If it does, precisely which standards are being protected by the external examining system? Is it simply that no award is made for a level of performance that falls below an acceptable threshold, whatever the mission of the institution? For some, of course, that minimalist position involves a serious threat to the "highest" standards. But how are these to be defined? Arguably, the future of the external examining system depends on an agreed view of the standards attested by awards in a diverse system.
The external examining system has developed a second justification: it is seen as a device to ensure that an institution's assessment system is fair. There is a need here to distinguish between the assessment of individual students' work on the one hand and institutional assessment apparatus on the other. The first of these is essential if students are to be reassured that their work has been scrutinised fairly and impartially: it is protection against bias and other forms of discrimination. Clearly, the external examining system must continue to perform this key function.
However, it is doubtful whether the external examining system should be relied upon to approve the formal assessment system within an institution. That seems an appropriate function of academic audit. Currently, academic audit is concerned with the systems and procedures which institutions devise to assure the quality of their educational provision. Systems of assessment form part of that provision and it is reasonable to expect institutions to provide, for example, a code of assessment practice, publicly available statements on the assessment schedule, appeals procedures, provision for anonymous assessment of students' work, and for checks on consistency of assessment by multiple "blind" grading and other means. If institutions of higher education were not in the habit of making arrangements of this kind, the requirements of the charters for higher education ought to entail a reconsideration of practice.
Furthermore, the Secretary of State has asked HEQC, the body responsible for managing the quality audit system nationally, to include institutions' responses to the higher education charters in the audit process.
A further area where clarification of purpose and practice is necessary concerns the role of the external examiner as course consultant. Many external examiners see their report on academic standards and assessment procedures as only part of a more wide-ranging analysis of a degree programme encompassing student and staff reaction to the course, the resources required by a programme, and much else besides. In such cases, the external examiner is envisaged as an external consultant, commenting with impunity on all aspects of a programme.
The roles of external examiner and course consultant are best kept distinct. Inevitably, in commenting on the standards achieved by students, an external examiner may be drawn into the consideration of such issues as the scope of the curriculum, or the emphasis given to particular fields of enquiry in a discipline, and feedback of this kind is likely to constitute significant evidence in the review of a programme.
However, the external examiner's primary responsibility must rest with the standards achieved rather than the design of the programme. Certainly, there is no justification for expecting the external examiner to report to the institution on how the course is received by students or by staff or by relevant professional constituencies. Institutions must be expected to obtain these perspectives through their standard evaluation procedures and it is pointless to expect the external examiner to duplicate this work or to spend valuable time in the process.
However the role of the external examiner is redefined, the changes will do little to strengthen confidence in the system unless that system is managed more effectively. Consideration might be given to placing responsibility for the management of the external examining system in a single national agency. HEQC is well placed to perform that function. It could be given responsibility for establishing national agreement on the criteria for appointment as an external examiner, as well as for appointing external examiners to institutions, for there is much to be gained by having external examiners identified and appointed by an independent body.
It would also be reasonable, in an attempt to place external examining on a proper footing, to provide training opportunities or some other form of induction to support external examiners in coming to terms with the demands of the role. In addition, national standards might be set with regard to the sampling of students' work, the format of reporting and other matters, in the interests of generating a national code of practice governing external examining.
Measures of the kind described may go some way to remedying the serious weaknesses that have been identified in the external examining system. If the higher education community is reluctant to move towards even that degree of self-regulation, it will not be well placed to respond to the more radical suggestions for securing comparability of standards in higher education that are apparently under consideration in ministerial circles.
Gordon Kirk is principal of Moray House Institute of Education, Heriot-Watt University.