Academic misconduct offences involving generative artificial intelligence (AI) have soared at many leading UK universities, with some institutions recording up to a fifteenfold increase in suspected cases of cheating.
New figures obtained by Times Higher Education indicate that suspected cases of students illicitly using ChatGPT and other AI-assisted technologies in assessments skyrocketed in the past academic year, while the number of penalties – from written warnings and grade reductions to the refusal of credits and failure of entire modules – has also increased dramatically.
At the University of Sheffield, there were 92 cases of suspected AI-related misconduct in 2023-24, for which 79 students were issued penalties, compared with just six suspected cases and six penalties in 2022-23, the year in which ChatGPT was launched. At Queen Mary, University of London, there were 89 suspected cases of AI cheating in 2023-24 – all of which led to penalties – compared with 10 suspected cases and nine penalties in the prior 12 months.
At the University of Glasgow, there were 130 suspected cases of AI cheating in 2023-24, with 78 penalties imposed so far and further investigations pending, compared with 36 suspected cases and 26 penalties in 2022-23.
THE’s data, obtained via Freedom of Information requests to all 24 Russell Group members, could also raise questions about the inconsistent approach of UK universities to implementing and enforcing AI-related misconduct rules, with some universities reporting only a handful of misconduct cases or claiming to have seen no suspected cheating at all.
LSE said it had recorded 20 suspected cases of AI-related misconduct in 2023-24, and did not yet have data for penalties, compared with fewer than five suspected cases in 2022-23. Meanwhile, Queen’s University Belfast said there were “zero cases of suspected misconduct involving generative AI reported by university staff in both 2022-23 and 2023-24”.
Other institutions, such as the University of Southampton, said they did not record cases of suspected misconduct and, where misconduct was proven, did not identify specific cases involving AI. The universities of Birmingham and Exeter, as well as Imperial College London, took similar approaches, while the universities of Cardiff and Warwick said misconduct cases were handled at department or school level, so it would be too onerous to collate the data centrally.
Thomas Lancaster, an academic integrity expert based at Imperial, where he is senior teaching fellow in computing, said the sector’s “patchy record-keeping relating to academic misconduct is nothing new” but “it is disappointing that more universities are not tracking this information [given] the ease with which GenAI access is now available to students”.
“But university policies regarding GenAI use are so varied and many universities have changed their approach during the past year or two,” continued Dr Lancaster, adding that “defining and detecting misuse of GenAI is also difficult”.
“I am concerned where universities have no records of cases at all. That does not mean there are no academic integrity breaches,” he added.
However, Michael Veale, associate professor in digital rights and regulation at UCL, said it was understandable that there was not a consistent approach, given the difficulty of calling out AI offences.
“If everything did go centrally to be resolved, and processes were overly centralised and homogenised, you’d also probably find it’d be even harder to report academic misconduct and have it dealt with. For example, it’s very hard to find colleagues with the time to sit on panels or adjudicate on complex cases, particularly when they may need area expertise to judge appropriately,” said Dr Veale.
Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, who has spoken out about the growing use of generative AI by students, said he was also sympathetic to institutions and staff grappling with generative AI misconduct because it is “generally not at all provable, even where somewhat detectable”.
“You might expect to see more strategic thinking and data-gathering happening around AI in assessment, but the reality is that this has all happened super quickly in terms of university rhythms of curriculum development and assessment,” said Professor Fitzgerald.
Instead, responsibility for the rise of AI-assisted cheating should lie with “the totally irresponsible and heedless ways these technologies have been released by the tech sector,” he said.
“The reality is that this is a whole different scale of problem than what typical plagiarism policies or procedures were meant to deal with – imagining you’re going to solve this via a plagiarism route, rather than a whole-scale rethinking of assessment, and certainly without a total rethinking of the usual take-home essay, is misguided,” said Professor Fitzgerald.
Noting that Ireland was now developing national policy for AI use in higher education, Professor Fitzgerald said there was a need for “strong regulatory and legislative attention from national governments”.
“It’s just not reasonable to expect universities, and especially totally overburdened teaching and policy support staff, to resolve this alone, institution by institution, department by department.”