
It is too easy to falsely accuse a student of using AI: a cautionary tale

Based solely on a Turnitin report, Emily was condemned for using ChatGPT to write her essay. Except that she hadn’t, writes Daniel Sokol
July 10, 2023

Emily is an aspiring lawyer and a first-year undergraduate at a well-known London university. Like so many students, she suffers from anxiety and depression.

In April, after hours of reading and drafting, she submitted a 1,300-word essay about a film by Pedro Almodóvar, the great Spanish director. The following month, she logged on to the university system to check her mark. But there was none. Instead, her essay was flagged as “AI-generated”.

Puzzled, Emily immediately wrote to her course tutors asking for an explanation. She had heard of ChatGPT but never used it. The next day, the tutors replied that the matter would be “subject to investigation by a panel of examiners”.

As Emily waited for a verdict, her anxiety resurfaced with a vengeance. She became depressed and suffered panic attacks. Finally, in early June, an email from her department explained that Turnitin software indicated that 64 per cent of her essay had been generated by AI.

The email continued: “The markers consider this to be a serious offence and have awarded a mark of 0, with any reassessment capped at the pass mark. This offence has been recorded on your assessment record.”

Emily was distraught. The attached Turnitin report showed no similarities with other work, but she did not know how to challenge the verdict and she agonised about the impact it would have on her degree and her future career. Would she ever be able to work as a lawyer with this black mark on her record?

Her mother shared these concerns and contacted me, a lawyer with experience of representing students in university proceedings. We drafted a response arguing that it is contrary to natural justice (and the guidance of the Office of the Independent Adjudicator) to find students guilty without giving them an opportunity to defend themselves. We also asked for all the evidence against Emily, including the correct Turnitin report.

The department replied that “the penalty and note on your student record would only apply if you do not contest the allegation”. It also explained that “Turnitin have not yet launched their full AI product so we are unable to download the AI report”. Instead, screenshots of the matched text were attached, but only for part of the essay.

We thanked the department for clarifying that, contrary to its earlier statement, there had been no finding of any wrongdoing on Emily’s part, and we asked for the missing screenshots. We also ran Emily’s essay through three AI content detectors freely available online. One concluded that “this text is mainly written by a human”; another deemed Emily’s text “very unlikely” to be “AI-generated”; and a third stated straightforwardly that “this is human text”. However, these AI detectors are as opaque as Turnitin, and we were concerned that a university panel would dismiss them as untested.

We were also aware that many institutions, armed with a positive Turnitin report, place little weight on student denials. Notes and drafts of the essay are not determinative, a web browser’s history can be altered and a student could, besides, have accessed AI on a separate device. It is difficult to prove a negative.

We therefore approached Andrea Nini, a forensic linguistics expert at the University of Manchester who specialises in authorship disputes. His 15-page report concluded that the linguistic similarities between the samples of Emily’s writing that we provided and the allegedly AI-generated text were 178 times more likely if Emily had written the essay herself than if she had not. We submitted the report to the adjudication committee, along with a detailed statement and Emily’s essay notes.

The outcome arrived a few days later: “Given the evidence provided, the Committee has confirmed that concerns relating to the use of AI for your assessment will be dropped. To confirm, no notes have been added to your student record and no penalties have been applied to you.”


Emily’s parents funded her defence. As they were of limited means, the legal fees were reduced, but they nonetheless spent about £2,500 to support their daughter. Without that reduction, the full cost would have reached upwards of £4,000 had the case progressed to a hearing, including the expert’s report and his attendance at the hearing.

The university would almost certainly refuse to refund any of these costs, on the grounds that Emily – despite her anxious state – could have defended herself alone or with the assistance of the students’ union. In fact, she contacted the union at the outset but was denied help because the process had not yet reached the appeal stage. When she explained that the university had already put the offence on her record, she received no response.

Emily and her parents have asked us to share her story so other students wrongly accused of using AI could benefit from her experience. It is unrealistic to assume that students will know how to defend themselves effectively in such cases – meaning that, as things stand, those who can afford professional assistance are likely to achieve better outcomes than those who cannot.

To remedy this, universities must revisit their approach. They should ensure that their procedures are fair and that the evidence of wrongdoing is robust enough to justify launching proceedings that could change the lives of the accused.

Daniel Sokol is a former university lecturer and the lead barrister at Alpha Academic Appeals. He has represented both students and universities in litigation.

Reader's comments (1)
Thank you for an excellent article. Universities need to update their processes to prevent false accusations and to deal effectively and speedily with errors, such as the one described in this article, should they be made.