
ChatGPT reveals the uncomfortable truth about graduate skills

AI-written essays have exposed the unambitious and unoriginal thinking that universities require from undergraduates, says John Warren
February 17, 2023

The development of AI bots with the ability to write plagiarism-free undergraduate assignments has thrown many academics into panic mode and created a flood of articles on the topic. However, all this anxiety about ChatGPT seems to be misdirected.

The problem here is not detecting and proving that submitted work is not a student’s own. Neither is it designing bespoke assignments that cannot be carried out by artificial intelligence. The scandal that should be grabbing the headlines is the fact that for a generation we have been training our undergraduates to be nothing more than AI bots themselves; this is why it is not possible to tell their work apart.

For a long time, quality-assurance speak in higher education has been dominated by the language of Bloom’s taxonomy. The higher levels of learning associated with “graduateness” have been simplistically termed evaluation, synthesis and analysis and have been measured by assessing a student’s ability to “compare and contrast” or to “discuss the advantages of…” or “analyse the impact of…”

But none of this requires original thought. It is more a case of sifting through other people’s learning, ideas and thoughts. The student doesn’t even need to understand any of the information they are regurgitating any more than the AI bot does.

Moreover, this collating task is only getting easier as the internet makes an almost infinite amount of predigested information available. Students and bots alike can now give the impression of evaluating and analysing because virtually every piece of evaluation and analysis has already been carried out by multiple authors. All the task entails is locating the information (which is now trivial), paraphrasing it and perhaps regurgitating it under exam conditions.

To be fair, it was not always so easy. Historically, the ability to rationally collate and summarise information in an original form of words was a high-level skill. And, of course, students will always need to learn to synthesise and critique knowledge; we still teach basic maths even though we have easy access to calculators. But just as mental arithmetic is no longer a particularly marketable skill, neither will synthesis and critique be – particularly since, as with calculators, the machines, with their superior ability to wade through data, will be considerably better at it.

During my career in higher education, I have encountered this problem in all the institutions I have interacted with, in the UK and abroad. There are few occasions on which undergraduates are required to truly demonstrate understanding – and those remaining occasions often result in disappointment.

One such example is in the final paragraphs of final-year projects, when students are asked: “How might your own research be improved?” Sadly, the standard response to this question is: “I need to repeat the study with more observations.” Although this may sometimes be a valid answer, it typically misses the point because often there was, in reality, no effect of A on B; any relationship between them is one of correlation rather than causation. More measurements will not change that fact; they will simply result in greater confidence that A does not influence B. Yet many students seem to think that if only they had collected more data, A would miraculously have turned out to affect B.

Such cases illustrate that students who otherwise appear to have all the cognitive abilities expected of a graduate have gaping holes in their skill sets when they are required to solve a problem whose answer is not already online multiple times.

This is, of course, not the student’s fault. It is our fault as academics. Perhaps I should have at least raised the point at external examiners’ meetings. But critical friends have boundaries, and this debate needs to occur at a higher level.

Advancements in AI offer us an opportunity to stop focusing on teaching how to solve problems that have already been answered and put more emphasis on how to recognise and tackle those problems remaining. We should relish that opportunity, not run scared from it.

John Warren is emeritus vice-chancellor of the Papua New Guinea University of Natural Resources and Environment.

Reader's comments (4)
What is this author writing about? Graduates? Undergraduates? New? Old? I fail to see any logic here at all. Can any one help me?
Love it. Once you frame the issue this way and stop pushing the frustration of "but you absolutely must be able to critically analyse the literature," the real problems start to poke their noses out. It's the doing something with information that's important. The use of insight and intuition. If you can't think further than the information presented to put into a practical application, but our only metric of the alleged ability to do that is the assimilation and reorganisation of relevant information, are you really demonstrating problem solving skills? Or are you just showing what the bot could do?
The problem is that we have reduced everything to a checklist that has culminated in the introduction of learning outcomes. Quite a bit of an undergraduate degree is now seen to be about competencies rather than understanding. None of this existed when I was a student but we are then talking about another age. In the UK, the whole system has become a conveyor belt starting at school with a national curriculum (something we laughed about as existing in other countries when I was a schoolboy) and carrying on with a defined list of things that have to be achieved for a certain mark. I am not saying that marks should be arbitrarily decided but we have gone too far the other way.
We're not allowed to ask our students for ambitious or original thinking. Such thinking is harder, which leads to poor student feedback. This is the customer-centred education that we have embraced.