With the continuing development and accessibility of ChatGPT, the AI language engine, there has been understandable concern that this will undermine the process of essay writing and student learning. This reflects a debate that has been happening for several years in the field of English for academic purposes (EAP), which seeks to prepare and support students whose first language is not English in anglophone universities.
Online translation tools, such as Google Translate, have become spectacularly proficient in recent years at producing grammatically accurate output. Using them can allow students to appear to demonstrate a proficiency in the language that they cannot produce unassisted. This is especially relevant in areas where EAP plays a gatekeeping role, since students are able to misrepresent their own understanding and competence. But the situation is not as bleak as some might fear, and some of the lessons that the EAP community has learned are directly relevant to GPT3.
First of all, it is important for all involved to recognise the limits of the systems being used. Online translation has become impressive at writing at the level of the sentence, with consistent grammatical accuracy. However, for an academic writer, this is far from sufficient. If text that follows the rhetorical conventions of an overseas intellectual tradition – heavy with simile, say, or full of confidence markers – is fed into a translation engine, those features are retained in the output. Thus, features of anglosphere academic writing such as caution and impersonal language may be missing. Nor do translation engines help with the organisation or construction of argument. In effect, the technology only does part of the job.
When I played around with the GPT3 interface, I asked it to write an essay on avoiding plagiarism. The essay seemed informed, but it also missed some of the key features of academic writing. There were no citations or attributions. There was description but no analysis. Where you might expect a student to compare ideas, make connections or introduce drawbacks, the AI engine simply gave more description. It bounced along at the bottom of Bloom’s taxonomy, providing just-passable writing but without anything that could be defined as demonstrative of disciplined critical or analytical thought.
In EAP, we are now talking about “machine-translation literacies”. This involves an honest and open conversation with students about how the technology can help them, and where it cannot. It becomes one of a number of IT literacies that students develop in their time at university – and it would be fair to argue that AI literacies are also ripe for exploration.
Assessment is also affected. To me, it seems anachronistic to prepare students for an academic world where online translation does not exist. If we are preparing them to write essays and reports that can be supported by online translation, we should allow them to develop these competencies as part of the assessment process.
However, we should also be aware that there are frequent times at university when students need to produce language unassisted – from written exams to seminars to the lunch queue. Therefore, EAP assessment needs to make the distinction between supported and unsupported use. We need to assess students in contexts both where they can be assisted by the technology (such as coursework essays and presentations) and, crucially, where they cannot.
Similarly, we need to prepare students to thrive in a world where this AI exists, but we also need to ensure that they do not become dependent on it. Therefore, adaptations to assessment that recognise when they are and are not supported by AI are needed. Coursework essays can still have value, but they need to be supplemented by forms of assessment that cannot be enhanced by AI, such as assessed seminar discussions, critiques and reflections.
The final lesson that EAP has learned from this technology is that it is very hard to legislate against it. There is little consistent policy or regulation, and this has led to a patchwork of inconsistent regulations and advice that may differ from department to department or even within departments themselves. Moreover, whatever the regulations may be, how do we police them?
More to the point, should we regulate translation engine use at all? Research that colleagues and I conducted with students in China suggests that while some are simply using translation engines to avoid effort, many others are using them in a strategic and nuanced way. They are finding new ways to express complex ideas and checking that their output is well expressed.
The EAP profession, then, can help students exploit this emergent technology to reach their goals rather than try to limit and control its use. I see no reason why this approach should not be extended to the use of GPT3.
AI engines will only improve. As they do so, the potential for them to be misused by students grows. But, at the same time, the potential for this to support and enhance the development of students into global citizens is apparent. Adaptation is possible, but it needs careful consideration, realistic thinking and an understanding of what AI literacies can bring to the academic world.
In this way, we can prepare citizens who work with technology to enhance their intellectual skills rather than becoming diminished by their dependence on it.
Mike Groves is director of the Centre for Academic English Studies at the Surrey International Institute, a joint initiative between the University of Surrey and Dongbei University of Finance and Economics, China.