Enough with the doomsday whining about ChatGPT. Enough with the Stone Age “do it with pen and paper”, and enough with the techno-panoptic “we’re watching you as you type your paper” creepiness. Let’s get real.
For all the whiz-bang amazingness of ChatGPT, let’s be really clear: LLMs (large language models) and “generative AI” are tools, just like Excel spreadsheets, MRI scanners and walking canes. They help humans do specific tasks. It just so happens that we feel comfortable with some tools, even if, at first, they seemed pretty darn frightening.
I am not suggesting that ChatGPT (which I’ll use as a proxy and stand-in for all similar and forthcoming AI/LLM technologies) is no big deal for issues of teaching and learning in higher education. It is. In fact, the way I see it, we’re doomed. But at least let’s be clear about why we’re doomed. We have to be able to name the problem before we can begin to fix it. And then maybe, just maybe, we won’t be quite as doomed.
ChatGPT is a “stochastic parrot”. It uses a massive amount of real-world data to recombine specific snippets of information into a coherent linguistic response. This has nothing to do with sentience, intelligence or soul. As the researchers who came up with this apt phrase note, language models “are not performing natural language understanding (NLU), and only have success in tasks that can be approached by manipulating linguistic form”.
The key here is “form”. Real parrots sing and talk and curse. They do so by mimicking forms of human action (which we have taught them), and we in turn ascribe meaning to such forms of action.
But to be clear, it is the parrot’s mimicry of human forms of action that causes us to ascribe meaning to such actions. Similarly, ChatGPT mimics human forms of action, namely, almost instantaneously producing seemingly coherent and logical written text. But this is just a form of mimicry. The researchers state this clearly: “No actual language understanding is taking place in LM-driven approaches…languages are systems of signs, i.e. pairings of form and meaning. But the training data for LMs is only form; they do not have access to meaning.”
So what does this mean?
First, it means that ChatGPT’s output (the form) could easily pass my class. In fact, it did pass! I had 60 students in my Introduction to Education course last semester, so I plugged the basic prompts of their final assignments into ChatGPT and did a quick comparison. ChatGPT’s responses were better than those from 80 per cent of my students. I’d probably give it an A-minus because the answers were clear, concise and coherent.
But second – and this is the key – passing my class with ChatGPT means nothing because it was just mimicry, with no meaning. ChatGPT can pass the Turing test, but it doesn’t care. It’s just a tool.
This realisation – of form with no meaning – offers us a way forward in attempting to outwit ChatGPT as well as, more importantly, embrace it in higher education.
In terms of outwitting ChatGPT, let me first apologise for my earlier outburst about all your ridiculous whining. Sorry. Not sorry. That’s because most responses to ChatGPT (even the smart ones, such as “watermarking” the output to be able to detect it) confuse the symptom (superior form) with the disease (no meaning). If we are ever going to outwit ChatGPT, the key will be to see if and how our students change their meaning of what we are teaching. (That, by the way, is called “learning”.)
So here’s one obvious solution: benchmark student writing with initial and informal writing assignments so you have a baseline for comparison to future work. If you notice a major divergence between what they wrote initially and what they are writing now, you just have to ask them to explain their thinking. (Plagiarism software, by the way, should therefore have a self-plagiarism button to ascertain congruence of students’ writing over time. You’re welcome, Turnitin.)
Those of you who are paying attention (and are not parrots or robots) will immediately realise that such a solution is naive and unworkable because it is completely incompatible with the number of students most faculty teach, the mode in which we teach them, and the minuscule amount of time we devote to getting to know our students, much less carefully read their submitted work. That is why I stated at the beginning that we are doomed.
I say that we should instead follow Dr Strangelove’s cue and embrace the doomsday machine. What all the naysayers are really squawking about is ChatGPT’s form: its clear, concise and seemingly coherent writing. So let’s meld form and meaning by, for example, requiring all students to use ChatGPT for their initial brainstorming and drafting, kind of like their very own personal TA.
Students would need to turn in the outputs that ChatGPT spits out, along with a statement of what they used, and I might end up grading their process (which outputs did you choose? Why? What did you modify? Why?) as much as their product. But one thing I am sure about is that, if done well, 80 per cent of my students’ work will be much improved.
There is, by the way, nothing new here. Garry Kasparov (and many other chess grandmasters) quickly realised that using AI-powered chess engines dramatically improved their games – just like using Excel dramatically improves your ability to do inferential statistics; just like MRI scanners dramatically improve your ability to peer inside the body; just like a cane dramatically improves your ability to walk.
ChatGPT is, and is not, the end of the world. It all depends on how we use it.
Dan Sarofian-Butin was founding dean of the Winston School of Education and Social Policy at Merrimack College in Andover, Massachusetts, where he is now a full professor.