
AI or not, students must still learn to think for themselves

Perhaps AI will be a useful tool. But our obsession with every shiny new object regardless of the harm it might do makes me worry, says Melinda Zook
June 13, 2024
[Image: humans look over the shoulder of a robot at a computer, symbolising human independent thought. Source: iStock/demaerre]

Resistance is futile. If, like me, you are pondering the future of liberal arts education in the age of AI, these thoughts might have run through your mind. As if our existence in humanities programmes wasn’t challenging enough, a whole new world opened in November 2022 when OpenAI released ChatGPT. Since then, there has been an explosion of large language models (LLMs) ready to do your research, summarise the evidence and write your papers. If you are an instructor trying to teach students how to locate and evaluate sources and to read and write critically, this new technology poses a problem, to put it gently.

Total prohibition of LLMs is not viable, however. After all, it is very likely that any writing done on a word processor will have some AI intervention, whether we are cognisant of it or not. Grammarly has integrated an LLM into its interface; Google’s “help me write” feature is an LLM, and Microsoft Word has been telling me how to write for some time (which is really annoying).


But I do insist that we make reflective, intelligent decisions about the impact that something like GPT-4 – the much more powerful successor to the model behind ChatGPT – will have on the intellectual development of our young people. After all, those of us reading this article – not to mention the programmers who built the LLMs – had the opportunity to learn how to think and write critically without mechanical help: an ability that is needed in every profession and is part of what it means to be a thinking human being. The brain is like a muscle; it requires exercise. I fear that before we even know whether the use of these LLMs helps or harms student learning, we will see a flurry of highly questionable decision-making in universities.

Many educators will say we must “follow the science”. But when it comes to putting that into practice, they opt time and again for the latest technological innovation over what students need. Numerous studies over the past 25 years show that we learn better and comprehend more when we read print on a page rather than pixels on a screen (and the same holds for writing on paper over typing). We are far more likely to remember what we have read when we can see it and touch it, and are not distracted by hyperlinks and all the other magical things we can do with a screen. Yet in the past 10 years we have seen a rush to put iPads in the hands of children as young as five. And we have allowed students to bring their phones to class even though studies have shown that the mere presence of a phone lowers a student’s comprehension.

In intellectual, scientific and creative work in particular, future workers will still need to be able to draw on a body of knowledge, read closely and critically, organise their thoughts, find evidence and communicate ideas. Even in professions that will be assisted by some form of AI, humans with a solid understanding of the data and the ability to evaluate evidence will still be needed. This kind of intellectual work is not the work of an empty head.

College is when young people are given the opportunity to develop these skills under the guidance of skilled faculty, such as those teaching in Purdue University’s Cornerstone integrated liberal arts programme – a two-semester, first-year sequence that aims to develop students’ communication and creative-thinking skills, broaden their outlooks and cultivate their minds through exposure to transformative texts. If students ask GPT-4 to do their work for them, whether summarising a novel they were supposed to read or writing the essay they were supposed to write, they are not developing the kind of attention and memory necessary for deep thought.

There is something else, too, that students need to learn for future citizenship – something LLMs cannot replicate. That is the ability to understand, tolerate and even appreciate subtlety, ambiguity and nuance. Few things in this world are simply right or wrong, good or evil, but the digital world of fast answers makes us susceptible to propaganda, hyperbole, dogma and venom. If we offload intellectual work to algorithms, our susceptibility is only likely to increase, with dire consequences for our political culture.

Or, alternatively, if we all slavishly absorb what the machines tell us, we might create one homogenised culture that blurs diversity and identity. To my mind, that would be a catastrophe.

So let’s make smart decisions. If used responsibly by those who already know how to think critically, generative AI might be one tool in a large toolkit – like calculators for mathematicians, as the oft-repeated analogy goes. I hope so. But from what I have seen of our American obsession with every shiny new object regardless of the harm it might do, I worry.

My plea is simply this: insofar as general education is concerned, let’s continue to teach our students the joy of finding their own voices, their individuality and their creativity. Let’s continue to ask them to read actual books, find and evaluate evidence on their own and write their own work.

Because, yes, in the future, they will probably work with AI. But if we don’t teach them to think for themselves, the risk is that they one day find themselves working for AI.

Melinda Zook is Germaine Seelye Oesterle professor of history and director of the Cornerstone programme at Purdue University.

Reader's comments (4)
Although I broadly agree, the analogy with calculators was never credible. Calculators are as useful for doing mathematics as running shoes are for fish evading sharks. There are some arithmetic tasks for which calculators are sometimes useful. There may be tasks for which AI is eventually useful – tasks for which a university education is not required.
Do not oppose or dichotomize AI and "thinking". Used well, AI can help students, and professors, to think better for themselves. Do not oppose the use of AI and higher education either. Teach the best of AI at all levels.
I was recently sent an email promoting a paid-for AI tool for generating feedback. How likely is it that we end up in a situation where AI is used to mark AI-generated work?
Interesting discussion, and it echoes some of my thoughts. Here is something that I wrote in a paper ("A Diagnostic View on Information Technology", ACM-SEN 17(4), Oct 1992): "The adoption of IT or for that matter Science and Technology, should be viewed as an instrument in lessening the burden of humankind in improving its 'functional effectiveness'. It shall not be considered as a solution to the problems of society, as the latter concept has the potential to lead people to a state of 'analytical lethargy', by suppressing analytical thinking." The key term I coined in that era was 'analytical lethargy'. I very much see that it is, sadly, becoming ingrained in current cohorts.