Thank you ChatGPT for exposing the banality of undergraduate essays

Eerie AI simulations of academic writing show why student essays must return to their more imaginative and freewheeling roots, says Colm O’Shea
February 23, 2023

While the learned texts of ancient Greek and Roman writers or epistles from the early Christian church fathers are sometimes regarded as essays, the modern form originated with Michel de Montaigne.

Wealthy and erudite, at some point the 16th-century French philosopher regarded his substantial library sceptically and asked: “What if everything I think I know is bullshit?” He developed the essai, a profoundly personal and idiosyncratic project with one overarching goal: to see the world anew. Titles such as “Of Thumbs”, “Of a Monstrous Child” and “Of Cannibals” convey his broad and bizarre mental terrain. Each essay rotated an idea, scrutinising it from all sides and seeking fresh insight as surprising as if authored by another consciousness.

Compare Montaigne’s disciplined playfulness with the industrialisation of the academic essay. A cottage industry of advisers shepherds high-schoolers through their college application essays. Once matriculated, a disheartening proportion of students plead with professors to provide a correct template to emulate – or resort to plagiarism or essay mills to leap this arbitrary hurdle on their way to an imagined future where they’ll never have to use writing as an aid to learning or reflection ever again.

This was before ChatGPT rendered plagiarism and essay mills as redundant as blacksmithing. The crisis that the college essay faces, then – the crisis facing all those who teach it, or teach through it – is not rooted in AI, but the advent of ChatGPT may clarify it better than anything else.

The crisis stems from a larger, older problem in formal education. For too long, there has been an undue focus on convergent thinking – in other words, testing students on getting “correct” answers to problems with a set solution. College applications are generally evaluated in two broad categories: knowledge base and cognitive aptitude. Standardised testing reveals a student’s basic grasp of their discipline – which is important – but it ignores another relevant domain: divergent (or lateral) thinking. (Measures of divergent thinking are nascent but promising.)

Although not identical to creativity per se, divergent thinking is an important precursor to creative work. It is also, by definition, antithetical to standardisation. It proceeds via mechanisms such as deep pattern recognition and analogy (verbal, visual, mathematical) at which software such as ChatGPT, which gleans a “gist” from dizzyingly large datasets, is not adept. An old word for this species of thinking was “wit” (a surprising fusion or inspired connection between two unlike things), and while it may seem quirky or whimsical, it’s anything but trifling. Dedre Gentner, a cognitive scientist and authority on analogical reasoning, explains that vividly explaining something to yourself or others cultivates the capacity for abstraction and uncovers novel connections between different fields. For Gentner, the ability to generate accurate metaphors or analogies may be a better proxy for creative intelligence than IQ. Scientific breakthroughs often rely on glimpsing an imaginative analogy between two unlike things.

Along similar lines, K. H. Kim, professor of creativity and innovation at William & Mary, argues that the obsessive focus of both Asian and Western educational systems on convergent thinking is slowing innovation across the arts and sciences. (This focus on intellectual conformity may have a spillover effect on political thought, but that’s an article for another time.)

The college essay ideally involves the writer establishing an intellectual game, complete with obstacles to trip them up and shake off their complacency about the subject at hand. What we call “voice” is a recognition of a mind brightening in response to the challenge it has set itself, and being aware that it could be wrong. By contrast, ChatGPT demonstrates the worst version: an echo chamber, a neat summation of critical consensus. Consider Harry Frankfurt’s philosophical essay “On Bullshit”, in which he distinguishes between lying (falsity) and the spouting of convincing-sounding claims to which no careful thought has been given (phoniness). Whereas the liar needs an accurate model of the truth to actively hide it from others, the bullshitter needs no such awareness. In fact, a bullshitter can spew true statements all day long; what makes them bullshit, in Frankfurt’s view, is not their truth or falsity, but the heedless manner in which they’ve been arrived at. “By virtue of this,” he writes, “bullshit is a greater enemy of the truth than lies are.”

ChatGPT is the apogee of Frankfurt’s bullshit artist. By using a large language model to cobble together things humans are likely to say about a subject, it produces an eerie simulation of comprehension, but one utterly divorced from insight about the real world.

One of ChatGPT’s most striking aspects is how well it mimics the glib, bloodless prose that characterises so much academic writing. Stephen Marche’s Atlantic essay, “The College Essay Is Dead”, generated much discussion in my writing department and surely others. One throwaway line worried me deeply. Explaining why he would give the AI-generated sample text he’s shown us a B+, he writes: “The passage reads like filler, but so do most student essays.”

Bullshit has plagued us since long before ChatGPT, but need we greet it with such jaded resignation?

At the risk of sounding hopelessly idealistic, let me say no. An engaged academic could return the essay to its proper Montaignian heritage: a divergent and creative exploration of possibilities. This requires some overhauls, such as a move away from huge lecture halls where the only contact point between students and professors is a hastily written (and hastily graded) essay. Smaller student-to-teacher ratios restore the viva voce of dialectic, between student and teacher and between students themselves.

AI will keep evolving. Machine learning will yield millions of “novel solutions” in a variety of fields. Currently there are two AI extremes: convergence with no novelty, and extreme divergence with no sense of “appropriateness”, to borrow Dean Keith Simonton’s definition of creativity as originality × appropriateness. “Appropriateness” is domain-specific, but it implies a vast set of Wittgensteinian “language games”, the depth and breadth of which can only increase as our culture becomes more complex. This deep set of “games” is too subtle, sub-rational and rapidly shifting for AI to grasp through mining our text alone.

In an ideal future, education may prioritise cultivating curiosity, creativity and sensitivity across all learning domains, in students of all ages. It’s an exciting and overdue project. This doesn’t mean turning our backs on acquiring knowledge, but it does entail a renewed focus on “playing” with our ideas, and metacognitive practices around how and why we learn what we do.

The ultimate game for sentient beings is to surprise themselves by how inspired their answers can be when they’re invited to ask questions on their terms, and pursue what strange answers emerge. It’s infinitely preferable to an imitation of comprehension, whether that be from an artificial bullshitter or an organic one.

Colm O’Shea is clinical associate professor with the expository writing programme at New York University.

Reader's comments (3)
Teaching ethics to computer scientists, I tell them that I am more interested in their arguments than their conclusions: I don't mind if their opinions differ from mine, but I do expect a well-reasoned explanation of how they came to form those views and the supporting evidence for why they think their opinion is the correct one.
Thank you. Love this essay. I train professionals who work with students on college admissions and applications. My company (WowWritingWorkshop.com) uses a 10-step approach that puts the student in the driver's seat and the helper/coach/tutor/consultant in the backseat (where they belong). Inside this business of college admissions, the talk is crazy. Everyone is worried about this bot, cheating, etc. Cheating is not the issue here. Ethical essay coaching is. Admissions will know if an essay does not match an application, whether written by a bot, a parent, a teacher, or tutor. ChatGPT is just another distraction, the latest shiny object. People need to learn to not get so distracted by shiny objects of any sort.
I don't disagree with the diagnosis here, but I'm unsure of the solution. We have to ask ourselves how many of our students come equipped to be able to do this (irrespective of whether "wit" is innate or comes about through experience). Is this something that more than a small fraction of students will be able to do? Is it something that all or most professors are able to do? Many of us can be productive researchers without ever really needing to be particularly startlingly original. Is it something that can be explicitly taught? Even if it is, I doubt many of us have a clue about how to go about teaching it.