Entering a year in which ChatGPT has been shown to be a formidable disruptive threat to its curriculum, Harvard University has put a priority on trying to make its offerings “AI-proof”. The verdict so far from its dean of undergraduate education: there’s still a way to go.
“I’m finding that the transition is more uneven than I would have guessed,” Amanda Claybaugh, a professor of English, said of her efforts to prevent Harvard students from making AI-powered sprints through their coursework.
“Some of our faculty have already reimagined their teaching entirely, while others still haven’t even tried ChatGPT.”
That divide reflects the suddenness with which ChatGPT and similar online systems have made it possible for students worldwide to upload classroom assignments to AI tools that can produce competent and even high-quality essays.
Harvard got an especially stark warning this summer when one of its undergraduates, Maya Bodnick, ran an experiment in which she gave ChatGPT-generated essays to seven Harvard professors and teaching assistants – covering most of the courses she took in her freshman year in the social sciences and humanities – and found that the papers earned an average grade of 3.57 on a four-point scale.
The result might partly reflect grade inflation at Harvard, but it also suggests that AI-generated essays “can probably get passing grades in liberal arts classes at most universities around the country”, Ms Bodnick says.
Professor Claybaugh worked with academics this summer on ways to counteract student use of ChatGPT-type technologies – suggesting strategies to professors, but not mandating any. “I trust my colleagues to make the choices that are best for their subject matter and their students,” she said.
Along with taking the formal step of prohibiting their students from using AI systems, some Harvard faculty are planning to reduce or eliminate the use of essays written outside the classroom. It’s unlikely that faculty can rely solely on software that claims to detect AI use by students, because those systems are not reliable, Professor Claybaugh said. “Instead, we need to adapt our assignments so that they remain meaningful in the age of AI,” she said.
The more enduring solutions will likely involve both relatively new teaching approaches such as active learning and flipped classrooms, where in-class discussion is prioritised, and greater emphasis on the process of writing or problem-solving “rather than simply evaluating the student’s finished product”, Professor Claybaugh said.
Ms Bodnick agreed that her professors had few good options for trying to work with AI. For now, she accepted that the professors would need to base most of their grades on students’ classroom participation and in-class exams. “Which feels really terrible,” she said, “because you definitely have students producing worse-quality work if they can’t spend time on it on their own, or consult more resources.
“Unfortunately, there’s going to have to be some pretty draconian ways to just completely make sure that students aren’t using the technology.”
Professor Claybaugh begins the academic year nervously observing the range of responses among faculty to a clear and widespread need to revise practices: “Some early and eagerly; some not at all,” she said. But, she predicted, “the advent of generative AI will push more of them to do so more quickly”.
And to be clear, Professor Claybaugh said, some of the same variations in adoption rates can be seen among students. “Some use generative AI frequently and comfortably, some tried it and found it unhelpful, and some have never even tried it at all,” she said. “I’m guessing that historians of technology would tell us that it is ever thus.”