Last week I received a reviewer comment on my (rejected) paper. “Stay away from trendy progressive jargon like ‘lived experience’, ‘interrogating’, etc,” it recommended, adding: “Just talk like a human being.”
The comment came from a reviewer at a top-tier US psychology journal, where one would expect the editors to safeguard authors from such upsetting remarks. I don’t mind admitting that I was offended by the final comment, even if most editors and reviewers are English-speaking monoglots less sensitive to language-related issues.
With such hurtful comments abounding, it is no surprise that online courses teaching early career researchers how to use generative AI for academic writing are now oversubscribed.
The promise of these tools to improve language is appealing to non-native academics for two reasons: first, because they are cheaper than language-proofing services. At Wiley, for example, a standard English language editing service for an 8,000-word article costs about $1,000 (£790). In comparison, an all-access subscription to AI-based academic tools is $25 a month.
Second, AI bots can easily be trained to write in the style of a target journal audience, or even in the researcher’s own writing style.
That gives me, as a non-native researcher, new possibilities: I now have the option not only to use language software (or to bother native English-speaking friends) but also to train AI bots to fix my grammar. What’s not to like?
If you think that generative AI is going to democratise access to science, you are wrong. In reality, these approaches perpetuate the language discrimination problem in academia by putting the burden on individual researchers.
Yes, the basic version of ChatGPT is free, but to get higher-quality output one needs to subscribe. The bot speeds up the writing, but it takes time to edit the text and check for inaccuracies. So non-native researchers still have to invest more time and money in fixing their papers than their native-speaking colleagues do.
A recent study, which went viral on academic Twitter, touched on this problem. The study surveyed 908 researchers in environmental sciences from different countries and found that academics who had English as a second language had to expend far more time and cognitive effort to complete scientific activities. About a third of early career scientists had turned down opportunities to attend conferences because of language barriers. The additional administrative load and mental burden were greatest among younger scientists and amounted to discrimination in some contexts.
One way of addressing the problem is to provide non-native speakers with extra resources. After transferring to a university in Norway, I discovered that Norwegian universities provide dedicated budgets for language-checking services to support their researchers. Even PhD students have their own budgets allocated for English proofing. Language checking is a task delegated to language experts, leaving the researcher to focus only on what they wrote, not how they wrote it.
Norwegian researchers might publish their papers using language services, but this won’t necessarily help them become better writers in the long term. The crucial thinking process of writing an article should always be reserved for humans. In her upcoming book, Naomi Baron highlights that the value of AI is to augment writing, not to automate it. When humans are allowed to write in their native languages, they bring distinctive stylistic expressions, metaphors and unique quirks that often enrich the text.
When rich countries foot the bill for language services for their researchers, they risk exacerbating the language discrimination problem globally, leaving other non-native researchers at the back of the scientific queue. To level the playing field, we need to change the context itself. As all social justice researchers know, tackling language discrimination is not about changing individual skills or practices; it requires dismantling the systemic structures that perpetuate inequality.
My suggestion is that we collectively agree to use AI to address language discrimination. What if all higher education staff were allowed to write in their preferred languages, with specialised bots translating their work for readers? What if the millions of dollars that go towards open-access publishing were instead invested by academic journals in creating AI bots that support non-native academics?
Imagine if academic articles could be automatically translated from any language and formatted in the target journal’s style, both for writing and for reading. There could be specialised bots for authors, reviewers and editors.
With more non-native speakers using AI bots for their scientific writing, we cannot allow language discrimination in academia to continue. My reviewer’s crass instruction to “talk like a human being” might eventually prove useful, but only if it pushes academia to confront the unacceptable hierarchies on which scholarly publishing is currently based.
Natalia Kucirkova is professor of early childhood and development at the University of Stavanger in Norway, and professor of reading and children’s development at the Open University.