Since the launch of ChatGPT at the end of November last year, articles on the possibilities and perils of the sophisticated chatbot for teaching and pedagogy have become a dime a dozen. But its effect on research is likely to be no less profound.
In some social science disciplines, such as information management, ChatGPT already acts in effect as a co-author. Researchers are excitedly jumping on the bandwagon, using ChatGPT to render the research process more "efficient". In my opinion, however, the use of ChatGPT in research can create at least three adverse outcomes.
The first is that using the technology to compile literature reviews will impoverish our own analytical skills and theoretical imagination. When we write our own literature reviews, we read for understanding: we seek to know more than we did before through the power of our own minds. This involves a willingness to overcome the initial inequality of understanding that can exist between reader and author (such as when a PhD student reads a monograph in preparation for a first-year report). And the effort enables us to see and make new theoretical connections in our work.
But ChatGPT can't understand the literature: it can only predict the statistical likelihood of the next word being "a" rather than "b". Hence, the literature reviews it produces merely offer up food for thought that is past its best-before date, given that the training data are not necessarily current. This is why some have described ChatGPT's knowledge production as occurring "within the box", rather than outside it.
Being able to understand the current literature and to harness the imagination is crucial for linking observed phenomena with theoretical explanations or understanding for improved future practice. The risk is that an over-reliance on ChatGPT will deskill the mental sphere, leaving us poorly equipped when we need solutions to novel, difficult problems.
The second problem with the use of ChatGPT in social science research is that it changes the mode of theorising. The technology processes data through computation and formal rationality rather than through judgement and substantive rationality. Thus, when it is applied to theorising, it embodies an assumption that the world is based on abstract and formal procedures, rules and laws that are universally applicable. This is an outlook that Max Weber argued is detrimental to social life.
Such a detriment might arise, for instance, when judgement is substituted by reckoning in decision-making, fundamentally changing the human or socially developed norms and practices for regulating conflicting interests. Morality thus becomes rather mechanical, prompting situations in which "decisions are made without regard for people", to quote Weber.
Thus, in the computational approach, morality is considered a universally applicable phenomenon that can be expressed through computation. By contrast, a mode of theorising based on judgement that is sensitive to the local, social and historical context of phenomena tends to appreciate that values are negotiated, renegotiated or even contested over time.
This concern is exacerbated by the fact that ChatGPT has been shown to reproduce discriminatory associations concerning gender, race, ethnicity and disability due to biased training data. As Brian Cantwell Smith argued in his 2019 book The Promise of Artificial Intelligence: Reckoning and Judgment, if we are "unduly impressed by reckoning prowess", there is a risk that "we will shift our expectations on human mental activity in a reckoning direction". My argument is that this observation also applies to theorising as a human mental activity.
The third problem with using ChatGPT in research is that it distorts the conditions for a fair and truly competitive marketplace for the best ideas. Publications matter not only for their scientific value but also for career progression and status, and the difficulty of obtaining permanent posts generates a strong temptation to skip the hard thinking and writing that normally go into well-crafted papers in pursuit of a longer list of publications to put on a CV.
I am entirely unimpressed by attempts to assuage concerns by arguing that ChatGPT will only ever be a research "tool", with human authors remaining in charge. At the end of the day, even if use of ChatGPT is transparently declared, it is difficult to tease out the human's and the machine's relative contributions. Some years ago, through perseverance and dedication to reading for understanding, my co-authors and I managed to turn a 15-page rejection letter for an essay into an acceptance letter. That experience is ours, and it is a reminder that academic success is more rewarding when we can appreciate the effort that went into it.
I realise that my concerns are probably shared only by a minority of researchers. The rapid adoption of ChatGPT in research makes it abundantly clear that the technology is here to stay. Yet it is important to understand that this development is likely to impoverish rather than enrich our theoretical and analytical skills in future: consider reports that measured intelligence levels in the general population are declining as the use of technology increases.
The risk is that we, as researchers, ultimately lose the ability to explain and understand the social world. Who wants to be a turkey voting for this bleak Christmas? Count me out.
Dirk Lindebaum is a senior professor in management and organisation at Grenoble École de Management. This article is based on a forthcoming paper in the Journal of Management Inquiry.