
A driving test analogy may help avoid a car crash on AI-assisted assessment

If AI can perform or simulate a competency being assessed, is it really appropriate for students to use it, asks Andrew West
August 20, 2023

Driving tests are, for many, a traumatic experience, but we all appreciate the need for them. As far as possible, we want these tests to validly and reliably assess a candidate's competence to get behind the wheel without endangering other road users.

Does that entail that we would never grant a full driving licence to someone taking the test in a self-driving car? This is a question that governments will have to face soon. Self-driving cars are a technological development that we cannot simply ignore. They appear set to replace human-driven cars just as human-driven cars once replaced horse-drawn carriages. The genie is out of the bottle, so it is no use banning future drivers from taking their tests in self-driving cars.

Yet my initial reaction – and I doubt that I am alone – is that we wouldn't even consider granting a full driving licence in such a situation. The licence is evidence that the individual has actually demonstrated particular competences, which are not evidenced when a driver uses a self-driving car. This is true regardless of how enthusiastically our society adopts new technologies or whether we consider such developments to be inevitable. Granting licences to test takers in self-driving cars would undermine the meaning and value of a driving licence.

The parallels with using generative AI in university assessment are obvious. So under what circumstances would we award a degree or credit to someone who completes their assessment using the likes of ChatGPT? And is our reasoning consistent?


One response may be that we are explicitly assessing a student¡¯s ability to use particular technologies and that these will increasingly involve AI tools. But many, if not most, of the competencies that we assess do not refer to the use of technology (in the programme in which I teach, the use of technology is addressed in only one of 10 learning outcomes). Ought we, then, to conclude that, as in the driving licence scenario, granting credit when AI is used would in most cases be entirely inappropriate?

Perhaps we could suggest that strictness is justified in driving tests because the potential consequences of incompetence (fatal car accidents) are far more significant than what might happen if an accountant is awarded a degree based mostly on AI use. However, there can be grave consequences for disciplinary incompetence, too. In relation to accounting, for example, we need only recall Enron, WorldCom, the Global Financial Crisis, Bernie Madoff and more recent scandals.


Yet we have no problem with drivers taking their tests using cars that have power steering, anti-lock brakes and any number of other technological aids to driving. Why single out AI? The answer is that technological enhancements differ in ways that matter. For instance, licences are qualified when people take tests in cars with automatic transmission: licence holders are not allowed to drive cars with manual gears. While some technologies, such as anti-lock brakes, are aids that do not significantly affect the demonstration of core competencies, others, such as automatic transmission, do affect it.

Returning to higher education, this suggests a need to draw a distinction between technological aids that assist students and those that perform tasks on their behalf (a distinction that may indeed be quite blurry). Yet even when tools such as a grammar checker are assisting students, their use is surely only appropriate in a summative assessment if we are not assessing the competence to which the tool contributes. That is, just as we would not allow an automatic car to be used if a driver is being assessed on their ability to change gears, a grammar checker is not appropriate if the student is being assessed on their ability to formulate grammatically correct sentences.

If this sounds harsh, it is perhaps because I am talking specifically about summative assessment. Technological tools (including AI) can clearly be appropriate and extremely beneficial when used formatively, just as a novice driver might benefit from watching how a self-driving car "behaves" or from observing when an automatic transmission system shifts gear.

The challenge posed by generative AI is that, unlike many other tools, it can increasingly perform entire tasks. It is more than just a technological aid. It is more than anti-lock braking or automatic transmission; it is the self-driving car. In this context, we could do with some guiding principles. Is the following too ambitious?


To the extent that AI can perform or simulate the competency being assessed, it is not appropriate for students to use it to complete an assessment task.

Andrew West is associate professor in the School of Accountancy at Queensland University of Technology.

Reader's comments (2)
No, it is not appropriate to use AI in assessment because 1) AI is not yet trustworthy and may never be. The assessment is needed to demonstrate that the person could detect inconsistencies or flaws in any AI-supported task; 2) the person may need to intervene if the AI fails. Autopilots have been a boon in aviation, yet we still train pilots in case the flight enters a scenario outside the parameters where the autopilot is safe, or where the pilot needs to take over. There are plenty of recorded incidents where this has happened. Use AI as a tool if you deem it useful, and assess people's ability to use that tool as you would any other. But do not create assessments that rely on AI. Assess the ability of the students themselves to undertake the task.
I was interested in this article as I think that AI does need to be explored further as a teaching tool. However, I got stumped on the example that taking a driver's license test in an automatic car does not qualify you to drive a manual transmission. In the United States, if you pass the driver's license exam with an automatic car, you are permitted to drive a manual transmission. However, if you are attempting to get licensed for a CDL, it does matter whether you test in an automatic vehicle.