
Continuous assessment 'may be more stressful' for students

Transition away from end-of-course exams may also increase staff workloads, says report
February 10, 2020

Universities must monitor the impact on student stress and staff workload as they shift away from "high-stakes" exams and towards using technology to conduct "continuous" assessment, a report says.

A paper published by Jisc, UK higher education's main technology body, says digital tools offer "a host of opportunities for students to capture and reflect on evidence of their learning, to use and share formative feedback and to record progress", adding that it "may be more effective to assess learners continually throughout their course instead of through a final exam".

The report predicts that learning analytics systems – which track student performance and engagement with educational materials – "might make some 'stop and test' assessment points redundant", and claims that annual assessment cycles "might be replaced by assessment on demand, whereby students can evidence their learning when they feel ready".

However, the report warns that there is "a danger that continual, low-level assessment may prove to be more stressful for students". And while automation of assessment may reduce workload for staff, it says, some lecturers "may also be prepared to experience a small increase in workload in order to transition to a better continual assessment-focused approach that can provide a more authentic assessment experience and put less stress on students".


Andy McGregor, Jisc¡¯s director of edtech, said the trend would need to be watched closely.

"Technology has to be implemented carefully so students don't feel they are constantly being monitored or constantly being assessed, and academics don't feel they constantly have to set assessments and are constantly having to mark," he said. "I think technology offers solutions there, but it also offers risks; that's why technology is never the answer on its own.


"[We also need] good assessment, good education design [and] good pedagogy. If all of those things are thought about, then technology can be used well."

The Jisc report calls for UK universities to move faster to embed technology into assessment or face "being rapidly left behind". It observes that although Newcastle University is one of the UK's leaders in this field, conducting about one in 10 exams digitally, several institutions in the Netherlands and Norway are close to 100 per cent.

Digital evaluation will help to address the ¡°growing disconnect¡± between how students are assessed and the value they can take from it, according to the report, which says continuous assessment would teach students to learn and adapt rather than simply absorb information.

Mr McGregor highlighted innovations such as online writing coaches, which help students learn by practising, and peer-to-peer assessment tools, which help students learn by asking questions and posting comments.


The report says technology can help to create more "authentic" assessments – for example, developing a website – and that universities need to strike a balance between human marking and the use of artificial intelligence tools that offer increasingly sophisticated instant feedback.

The report also calls on universities to adopt authorship detection and biometric authentication tools to prevent cheating.

anna.mckie@timeshighereducation.com

Reader's comments (4)
So, " some lecturers ¡°may also be prepared to experience a small increase in workload...". That's hilarious. On-line assessment can be a huge amount of extra work for academics. Here are four specific instances. First, this is due to shifting responsibility for, for example grade entry, formerly done by administrative staff, to academics. This has the illusion of reducing costs in the administration, by shifting it in an unseen way to the academic staff group. If this was costed properly it would make no sense but it isn't, because the academics 'just do it'. Business consultants call this 'squeezing the balloon'. There is the illusion of removing cost, but it just pops out again somewhere else. The price paid is stress and additional hours of work. The second example relates to online group assessment. Many online systems require (academic) staff to create groups by 'dragging and dropping' student names from a spreadsheet into the online system. It sounds easy, but what if you have a module with 400 students? Is that a good use of academic time? The third instance relates to online marking. It would be useful to see some research on the health impact of academics spending large amounts of time at a computer reading work online. The employer will say, well, of course, one should take rest breaks. When given only a few days to mark a large amount of work this may not be possible. The fourth example relates to video assessments. A wonderfully creative idea but has anyone ever looked at the impact of watching and assessing fifty ten minute videos?
Research has shown that effective learning is stressful and effortful - so what is the aim here? To reduce the amount of learning in HE, or to reduce the workload associated with effective learning? There is evidence to suggest that foreign students as well as local students are finding that our HE does not deliver value for money - reducing workload and stress might reduce the learning outcomes achieved and exacerbate this problem further. Just because something is stressful or carries a high workload doesn't mean it should be removed or reduced. The core concern should be what reasonable learning outcomes can be achieved given the time and level of study - compared with good universities outside the UK, students in UK HE are achieving, and expected to achieve, less and less with each passing year under this mollycoddling mentality of 'Oh, they are stressed, let's have them learn and achieve less'.
Written assessments that students can write at home (or plagiarise) are forbidden in many universities outside the UK. In the UK, one can get complaints from student cohorts because they have two deadlines within the same week. How this expensive, mollycoddled HE can be considered "superior" is beyond me.
I run two types of online assessment. One is end of module and the other is continuous. The continuous one is monthly online exams done as in-class tests. And all of them include one third of material that has previously been assessed, so that students don't simply forget or stop reading about material they have already been tested on. It is only 30 questions and is automatically marked. The students can also look over their answers to see which ones they got wrong, which ones they got right and what the correct answers were. But they can't write down any of those answers, so the feedback is there to help them tailor their revision rather than to memorise what the correct response should have been. These questions are not limited to MCQs either, as they include all manner of interactive questions. This whole model is completely effortless for the academics, and gives the students the ability to build up their marks across the academic year, with the final result being their best five of the six tests they take.

The second does require marking, but it is a more authentic exam than traditional essay-based ones. It is a single unseen question at the end of the module, done under controlled conditions on a computer with access to limited online resources for providing evidence of their arguments. The assessment itself is more authentic because there aren't really any jobs in the scientific field that require going to a basement with a pen and a piece of paper and no access to the outside world (the traditional paper-based exam of answering three questions out of six, simply vomiting knowledge onto paper with little consideration for properly answering the question or demonstrating any skill at writing). And the ability to edit the document that they submit gives them the chance to reconsider what they say, how they say it and how they structure their work. Arguably the marking for these takes longer than a traditional paper-based exam, as one cannot simply put ticks on a page that the student never sees; they get their feedback as they would for any other piece of coursework, since they submit it through Blackboard at the end of the exam. But personally I provide feedback as audio snippets entered in the document, in-line, as I read through it, to provide the contextual conversation that students often want when they don't understand brief written comments on their work. Colleagues of mine do find the marking with these somewhat difficult due to the additional time it takes, but this is largely because they still give feedback by typing comments in, which takes them a lot longer to do at a high enough quality for the feedback to be useful.

I find both forms of assessment incredibly useful for students. And the fact that there is zero admin and marking required for the much larger module in the first year means that I naturally have more time available for marking the slightly smaller module in the final year. There are 250 students on the module in the first year, and 100 students on the module in the final year. But it still only takes me 15 minutes to mark a 1,500- to 2,000-word exam essay using audio feedback, and that includes about a dozen audio clips of 30 seconds or so each.