
Continuous assessment of students 'may create greater pressure'

Report says the transition away from end-of-module exams could also increase staff workload
February 10, 2020
Source: Getty



A report says that as universities shift their focus from "final exams" towards models of "continuous" assessment using technology, the impact of the change on student stress and staff workload must be monitored.

A study published by Jisc, the main technology body for UK higher education, says that digital tools "offer many opportunities for students to capture and reflect on evidence of their learning, to use and share formative feedback and to record progress", adding that assessing learners continuously throughout a course, rather than through final exams, "may be more effective".

The Future of Assessment report predicts that learning analytics systems, which use course materials to track student performance and engagement, "could render some 'stop-and-test' checkpoints redundant", and that the annual assessment cycle "could be replaced by on-demand assessment, letting students choose to demonstrate their learning when they are ready".

But the report warns that "a continuous stream of small assessments could put students under greater pressure". It says that while automated assessment can reduce staff workload, some lecturers "may also be prepared to experience a small increase in workload in order to transition to better teaching methods focused on continuous assessment, providing a more authentic assessment experience and reducing student stress".

Andy McGregor, the organisation's director of educational technology, said the trend would need to be watched closely.

"New technology must be implemented carefully, so that students do not feel they are being monitored or assessed incessantly, and teachers do not feel they must constantly test and grade," he said. "I think technology can provide solutions, but it also brings risks; that is why the problem can never be solved by technology alone."

"[We also need] good assessment, good instructional design [and] good pedagogy. If all of those factors are taken into account, then technology can be put to good use."

The report urges UK universities to act quickly to bring digital technology into assessment or face "falling behind rapidly". It found that although Newcastle University, which has digitised 10 per cent of its exams, is one of the UK's leading institutions in this area, several institutions in the Netherlands and Norway are already close to 100 per cent digital assessment.

Mr McGregor highlighted innovations such as online writing tutors, which help students benefit from practice, and peer-assessment tools, which allow students to learn by asking questions and making comments.

The report suggests that technology could support more "authentic" assessment, such as building a website, while universities will need to strike a balance when adopting artificial intelligence tools that offer increasingly sophisticated instant feedback and automated marking.

It also calls on universities to adopt authorship-detection and biometric-authentication tools to prevent cheating.

anna.mckie@timeshighereducation.com

This article was translated by Jing Liu for Times Higher Education.

Reader's comments (4)
So, some lecturers "may also be prepared to experience a small increase in workload...". That's hilarious. Online assessment can be a huge amount of extra work for academics. Here are four specific instances. First, responsibility for tasks such as grade entry, formerly done by administrative staff, is shifted to academics. This creates the illusion of reducing costs in the administration by moving them, unseen, to the academic staff group. If this were costed properly it would make no sense, but it isn't, because the academics 'just do it'. Business consultants call this 'squeezing the balloon': there is the illusion of removing cost, but it just pops out again somewhere else. The price paid is stress and additional hours of work. The second example relates to online group assessment. Many online systems require (academic) staff to create groups by 'dragging and dropping' student names from a spreadsheet into the online system. It sounds easy, but what if you have a module with 400 students? Is that a good use of academic time? The third relates to online marking. It would be useful to see some research on the health impact of academics spending large amounts of time at a computer reading work online. The employer will say that, of course, one should take rest breaks; when given only a few days to mark a large amount of work, this may not be possible. The fourth relates to video assessments. A wonderfully creative idea, but has anyone ever looked at the impact of watching and assessing fifty ten-minute videos?
Research has shown that effective learning is stressful and effortful, so what is the aim here: to reduce the amount of learning in HE, or to reduce the workload associated with effective learning? There is evidence to suggest that foreign students, as well as local students, find our HE does not deliver value for money; reducing workload and stress might reduce the learning outcomes achieved and exacerbate this problem further. Just because something is stressful or involves a high workload doesn't mean it should be removed or reduced. The core concern here should be what learning outcomes are reasonable to achieve given the time and level of study. Compared with good universities outside the UK, students in UK HE are achieving, and expected to achieve, less and less with each passing year under this mollycoddling mentality: 'Oh, they are stressed, let's have them learn and achieve less.'
Written assessments that students can write at home (or plagiarise) are forbidden in many universities outside the UK. In the UK, one can get complaints from student cohorts because they have two deadlines within the same week. How this expensive, mollycoddled HE can be considered "superior" is beyond me.
I run two types of online assessment. One is end-of-module and the other is continuous. The continuous one consists of monthly online exams done as in-class tests, and all of them include one third of previously assessed material, so that students don't simply forget or stop reading about material they have already been tested on. Each is only 30 questions and is automatically marked. The students can also look over their answers to see which ones they got wrong, which they got right and what the correct answers were, but they can't write any of those answers down, so the feedback serves more to help them tailor their revision than to memorise what the correct response should have been. The questions are not limited to MCQs either; they include all manner of interactive questions. This whole model is completely effortless for the academics, and it gives students the ability to build up their marks across the academic year, with the final result being their best five of the six tests they sit. The second type does require marking, but it is a more authentic exam than traditional essay-based ones. It is a single unseen question at the end of the module, done under controlled conditions on a computer with access to limited online resources for providing evidence of their arguments. The assessment is more authentic because there aren't really any jobs in the scientific field that require going to a basement with a pen and a piece of paper and no access to the outside world (the traditional paper-based method of three questions out of six, simply vomiting knowledge onto paper with little consideration for properly answering the question or demonstrating any skill at writing). And the ability to edit the document they submit gives them the chance to reconsider what they say, how they say it and how they structure their work.
Arguably the marking for these takes longer than for a traditional paper-based exam, as one cannot simply put ticks on a page that the student never sees: they are able to get their feedback as they would for any other piece of coursework, because they submit it through Blackboard at the end of the exam. Personally, I provide feedback as audio snippets entered in-line in the document as I read through it, to provide the contextual conversation students often want when they don't understand brief comments written on their work. Colleagues of mine do find the marking somewhat difficult because of the additional time it takes, but this is largely because they still give feedback by typing comments in, which takes them much longer to reach a high enough quality for the feedback to be useful. I find both forms of assessment incredibly useful for students. And the fact that zero admin and marking are required for the much larger first-year module means I naturally have more time to put into marking the slightly smaller final-year module. There are 250 students on the first-year module and 100 on the final-year module, but it still only takes me 15 minutes to mark a 1,500- to 2,000-word exam essay using audio feedback, and that includes about a dozen audio clips of 30 seconds or so each.