Against a backdrop of myriad changes in the higher education world in recent years, the UK has also witnessed shifts in how assessments are designed and implemented across higher education institutions. Most universities are taking steps to diversify the ways in which students' progress is measured and to increase the inclusiveness and accessibility of assessments. Crucially, a significant factor determining students' successful completion of an assessment is their understanding of what it is measuring and what the marker expects of them. Such understanding is part of students' "assessment literacy". But do they really understand the way that academics will mark and assess their work?
Over the past two years, a team of academics and students at the University of Reading's Institute of Education (IoE) has undertaken a project to capture students' understanding of assessment criteria and explore ways in which this can be developed. The main aim was to investigate the extent to which students understand the yardstick used to assess their work. Understanding how academics will measure success is a key component of students' assessment literacy, which in turn is integral to their recognition of the link between assessment and learning. Such understanding can equip students with the knowledge needed to make better use of assessment criteria when developing draft work prior to submission and to interpret marks and feedback given to their work.
The terminology commonly used in assessment criteria can be problematic for undergraduate students transitioning from secondary education, or for students from diverse academic backgrounds, such as those for whom English is not their first language and mature students who are returning to education. Through this work, we wanted to support academics to reflect on their programme's assessment criteria and encourage them to make their marking and feedback more accessible to their students.
Our first step was to work with more than 300 students across all IoE undergraduate and postgraduate programmes. Students were asked to examine their programme¡¯s assessment rubric and to circle any terms that they found confusing. The team then collated this feedback and created a summary table for each programme, highlighting and ranking those terms that students had identified as confusing.
Two broad categories of problematic terms were identified: linguistic terms, such as "recapitulation" and "pertinent", which could feasibly be replaced with easier-to-understand vocabulary; and conceptual/technical terms, such as "research ethics" and "methodology", which are integral to the discipline and therefore could not be replaced. Concerning the former, the team recommended alternative vocabulary for some of the most problematic terms for each programme. For example, we suggested replacing "recapitulation" with "summarising or restating the key points". For conceptual/technical terms, the team recommended that module leaders keep them in the criteria but ensure that these terms are clearly explained and discussed with students.
In the second phase of the project, the team created an IoE-wide glossary of some of the most common assessment terms (such as "analytical thinking", "argument", "critical thinking", "define", and "justify"), which contains definitions and discipline-specific strong and weak examples of each of these terms. Explanations are also provided for each example. Students from across the programmes were asked to read and comment on draft versions of the glossary to ensure the clarity of the examples. The final glossary was made available to all students on the university's virtual learning environment for use during preparation of their assignments.
Throughout this project, students were actively involved as partners. We embraced the view that, as key stakeholders in teaching and learning, they should be given opportunities to participate in the development of teaching, learning, and assessment tools. Such participation can facilitate students¡¯ intrinsic motivation, active engagement and deeper approaches to learning.
The findings showed that terms used in assessment criteria can be confusing not only for first-year undergraduate students but for students at all levels. Further, they highlighted the potential mismatch between our own perceptions of the clarity of the criteria and those of students.
As academics (and hence markers), we are so familiar with the assessment criteria we use that we may forget that some students do not share this understanding. By collaborating with students and taking their feedback into account to revise our practice, we aim to provide a more inclusive learning environment and promote higher academic achievement for all.
If you are teaching a module or leading a programme and would like to replicate what we did to capture and develop students¡¯ assessment literacy at your institution, please contact n.trakulphadetkrai@reading.ac.uk.
Print headline: Making measuring more meaningful