
Why you should write feedback to your students before they’ve submitted

Starting at the end seems counterintuitive, but anticipating student strengths and weaknesses and automating your responses comes into its own for large cohorts

Andy Grayson
Nottingham Trent University
22 Apr 2022
Working back to front, by writing feedback before students have submitted, can help when teaching large cohorts


When I set about writing this piece, I drafted the final paragraph first. I always do. I want to know where I’m trying to get to when I set off on any kind of thought (or, indeed, real) journey. I need to know where I want to land.

To some, it’s counterintuitive to start at the end of a thing, and students find it a novel idea. “Write the conclusion of your essay first,” I urge them. And when I lead on assessment planning, I urge colleagues to do the same. “Write your feedback to your students first,” I want to say. “Yes, I know they haven’t done the work yet. Yes, I know it’s months away.”

The way we assess students’ progress should be embedded in a comprehensive plan that is built around a clear understanding of what it is we want them to learn. In my experience, the more strategic the approach to assessment within a teaching team, the better the outcomes for the students.

If we are clear from the outset about what we want our students to learn, and if we have experience in supporting previous students to do this learning, then we will be familiar with the strengths and weaknesses we usually encounter along the way. This familiarity is encapsulated in the number of times we find ourselves writing the same things as feedback on, for example, students’ essays.

Teachers have long used pre-constructed text to deliver feedback, so the basic idea of this piece is not new. Many tactics enable markers to select feedback items from a bank of pre-written resources. Done well, this allows useful things to be said to each student without having to craft bespoke text in every case.

I want to go a little further and suggest that these things can be done in more algorithmic ways, and that, crucially, this allows us to work at scale. The key is to know what kinds of strengths and weaknesses we are likely to encounter across the cohort of learners. If we can anticipate these, we can create ways to attach the right kind of feedback, automatically, to any given student’s work.

Calls to action

For this tactic to be successful, the feedback must be high level and strategic. In the course of marking something as complex as an essay, we simply cannot automate the correction of the specific errors that might appear in the work. Rather, the kind of feedback that this strategy enables is of the “call to action” type: “revise your understanding of theory X” rather than an articulation of the specific limitations of an individual student’s understanding of theory X.

Strategic “call to action” feedback is appropriate for tasks that aim to assess high-level learning. It puts the ball in the student’s court, saying: “Work at developing your knowledge of this” instead of trying to do that work for them. The student will have opportunities to ask for further help if needed. If those opportunities are not there, then something more fundamental is wrong with the teaching strategy that cannot be corrected by any amount of feedback.

How it can work

The mechanics of automation are easy to achieve. A front-end digital form is completed for each piece of assessed work. Markers assign a rating to each of the grading criteria and perhaps enter certain standard codes (which denote routine comments such as “work on your referencing”, “ensure you provide evidence wherever possible”, “you have developed a really strong argument”, etc.).

These ratings and codes are uploaded automatically to a database (or spreadsheet) and predetermined formulae are used to generate advice that is genuinely contingent upon the performance of each student. It is then a relatively trivial matter to send this out in bespoke emails, by means of a standard set-up. Links to learning and enrichment resources that are relevant to each individual are included to encourage them to do something with the feedback.
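As an illustration only, here is a minimal Python sketch of that pipeline, assuming the marking form exports to a CSV file; the file name, column names and comment codes are all invented for the example, and the real bank of codes would come from your own marking scheme:

```python
import csv

# Hypothetical bank of standard comment codes; a real bank would be
# built up by the teaching team from the comments they find themselves
# writing repeatedly.
CODE_BANK = {
    "REF": "Work on your referencing.",
    "EVI": "Ensure you provide evidence wherever possible.",
    "ARG": "You have developed a really strong argument.",
}

def build_feedback(row):
    """Turn one row of the marking-form export into bespoke feedback text."""
    lines = [f"Dear {row['name']},", ""]
    # Attach the full comment for each standard code the marker entered.
    for code in row["codes"].split(";"):
        comment = CODE_BANK.get(code.strip().upper())
        if comment:
            lines.append(f"- {comment}")
    return "\n".join(lines)

# Assumes a CSV export with (hypothetical) columns "name" and "codes",
# where codes looks like "REF;EVI".
with open("marking_form_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(build_feedback(row))  # in practice: sent out via mail merge
```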

At the simplest level it might look like this (see the code sketch after the list):

  • Student A. Upper second grade: You did very well on X area. Here’s a link to further reading that you might find of interest.
  • Student B. Third class grade: Well done for passing in X area, but there is some evidence of misunderstanding. Revise the set material from week Z of the module.
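Expressed as code, the grade-band logic in that list might look like the following sketch. The band thresholds reflect common UK degree classifications (60 for an upper second, 40 for a third), but the boundaries, wording and placeholder link are illustrative, not a prescribed scheme:

```python
# Illustrative mapping from mark thresholds to contingent advice;
# the thresholds and message templates are placeholder values.
BAND_ADVICE = [
    (60, "You did very well on {area}. Here is a link to further "
         "reading that you might find of interest: {link}"),
    (40, "Well done for passing in {area}, but there is some evidence "
         "of misunderstanding. Revise the set material from week "
         "{week} of the module."),
]

def advice_for(mark, area, link="<reading list URL>", week="Z"):
    """Return the message for the highest band the mark reaches."""
    for threshold, template in BAND_ADVICE:
        if mark >= threshold:
            return template.format(area=area, link=link, week=week)
    return "Please arrange a tutorial to discuss this area."

print(advice_for(65, "X area"))  # Student A: upper second
print(advice_for(45, "X area"))  # Student B: third class
```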

When a team starts working on its feedback in this way, we find that the main constraint on the advice that can be constructed is the team’s imagination. Is it as good, in an absolute sense, as a tutor writing bespoke feedback to every student? Of course not. But it is highly effective when it comes to making the best use of available teaching resources.

Context

I’m not suggesting that this algorithmic approach be used in relation to all types of assessment tasks, merely that it has its place. It is particularly useful in the case of essay-based exams, and it comes into its own when we need to do these things at scale. In this piece, I am thinking about the specific challenges of working with large cohorts of students.

It is important to note that this does not entail adding more to the long list of things tutors have to do. Instead, it requires a reorganisation of when things are done and front-loads some of the effort. Indeed, in our experience, whatever extra work this approach adds upfront is balanced by savings of time and effort at the back end of the process.

Interestingly, it’s at this back end, when the marking has just been done, that teachers are best placed to plan how the feedback might be better shaped next time round. It only takes a few iterations of this cycle to end up with a set of marking criteria and associated feedback, made fully available to students, that are highly sensitised to the assessment needs of a unit of learning. And that is to everyone’s benefit.

In one important respect, this is rocket science. I assume that when designing a rocket, one plans, first of all, where it is intended to land. In most other respects, these ideas are simply good common sense. But, as is often the case with common sense, the sense turns out to be not as common as we might like.

Andy Grayson is an associate professor in psychology at Nottingham Trent University. He has worked in higher education for more than 30 years and provides leadership on learning, teaching and assessment.

