Exam Feedback for Students

Don’t get stuck on the ‘if’. Work on the ‘how’.

This post was published on Wonkhe

Photo by Akshay Chauhan

You spend a year studying this stuff, followed by weeks of progressively heightening stress revising it all.

On the morning of the exam, you gather half an hour before start time with hundreds of other students. You then endure two to three hours of agony (some of it physical – who writes anything by hand anymore?) before the invigilator tells you to put down your pen.

Six weeks later you’re told you got 58. Great. Now what?

Breakfast of champions

This scenario is not uncommon for some types of exams, especially those that adopt a traditional “answer any of the following essay questions” format. And when it happens, it is not acceptable. A student shouldn’t study a subject, submit themselves for assessment in it, and end up with so little to go on about where their learning has taken them.

They will hopefully have been getting feedback throughout the module, and they may receive some sort of generic “cohort-level” feedback on how students, in general, fared on the exam. But they really should be able to get more information about how they, as an individual, have done – and about how they might do better in the future. And they should get that information “by default” rather than having to request it as an “extra”.

Too costly?

The main reason given for not routinely providing individual feedback to students is resources. Add a mere five minutes to the marking of every essay-based exam in UK universities and you would incur a very high cost indeed – not to mention the challenge of processing everything in time for the examination boards that allow students to progress or graduate in June and July.

However, there are ways of providing highly effective individual feedback to students on any kind of assessment – including essay-based exams – not by doing more, but by doing “different”.

For example, exam markers are required to write short, criteria-driven notes about each script they mark. This is essential because there must be an audit trail for the academic judgements being made. It’s a very simple task – hardly rocket science – to replace such notes with ratings on the relevant criteria.

Giving 4 out of 5 on “knowledge of the area” really can be as informative as writing “very good knowledge of the area”. And it takes much less time to input that rating by tapping a button on a phone app, or some such, than it does to write or type it in a set of notes somewhere.
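
To make this concrete, here is a minimal sketch of what a marker’s input could look like as structured data rather than free-text notes. The criterion names, the 1-to-5 scale, and the field names are illustrative assumptions, not a prescribed scheme.

```python
# A minimal sketch of one marker's record for one script, captured as
# structured ratings rather than free-text notes. Criterion names, the
# 1-5 scale, and field names are all illustrative assumptions.
script_record = {
    "student_id": "s1234567",
    "question": 2,
    "mark": 62,  # mark awarded for this answer
    "ratings": {  # 1 (weak) to 5 (excellent) on each marking criterion
        "knowledge of the area": 4,
        "quality of argument": 3,
        "use of evidence": 4,
        "structure and clarity": 3,
    },
}
```

Captured this way, the same few taps that once produced a sentence of notes now produce data that can be aggregated, audited, and fed straight into feedback.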

Working at scale

Once we have such ratings on the relevant marking criteria, together with a grade for each essay perhaps selected from a drop-down menu, we have everything we need to process the information in ways that can easily be communicated to each individual student.

Mail merge is an ideal way of doing this. Prepare a template in advance; pull in, as field values, the score for each essay, the ratings on each criterion for each essay, and the overall grade (calculated automatically by whatever database the markers are feeding, in parallel); and you can deliver very meaningful feedback to every individual student, no matter how big the cohort, at the push of a “send mail merge” button.
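
As a sketch of how that merge step might work, the snippet below renders a feedback letter from one student’s records. The template wording, the field names, and the equal weighting of answers are all assumptions for illustration, not a description of any particular system.

```python
from string import Template

# Hypothetical feedback template; wording and fields are assumptions.
LETTER = Template(
    "Dear $name,\n\n"
    "Your overall exam grade is $overall.\n\n"
    "$per_essay\n\n"
    "General feedback for the cohort follows below.\n"
)

def render_feedback(student: dict) -> str:
    """Build one student's feedback letter from their essay records."""
    lines = []
    for essay in student["essays"]:
        ratings = ", ".join(
            f"{criterion}: {score}/5"
            for criterion, score in essay["ratings"].items()
        )
        lines.append(f"Question {essay['question']} (mark {essay['mark']}): {ratings}")
    # Overall grade as the mean of component marks -- an assumed rule;
    # a real scheme might weight answers differently.
    overall = round(sum(e["mark"] for e in student["essays"]) / len(student["essays"]))
    return LETTER.substitute(
        name=student["name"], overall=overall, per_essay="\n".join(lines)
    )
```

Looping render_feedback over the cohort and handing each letter to the mail system is then a routine scripting job, whatever the cohort size.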

The pre-prepared template can incorporate all sorts of information. It can be addressed individually to the student (there is no need to “pretend” that it isn’t a mail merge process – students fully understand how these things need to be done). And in my experience, students pay much more attention even to the general cohort-level feedback when it arrives as part of a communication addressed to them personally, accompanied by specific information about how they have fared on each of the criteria for each of their answers.

Indeed, the individual performance data helps them see how the general feedback might in fact apply to them.

Being inventive

To be honest, the places this approach can take us, in terms of useful feedback, are limited only by our capacity to imagine. So long as the markers agree on the academic judgements being made (of course) and on how these are to be notated, all sorts of information can be conveyed back to the student at no extra cost in marking time.

And in order to achieve this? All that is needed is a proper amount of preparation within the teaching and marking team in advance of the assessment. And that, in any case, is essential for good, fair, effective assessment.

And when all the marking has been done, the further processing of grades and so forth is fully automated. No need for a module leader to transcribe numbers from front-sheets of exams into an electronic form, perhaps while calculating an overall grade from the component marks for each answer. That task, in itself, can take hours for each exam. And it’s error-prone. The approach I’m advocating actually reduces error and saves time!
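
As a final sketch of that automation, suppose each marker’s marks land in one shared CSV file; the fragment below rolls them up into an overall grade per student. The file layout and the equal weighting are, again, assumptions.

```python
import csv
from collections import defaultdict

def overall_grades(path: str) -> dict[str, int]:
    """Combine marks submitted by several markers (assumed columns:
    student_id, question, mark) into one overall grade per student --
    no transcription from front-sheets, no hand calculation."""
    marks = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            marks[row["student_id"]].append(int(row["mark"]))
    # Equal weighting of answers is an assumption; adapt to the scheme.
    return {sid: round(sum(ms) / len(ms)) for sid, ms in marks.items()}
```

One worked number makes the point: marks of 55, 58 and 61 on three answers average to 58 – exactly the figure the student was given, but now accompanied by the criterion ratings that explain it.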

No type of assessment should be exempt from the obligation to provide feedback to the individual learner. Such feedback is an imperative. And it needn’t be costly. It’s just a case of working inventively on the “how” rather than getting stuck on the “if”.
