Web site logical path: [www.psy.gla.ac.uk] [~steve] [rap] [this page]
This elicits much more discussion about my feedback than I ever managed before; and their answers show a) that they have processed the feedback, and b) what actions they intend to take.
(It also shows me that they took lessons from the document as a whole, not just the bit addressed to them individually. And it shows me when my comments haven't been enough for a student: so I can remedy that on the spot.)
It makes them realise what they have learned from this feedback, and hence that they have learned from feedback. This is likely to improve the NSS ratings they give about feedback. More importantly, it is likely to give them the conscious idea that learning from feedback is part of learning on the course: and so is worth seeking out, worth reading when it is available, and worth formulating conclusions for action from it.
It suggests that it is not enough for a teacher to provide feedback: there is also the vital step of the learner interpreting the feedback into actions. And this often requires some extra prompt.
Yet disappointingly, not a lot of discussion happened.
Thus my motivations for trying something new, and this kind of tactic in particular, were:
All but one valued the oral discussion around the feedback process as highly as the personal written feedback; the exception gave the discussion a 4 and the written feedback a 5.
As to writing out answers on my prompt sheet: two were neutral (but not negative) about whether it was worth the effort, while three said it was "definitely worth it". Similarly, those two found writing it much less valuable than the other components of the process, while the three saw it as similarly valuable.
Given that they highly valued the discussion, but that I had previously lacked the skill to create one without this support, I'm now convinced that getting them to fill in the proforma was definitely important even for the students who didn't think it was valuable in itself, at least given my level of tutoring skills. I think the reason discussion about feedback has been hard to achieve is that by the time it is written and they have time for a handback session, the assignment has receded from their minds. They have to do something active to bring it back, and just reading a page or two of comments doesn't do that by itself.
(It is possible that actually when it came to the next assignment, they did do things differently. But changing their conscious view of the value of feedback has in itself some positive consequences, which won't happen without a prompt like this: our scores on NSS, and more importantly, the likelihood of the student bothering to seek out and read feedback available to them.)
Some of them said how useful it was to get not just negative comments, but positive comments that told them what to ensure they repeated. I've always done this, but this is the first time this was said: I interpret it as evidence of real processing of my feedback.
As before, or perhaps more so, they said it was worth filling in the form. One commented that it made her actually process the feedback, implying that normally she wouldn't have done so.
The discussion, as we went round the table hearing what each student had put as an answer to each question in turn, brought out marked differences between students on some aspects, which was interesting for all of us.
But in fact in this case, one student couldn't come and I reluctantly emailed her both the feedback and the prompt sheet to fill in, and booked an appointment. However she turned up willingly, had filled in the prompt sheet, and found the process just as useful as the others had. So it may be that the prompt sheet exercise reloads the issues into their mind, and that in future I could have them pick up both the feedback and the prompt sheet, with a meeting shortly afterwards to discuss them.
Given any noticeable gap between finishing the work and receiving the feedback (more than 1-2 days), a student won't remember much about their work. How useful then can feedback be? Do they need to re-read their whole essay when reading the feedback? For the 4,000 word pieces I have done this with, that might take them 30 minutes. This is a substantial extra piece of work: have staff planned for this in the course design? Yet how else would feedback make much sense?
At the very least, they need to have a copy of their work in front of them for reference when processing the feedback. If the department has taken their only printed copy, how likely is this to occur? Perhaps because they just don't have the concept of "feedback reception and processing" as a learning task, they have shown little memory for bringing a copy themselves (in contrast to their good organisation during a currently assigned task); for these reasons, I think it important to arrange for copies of their work to be there. One approach I have used is to get them to submit a digital copy; I print a spare which I use while marking, and can mark it up for those things, like spelling errors, which are best done by comments on the script rather than in a separate place or document. I can then bring those to the meeting and hand them back for the few extra comments on them, while they double as a reference copy during the meeting.
Doing the written feedback for large numbers is best done by a "comment bank": (digitally) write out good comments on the most common issues, and refer individual students to these comments (e.g. by ticking them on individual copies of the sheet). Even with 5 students, the overlap of issues is enough to give a substantial benefit from a pool of comments; and comment banks have worked well in many large classes.
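The comment bank workflow above can be sketched in a few lines of code. This is a minimal illustration with hypothetical names and comments, not the author's actual system: the bank holds one well-written comment per common issue, and each student's sheet is assembled from the comment IDs "ticked" for them plus any individual remarks.

```python
# Hypothetical comment bank: one good comment per common issue,
# written out once and reused across the class.
comment_bank = {
    "C1": "The argument needs an explicit thesis statement in the introduction.",
    "C2": "Cite primary sources rather than textbook summaries.",
    "C3": "Good use of counter-arguments; keep doing this.",
    "C4": "Each paragraph should make a single point.",
}

# Per student: just the ticked comment IDs, plus any individual remarks.
students = {
    "Student A": {"ticked": ["C1", "C3"], "individual": "See p.3 for a marked example."},
    "Student B": {"ticked": ["C2", "C4"], "individual": ""},
}

def feedback_sheet(name):
    """Assemble one student's written feedback from the bank."""
    entry = students[name]
    lines = [f"Feedback for {name}:"]
    lines += [f"- {comment_bank[cid]}" for cid in entry["ticked"]]
    if entry["individual"]:
        lines.append(f"- {entry['individual']}")
    return "\n".join(lines)

print(feedback_sheet("Student A"))
```

The benefit scales as the source text suggests: each common issue is written out well once, and per-student effort reduces to ticking the relevant entries and adding a short individual note.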
Obviously having them all fill in prompt sheets is not a problem regardless of numbers.
How to manage discussion?
For comparison, these are the questions in Frank and Hanscomb's feedback viva:
The criterion of teaching success here is: whether and what action the learner takes as a result. This method (prompting processing of feedback) both prompts learners to draw conclusions about how to act, and gives you evidence that they have at least formulated those decisions about action.
Feedback vivas also achieve all this, and furthermore they are designed to accomplish additional goals such as opening a dialogue for other purposes (e.g. pastoral, personal development) with each student, and giving them the feeling of personal attention from staff. The suggestion here (prompted feedback processing) is likely to be cheaper in staff time to implement. It may even be that no meeting is required — this remains to be investigated — although since discussion of feedback is important, the meeting may be necessary anyway.
One symptom which this tactic overcomes at least temporarily is that students otherwise often just glance at the mark, and if it is tolerable, then conclude that this issue does not merit further action (and don't look at the feedback comments). Actually we do that with many things in our lives: it is rational and necessary self-regulation of effort. This line of thinking is explored in Draper (2009); and of course in question 1 on my prompt sheet. Listening to the various student answers to this in the discussion is very illuminating.
This tactic seems effective for feedback in essay-based disciplines. How to address the slogan's challenge in other disciplines / with other kinds of feedback? Various bits of research such as Mastery Learning, and Eric Yao's success in dramatically raising his pass rate, suggest that another tactic works equally well in the context of MCQ type test feedback. If plenty of low-stakes quizzes etc. are provided, students spontaneously use these scores to estimate how well they understand elements of the course, and adjust their effort to remedy insufficient understanding. This has shown up in large increases in pass rates, and in the handful of student interviews on Eric's course that I've done. Basically, if quizzes etc. are done on small sections of the course e.g. fortnightly, then the score from the quiz is formative in that it flags a small area of knowledge as needing more work (no explanatory comments from tutors are needed for this, though welcome for other reasons). One student I interviewed ended up with an 'A', but didn't show the characteristics of sure-fire A students (being driven, coming much better prepared in knowledge, ...). He said things such as how when he missed a lecture he found from the next quiz that he didn't understand the material specified in the published learning objectives so well, and so he made more effort to attend. Eric's course has multiple ways for students to discover what areas they did and didn't have a good grasp of:
The perception I have of both these kinds of attempts at addressing the slogan is that students don't resent doing the activities; and once they have made the inferences about future actions, they act on them with no further prompting. But they wouldn't have made the inferences without the activities, and wouldn't have organised and done these reflection-prompting activities by themselves (e.g. set themselves quizzes, or written out their conclusions from a tutor's written feedback). So this isn't about applying threats and rewards, but about generating information about their degree of mastery, and getting them to notice it.
Cath Ellis has independently developed a comparable technique for detailed marks feedback (where each student gets a mark not only overall, but for each of five marking criteria). Again using a prompt sheet, she sees the class the day after they receive their individual marks feedback; puts up the distributions of marks for the class from which each student can read off their approximate ranking (normative scaling). Her trial showed that they then draw the deductions about which aspect (criterion) is limiting their overall mark, and form the intention to work on it.
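The inference her prompt sheet supports can be sketched numerically. This is a hypothetical illustration (invented marks, not her data): given one student's mark per criterion and the class distributions, the student can read off their approximate standing on each criterion and spot the one limiting their overall mark.

```python
# Hypothetical class marks (out of 20) for each of five marking criteria.
class_marks = {
    "Argument":    [12, 14, 15, 16, 17, 18],
    "Evidence":    [10, 12, 13, 15, 16, 18],
    "Structure":   [11, 13, 14, 15, 17, 19],
    "Style":       [13, 14, 15, 16, 17, 18],
    "Referencing": [9, 11, 13, 14, 16, 17],
}

# One (hypothetical) student's marks on the same criteria.
my_marks = {"Argument": 16, "Evidence": 11, "Structure": 15,
            "Style": 16, "Referencing": 14}

def standing(criterion, mark):
    """Fraction of the class scoring at or below this mark (approximate ranking)."""
    marks = class_marks[criterion]
    return sum(m <= mark for m in marks) / len(marks)

for crit, mark in my_marks.items():
    print(f"{crit}: mark {mark}, at or above {standing(crit, mark):.0%} of the class")

# The limiting criterion: the one with the lowest relative standing,
# hence the aspect to form an intention to work on.
limiting = min(my_marks, key=lambda c: standing(c, my_marks[c]))
print("Criterion to work on:", limiting)
```

With these invented numbers the student is mid-pack on four criteria but near the bottom on "Evidence", which is exactly the deduction the prompt sheet is designed to elicit.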
One issue is that even for a very simple bit of feedback, we expect multiple actions by the learner and so in a study would need to test for all of them. E.g. you correct a misspelling in a document. Ideally, the recipient would:
Any study of the effect of feedback must take into account the fact that a lot of learning occurs simply from doing (practising) the task, even without any feedback at all from outside the learner; or with only a summative judgement (e.g. a mark, a grade) that may be enough to send the learner back for an extra round of self-critiquing (cf. Hunt, 1982). This probably means we need to compare learners who:
When I look at the kind of information learners need to improve, there seem to me to be three quite different kinds: studies should look at each separately.
Finally, there is another kind of issue: latent learning. When my students have sometimes asserted that they don't see how they could apply comments about one essay to some different future essay, it is possible they are correct. We can't really abstract lessons from a single case. Only when we see the second case are we likely to be able to see what is common and what is different, and so what might be transferred. But that doesn't mean we didn't learn from the first case something that will change how we perform on the second case. It could be we remember the first case, but can only draw inferences (for action) from it after we have seen the second case (e.g. our second attempt at a new kind of writing task). This means, again, we need studies that measure how performance is eventually affected, and should not necessarily believe directly what students say about their own learning from feedback.