Last changed 14 Feb 2013. Length about 4,000 words (26,000 bytes).
(Document started on 28 Jan 2012.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/rap/fprompt.html. You may copy it.


Prompted student processing of feedback

By Steve Draper,   Department of Psychology,   University of Glasgow.

Summary: Brief outline of the practice, and indications of success

Following a suggestion from David Nicol, I now get my tutees to write brief answers to a set of questions I give them about written feedback from me, mostly about what they will do because of it. I summon them to a meeting and hand over my written feedback; after they have read it, I get them to fill in my prompt sheet (which takes about 5 minutes); and then I go round the table asking each one what they have written.

This elicits much more discussion about my feedback than I ever managed before; and their answers show a) that they have processed the feedback, and b) what actions they intend to take.

(It also shows me that they took lessons from the document as a whole, not just the bit addressed to them individually. And it shows me when my comments haven't been enough for a student: so I can remedy that on the spot.)

It makes them realise what they have learned from this feedback, and hence that they have learned from feedback. This is likely to improve the ratings they give about feedback in the NSS (National Student Survey). More importantly, it is likely to give them the conscious idea that learning from feedback is part of learning on the course: and so is worth seeking out, worth reading when it is available, and worth formulating conclusions for action from it.

It suggests that it is not enough for a teacher to provide feedback: there is also the vital step of the learner interpreting the feedback into actions. And that often, this requires some extra prompt.

Motivation

Here are some of the things I was already doing in my personal practice with my tutorial groups (of about 6 students):

Yet disappointingly, not a lot of discussion happened.

Thus my motivations for trying something new, and this kind of tactic in particular, were:

  1. I had failed to get good discussion about returned feedback to happen, and wanted it to.
  2. Learners (my tutees anyway) seemed just not to be thinking about the feedback, even though they turned up to meetings and read the feedback. Their original work had faded from both their memory and their to-do list, and reading even extensive feedback was not enough to make them think about it actively.
  3. I wanted to address a new slogan:

Slogan

There is no point in giving feedback to a learner unless the learner acts on it: does something concrete and different because of it.

Evidence / evaluation of success

First trial

I got the first (small) group I tried it on to fill in a tiny evaluation form, and the results support my perception of the worth of this. Here's my interpretation of the data.

All valued the oral discussion around the feedback process as highly as the personal written feedback, except one person who gave the discussion a 4 and the written feedback a 5.

As to writing out answers on my prompt sheet: two were neutral (but not negative) as to whether it was worth the effort, while three said it was "definitely worth it". Similarly, the same two found writing it much less valuable than the other components of the process, while the other three saw it as similarly valuable.

Given that they highly valued the discussion, but that in the past I didn't have the skill to create one by other means, I'm now convinced that getting them to fill in the proforma was definitely important even for the students who didn't think it was valuable in itself, at least given my level of tutoring skill. I think the reason discussion about feedback has been hard to achieve is that by the time it is written and they have time for a handback session, the assignment has receded from their minds. They have to do something active to bring it back, and just reading a page or two of comments doesn't do that by itself.

Second trial

Before I started using the prompt sheets, even very good students would say, after receiving my feedback, things like: "that's interesting, but I don't think it will be relevant to my next assignment, which will be marked by someone else". Now they don't say that, and have little trouble filling in on the sheet things they will do differently in the light of the feedback.

(It is possible that when it actually came to the next assignment, they did do things differently anyway. But changing their conscious view of the value of feedback has in itself some positive consequences which won't happen without a prompt like this: our NSS scores, and more importantly the likelihood of the student bothering to seek out and read feedback available to them.)

Some of them said how useful it was to get not just negative comments, but positive comments telling them what to make sure they repeated. I've always given such comments, but this is the first time anyone said so: I interpret that as evidence of real processing of my feedback.

As before, or perhaps more so, they said it was worth filling in the form. One commented that it made her actually process the feedback, implying that normally she wouldn't have done so.

The discussion, as we went round the table hearing what each student had put as an answer to each question in turn, brought out marked differences between students on some aspects, which was interesting for all of us.

What can I conclude?

The activity (prompted processing of feedback) is:

Current recipe

Further discussion of the procedure / practice

A major underlying issue has been that by the time written feedback is ready, the students are focussed on some other deadline, another piece of work. This is at least as true of the better students. They all look at the mark rather than the formative feedback, probably (so it seems, given what they said when I asked about this) in the spirit of "Is that the fire alarm going off? [a very bad mark] No? OK, back to my current work." This shows up in the difficulty of negotiating a date for a feedback discussion, and in the way they turned up without a copy of their own work, despite my request. This situation led me over some years towards withholding feedback until they see me face to face, and so on.

But in fact in this case, one student couldn't come, and I reluctantly emailed her both the feedback and the prompt sheet to fill in, and booked an appointment. However she turned up willingly, had filled in the prompt sheet, and found the process just as useful as the others did. So it may be that the prompt sheet exercise reloads the issues into their minds, and that in future having them pick up both the feedback and the prompt sheet, with a meeting shortly afterwards to discuss them, would work well.

Given any noticeable gap between finishing the work and receiving the feedback (more than 1-2 days), a student won't remember much about their work. How useful then can feedback be? Do they need to re-read their whole essay when reading the feedback? For the 4,000 word pieces I have done this with, that might take them 30 minutes. This is a substantial extra piece of work: have staff planned for this in the course design? Yet how else would feedback make much sense?

At the very least, they need to have a copy of their work in front of them for reference when processing the feedback. If the department has taken their only printed copy, how likely is this to occur? For these reasons, I think it important to arrange for copies of their work to be there; and perhaps because they just don't have the concept of "feedback reception and processing" as a learning task, they have shown little reliability in bringing a copy (in contrast to their good organisation during a current assigned task). One approach, which I have adopted, is to get them to submit a digital copy; I print a spare which I use while marking, and can mark up for those things like spelling errors which are best done by comments on the script rather than in a separate place/document. I can then bring those to the meeting and hand them back for the few extra comments on them, while they double as a reference copy during the meeting.

Read the feedback and fill in the prompt sheet in advance?

I did this with one student, who still seemed to find it valuable, so it might work and save face-to-face time. But it may be that the gap between filling in the sheet and the meeting needs to be kept short (only a day or two) for the discussion to work well.

Roll out to big numbers?

I've just done it for a group of 5 or 6. What about a tutor with 50 scripts to mark, or 200?

Doing the written feedback for large numbers is best done by a "comment bank": (digitally) write out good comments on the most common issues, and refer individual students to these comments (e.g. by ticking them on individual copies of the sheet). Even with 5 students, the overlap of issues is enough to give a substantial benefit from a pool of comments; and comment banks have worked well in many large classes.
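To make the mechanics concrete, here is a minimal sketch in Python of the comment-bank idea. It is my own illustration, not a description of any existing system: the issue codes, comment texts, and the feedback_sheet function are all invented for the example.

    # A minimal comment bank: reusable comments keyed by issue code,
    # assembled into one sheet per student. All names are illustrative.
    COMMENT_BANK = {
        "refs":     "Citations need page numbers and a consistent reference format.",
        "signpost": "Signpost the argument: say early what you will conclude and why.",
        "evidence": "Each empirical claim needs a cited study to support it.",
        "keep":     "The structure of your middle section worked well: keep doing this.",
    }

    def feedback_sheet(student, issue_codes):
        """Assemble one student's sheet from the comments ticked for them."""
        lines = ["Feedback for " + student + ":"]
        lines += ["- " + COMMENT_BANK[code] for code in issue_codes]
        return "\n".join(lines)

    # The marker just records a few codes per script, then prints the sheets:
    print(feedback_sheet("Student A", ["refs", "signpost", "keep"]))

The point of the design is that the per-student cost drops to ticking codes, while each comment can be written once, carefully, and reused across however many scripts need it.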

Obviously having them all fill in prompt sheets is not a problem regardless of numbers.

How to manage discussion?

The prompt questions

My questions were these: (here they also are as a DOC and as a PDF).
  1. You were keen to know what mark I had given you.
    1. Why is that important to you?
    2. What will you do differently because of the mark? (or what would you have done differently if the mark had been a lot different?)
  2. If you had to re-edit this essay, then how would you apply my feedback to do this, if at all?
  3. How will you apply my feedback to writing your next essay?
  4. How will you apply my feedback to critiquing other students' essays in future?
  5. Re-phrase (each of) my comments on your essay in your own words: what do they mean, what did they apply to, and what future actions do they imply?
  6. Is the feedback I wrote at all useful to you personally, as far as you can tell now?

For comparison, these are the questions in Frank & Hanscomb's feedback viva:

Conclusion

We could see this learning activity as one response to the principle that feedback is no use unless learners act concretely on it. This requires actual mental work: it doesn't happen automatically, like watching a TV programme. So the prompts are structured reflection, processing of the feedback. They get that mental work to happen.

The criterion of teaching success here is what action, if any, the learner takes as a result. This method (prompting processing of feedback) both prompts learners to draw conclusions about how to act, and gives you evidence that they have at least formulated those decisions about action.

Feedback vivas also achieve all this, and furthermore they are designed to accomplish additional goals such as opening a dialogue for other purposes (e.g. pastoral, personal development) with each student, and giving them the feeling of personal attention from staff. The suggestion here (prompted feedback processing) is likely to be cheaper in staff time to implement. It may even be that no meeting is required — this remains to be investigated — although since discussion of feedback is important, the meeting may be necessary anyway.

One symptom which this tactic overcomes at least temporarily is that students otherwise often just glance at the mark, and if it is tolerable, then conclude that this issue does not merit further action (and don't look at the feedback comments). Actually we do that with many things in our lives: it is rational and necessary self-regulation of effort. This line of thinking is explored in Draper (2009); and of course in question 1 on my prompt sheet. Listening to the various student answers to this in the discussion is very illuminating.

This tactic seems effective for feedback in essay-based disciplines. How might we address the slogan's challenge in other disciplines, or with other kinds of feedback? Various bits of research, such as Mastery Learning, and Eric Yao's success in dramatically raising his pass rate, suggest that another tactic works equally well in the context of MCQ-type test feedback. If plenty of low-stakes quizzes etc. are provided, students spontaneously use these scores to estimate how well they understand elements of the course, and adjust their effort to remedy insufficient understanding. This has shown up in large increases in pass rates, and in the handful of student interviews on Eric's course that I've done.

Basically, if quizzes etc. are done on small sections of the course, e.g. fortnightly, then the score from the quiz is formative in that it flags a small area of knowledge as needing more work (no explanatory comments from tutors are needed for this, though they are welcome for other reasons). One student I interviewed ended up with an 'A', but didn't show the characteristics of sure-fire A students (being driven, coming much better prepared in knowledge, ...). He said, for instance, that when he missed a lecture he found from the next quiz that he didn't understand the material specified in the published learning objectives so well, and so he made more effort to attend. Eric's course has multiple ways for students to discover what areas they did and didn't have a good grasp of:

The perception I have of both these kinds of attempts at addressing the slogan is that students don't resent doing the activities; and once they have made the inferences about future actions, they act on them with no further prompting. But they wouldn't have made the inferences without the activities, and wouldn't have organised and done these reflection-prompting activities by themselves (e.g. set themselves quizzes, or written out their conclusions from a tutor's written feedback). So this isn't about applying threats and rewards, but about generating information about their degree of mastery, and getting them to notice it.

Companion research

This technique suggests that it is not enough for a teacher to provide feedback: there is also the vital step of the learner interpreting the feedback into actions, and often this requires some extra prompt. The above technique provides that prompt for open-ended comments.

Cath Ellis has independently developed a comparable technique for detailed marks feedback (where each student gets a mark not only overall, but for each of five marking criteria). Again using a prompt sheet, she sees the class the day after they receive their individual marks feedback, and puts up the distributions of marks for the class, from which each student can read off their approximate ranking (normative scaling). Her trial showed that students then deduce which aspect (criterion) is limiting their overall mark, and form the intention to work on it.
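To illustrate the reading-off step, here is a hypothetical sketch in Python. It is not Cath Ellis's actual materials: the criteria, marks, and function names are invented. Given the class's marks on each criterion, it shows one student their approximate percentile standing per criterion, and which criterion is most limiting their overall mark.

    # Hypothetical data: class marks (out of 20) on each marking criterion.
    class_marks = {
        "argument":  [12, 14, 9, 16, 11, 13],
        "evidence":  [10, 15, 8, 17, 12, 14],
        "structure": [13, 13, 10, 15, 12, 12],
    }
    # One student's own marks on the same criteria.
    student = {"argument": 14, "evidence": 8, "structure": 13}

    def percentile(mark, cohort):
        """Percentage of the class scoring at or below this mark."""
        return 100.0 * sum(m <= mark for m in cohort) / len(cohort)

    ranks = {c: percentile(student[c], class_marks[c]) for c in class_marks}
    limiting = min(ranks, key=ranks.get)
    for c in ranks:
        print("%-10s mark %2d, at the %3.0f%% point of the class"
              % (c, student[c], ranks[c]))
    print("Criterion most limiting the overall mark:", limiting)

The student in this example would see that "evidence" is holding them back, which is exactly the deduction the prompt sheet then asks them to act on.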

Postscript: research on changed learner behaviour

From a practical viewpoint, prompting student processing of feedback seems a success, which I'll always use now and recommend to others. However I haven't collected any direct evidence of students actually acting differently because of feedback, which would be interesting from a research point of view (as opposed to an improved-practice viewpoint). Here is a discussion looking forward to this, raising the issues that inhibit confident inferences from the promising but indirect evidence of my two trials.

One issue is that even for a very simple bit of feedback, we expect multiple actions by the learner and so in a study would need to test for all of them. E.g. you correct a misspelling in a document. Ideally, the recipient would:

Any study of the effect of feedback must take into account the fact that a lot of learning occurs simply from doing (practising) the task, even without any feedback at all from outside the learner; or with only a summative judgement (e.g. a mark, a grade) that may be enough to send the learner back for an extra round of self-critiquing (cf. Hunt, 1982). This probably means we need to compare learners who:

  1. have had no practice,
  2. have had the practice only,
  3. have had the practice and/but only got marks back each time,
  4. and those with both practice and formative comments.
This would show the additive contribution, if any, that "feedback" makes over and above solo practice.

When I look at the kind of information learners need to improve, there seem to me to be three quite different kinds: studies should look at each separately.

Feedback may, or may not, contribute to each of these; and contributions to each need to be measured separately.

Finally, there is another kind of issue: latent learning. When my students have sometimes asserted that they don't see how they could apply comments about one essay to some different future essay, it is possible they are correct. We can't really abstract lessons from a single case. Only when we see the second case are we likely to be able to see what is common and what is different, and so what might be transferred. But that doesn't mean we didn't learn from the first case something that will change how we perform on the second case. It could be that we remember the first case, but can only draw inferences (for action) from it after we have seen the second case (e.g. our second attempt at a new kind of writing task). This means, again, that we need studies measuring how performance is eventually affected, and should not necessarily believe directly what students say about their own learning from feedback.

References

  • Draper, S.W. (2009) "What are learners actually regulating when given feedback?" British Journal of Educational Technology vol.40 no.2 pp.306-315. DOI: 10.1111/j.1467-8535.2008.00930.x

  • Hunt, D. (1982) "Effects of human self-assessment responding on learning" Journal of Applied Psychology vol.67 no.1 pp.75-82.

  • MACE = Making Assessment Count Evaluation, a JISC project.

  • John Kleeman blog
