A Technical Memo
Stephen W. Draper
Department of Psychology
University of Glasgow
Glasgow G12 8QQ U.K.
WWW URL: http://www.psy.gla.ac.uk/~steve
It is motivated by three issues:
I propose five types, or levels, of feedback. Only the fifth seems to require a human tutor, and not always even then. In practice the single biggest source of feedback is the learner's own internalised self-judgement.
Below is a sequence of types of feedback that a learner might get. Feedback is only one of the reasons for interaction (as opposed to learners and teachers acting in independent roles). Most feedback can probably be delivered without real interaction between learners and teachers, although the last type often does require a human and expert teacher.
This is a separate item because the last item concerned only correct principles, but this one concerns misconceptions and, in general, the negative reasons why apparent connections between this activity and other principles are mistaken. Thus the previous one is self-contained and context-free, while this one is open-ended and depends on the learner's prior knowledge. This is only rarely needed — when the learner has not just made a slip or mistake but is in the grip of a rooted misconception — but crucial when it is.
This is the only type of feedback that cannot be easily canned and automated; and even so, common misconceptions can be, should be, and often are addressed in paper and computerised teaching materials.
Not only can you then "get away" with giving only the lowest level of feedback, but it is usually best to do this because it prompts the learner to re-process the rules and use them correctly to generate both the right answer and its justification, as Geoff Isaacs reminds us (see below).
So essentially the same point relates to:
Explanations are about bridging the gap between concepts and practice. So if the pedagogical task is a physical one, then explanation is usually about "reflective" links up to the conceptual level that can justify physical procedures. However it is conceivable that you could try to explain a personal-level procedure in terms of personal concepts and experience (e.g. "I always do that to ones that look like this"): but this phenomenological approach is by definition hard to express and will lack explanatory power from the hearer's viewpoint unless they can recognise their own experience in your description of your experience. Because of this, most explanation tasks link to the public level, and thus may bridge the Laurillard-levels.
Behaviour is when you just do the actions (without recalculating or justifying the plan) e.g. catch a ball; prediction is asking only for a predicted outcome (state, goal) e.g. where the ball will go, and could also be done without conceptual underpinning; explanation is about the plan generation, about the principles that can justify an action procedure e.g. why the ball went where it did. Causes of divergence between the three types of pedagogical task are multiple: a) the split between the Laurillard-levels: personal observational knowledge may not be integrated with conceptual knowledge; b) a learner might have the right concepts but just be unskilled at articulating them, at the descriptive language i.e. need practice at that pedagogical task rather than need new knowledge at the conceptual level. Explanation must normally come from the public level, behaviour must normally come from the private level; but prediction could come from either: from calculation or from observation of past cases.
Learners generate a lot of feedback internally, by internal judgements of success. Our own internal criteria for understanding, and for adequate argument and explanation, supply it. For instance, we often know we have typed the wrong letters or thrown a poor ball before we see the result. At the conceptual level, most people learn a lot from writing a paper or essay without any feedback from anyone else at all. That is because they have internalised many of the relevant standards and can judge the quality of their own output quite well. In fact this internalised standard is one of the main aims of learning and teaching, whether or not it gets explicitly mentioned. A given individual may have a better internalisation of some of the types of information above than others e.g. be better at judging that the essay is poor than at identifying why it is poor.
A human teacher is often used to provide feedback of all types: indeed that is one of their chief functions.
My intermediate category of information source, the "environment", is itself very diverse. At one end it includes seeing directly whether you missed the target and by how much, and at the other a sophisticated machine may give you a lot of diagnosis (e.g. error messages and debugger displays in programming environments, doppler readouts of how hard you hit a ball, etc.).
The reason for distinguishing a human teacher from the otherwise heterogeneous category of "environment" is simply the practical importance of the question of whether all feedback could be given by computers, dispensing with the need for teachers to perform this function. It is easy to see how, in principle, to automate types 1-4 of feedback information, but type 5 is probably very difficult or impossible to automate because it depends on (mistaken) links by the learner between the subject matter and some other knowledge: and that is too open-ended a set to predict easily. (However it is possible and probably desirable to automate the most common cases of type 5: common misconceptions.)
Her discussion makes it clear that her distinction is between feedback at the personal experience level vs. feedback at the level of descriptions. Again, this is nothing to do with necessity, nor is it to do with the source of the information (as we can now make machines give feedback at the level of descriptions). I therefore reject her terminology and definitions. But her real point here is important, as her discussion makes clear. It concerns how getting feedback at one of the levels about experience at the other level raises extra problems for the learner (although ultimately it may be good if they can master all the connections): it is "decontextualised" or "not situated". This is to do with my discussion above, on how the levels usually interact over behaviour vs. explanation. In conclusion, I would say that her real point is the issue of whether the feedback information arrives in terms of the level of descriptions or the (sensori-motor) level of personal experience. Thus although she discusses it as an issue of the source of the information, it is actually an issue of the type of information.
Mere positive reinforcement is neither sufficient nor necessary: where there is a definite answer, success itself is enough. If not, then you shouldn't just encourage but should say which bits are good and why. This is implicitly covered in the above scheme. Success as well as failure should be reported, and diagnosis should say (type 3 feedback) which bits of the learner output are good as well as which are bad. Simply saying one bit is bad can imply that it is all bad, and if this is false then the wrong information is conveyed. If there is any doubt, then the feedback should say "it is all fine except ...". This is necessary to convey the type 2 information about what the correct solution is.
However there are important problems and exceptions here. Frequently in any design — a maths exercise, a computer program, a structured argument in an essay — a mistake leads to many consequent "errors" in the sense of parts that do not belong in a correct solution, yet which follow logically given the mistake. Clearly the right feedback is to identify the key leading mistake, and then be silent about the consequences (which will be retracted when the lead mistake is corrected). If the learner solution is represented as a tree (a top down decision tree), then feedback should identify the highest node that differs from the correct solution and comment only on that, not on its subordinate nodes. Only if there are errors in separate parts of the tree should more than one mistake be reported on.
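This "highest differing node" rule can be sketched in code. The tree representation and names below are illustrative assumptions, not from the memo: each node is a (label, children) pair, and the comparison reports only the topmost node in each branch that differs, staying silent about its subordinates.

```python
# Hypothetical sketch of the "highest differing node" feedback rule:
# compare a learner's solution tree to the correct one top-down, and
# report only the topmost differing node in each branch. Nodes below a
# reported mistake are ignored, since they may be "wrong" only as
# logical consequences of that mistake.

def diff_feedback(correct, learner, path="root"):
    """Return the paths of the highest nodes where the trees differ."""
    # Node format (an assumption of this sketch): (label, [children]).
    c_label, c_children = correct
    l_label, l_children = learner
    if c_label != l_label or len(c_children) != len(l_children):
        return [path]          # report this node; say nothing about its subtree
    mistakes = []
    for i, (c, l) in enumerate(zip(c_children, l_children)):
        mistakes.extend(diff_feedback(c, l, f"{path}.{i}"))
    return mistakes            # one report per independent branch in error

correct = ("plan", [("step-A", []), ("step-B", [("B1", []), ("B2", [])])])
learner = ("plan", [("step-A", []), ("step-X", [("X1", []), ("X2", [])])])
print(diff_feedback(correct, learner))   # reports only 'root.1', not X1/X2
```

Only separate branches with independent errors would each produce a report, matching the memo's rule that more than one mistake is mentioned only when the errors lie in separate parts of the tree.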
However I've come to feel that, while the first two characteristics ought to be present always, it's a genuine pedagogical (?andragogical?) decision as to whether to provide the third, when to provide it and, of course, in what form (for example, when do you give some directions — "read chapter 2 in the text" — and when ask pointed questions — "Can you think of any reasons, other than the one you included, as to why the Romans behaved in this way?").
In some cases, for example, you, the teacher, might want students to work out corrective action for themselves (no corrective feedback), or you might want them to try before you give corrective feedback if necessary (delayed corrective feedback — the "when" issue).
The Teaching and Educational Development Institute (T.E.D.I.)
Feedback is a technical concept in control theory, and it is wider than just how it is implemented for op-amps. Its application to amplifiers is in some ways too simplified for us to get the most out of it as a metaphor for learning and teaching. Here is an expansion that I think better fits our discussion on feedback types in education.
I understand it is possible to automate this, using a pink noise source and electronics that can make the corrections. The point here, though, is that it is still a feedback system, but one that uses not just electrical feedback but a microphone to include the room's acoustics in the loop; and this allows me to identify these different system components of the general feedback loop.
Now here we can see that only the learner can do the last stage of making the
correction. We were discussing the cases where the teacher generates the
feedback signal. The issue is about the cross-over between teacher and
learner, which might be before, during or after generating the correction
signal. E.g. the teacher might generate the correction signal: tell the
learner exactly what they should have done. Or alternatively, they may just
say something minimal about there being an error, leaving it to the learner to
extract what the right answer was, where the difference is, and how the input
should be modified as a consequence. A complete list of the functions that
have to be performed in this part of the loop by some combination of teacher
and learner is:
**Pick up the "output" and interpret it (cf. using a microphone to turn the sound in the room back into an electrical signal, reading an essay and interpreting what it says (not just what the student may have meant but didn't actually write))
**Supply a standard against which to judge it (cf. ideal flat frequency response)
**Calculate where the differences are
**Go from the differences to corrective actions. In the case of setting up the graphic equaliser, these corrective actions are adjusting the settings, NOT just adjusting the input signal to the amplifier. This is analogous to a learner adjusting their conceptions, NOT just changing what they wrote in the exercise.
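The four functions above can be sketched as one loop. This is a minimal illustration under assumed names (the word-matching "standard" is a stand-in, not a serious essay model); the point is that each function is a separate stage that can be handed over from teacher to learner as scaffolding is withdrawn.

```python
# A hedged sketch of the four feedback-loop functions listed above.
# All names and the toy "standard" are illustrative assumptions; the
# structure (four separable stages) is what the memo describes.

def pick_up(output):
    """1. Pick up the output and interpret it (cf. the microphone)."""
    return output.lower().split()

def standard():
    """2. Supply a standard to judge against (cf. ideal flat response)."""
    return ["a", "clear", "thesis", "with", "evidence"]

def differences(interpreted, target):
    """3. Calculate where the differences are."""
    return [w for w in target if w not in interpreted]

def corrective_actions(diffs):
    """4. Go from differences to corrective actions -- adjust the
    settings (the learner's conceptions), not just this one output."""
    return [f"work on: {d}" for d in diffs]

draft = "A thesis with evidence"
actions = corrective_actions(differences(pick_up(draft), standard()))
print(actions)   # the correction signal handed back to the learner
```

A teacher might perform all four stages, or only the first three, or only the first — the cross-over point discussed above is exactly where in this pipeline the learner takes over.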
I guess my view is that we wish learners to end up with the whole feedback system internalised so that they themselves provide all those functions, using internal standards against which they compare each of their performances and can correct them. A generalised teaching strategy here is that of "scaffolding": doing a lot of the functions in the feedback loop for them at first, and progressively withdrawing that support. Besides being best for the learner, it is also much quicker and easier for the teacher just to provide the feedback signal (how it sounds to another person) and/or how it measures against target standards. These are also functions we continue to benefit from others providing to some extent even when we are quite skilled, just as a hifi system benefits from measurements of how it sounds in some new room.
From: John Farquhar <jxf18@PSU.EDU>
To: Multiple recipients of list ITFORUM <ITFORUM@UGA.CC.UGA.EDU>
In 1994, I performed an extensive literature review in this area.
Bangert-Drowns, Kulik, Kulik, and Morgan (1991) provided the most complete analysis, reviewing 53 studies that compared some form of immediate feedback to delayed feedback in test-like events. The studies covered a variety of instructional applications from classroom quizzes to programmed materials and CAL. Their general conclusion was that in typical classroom settings immediate feedback is more effective. Bangert-Drowns et al. also found small to moderate gains for the use of immediate feedback over delayed feedback in CAL materials; however, the number of CAL studies in their analysis was small.
Despite these studies, it seems reasonable (or at least consistent with cognitive theory) that a delay in feedback during instructional situations and under other specified conditions can be beneficial.
Merrill, Reiser, Ranney, and Trafton (1992) describe an interesting study on the actions of human tutors. According to Merrill et al., human tutors tend not to provide immediate feedback for every step of a complex problem. Instead, they modulate the timing of their responses based upon the importance of the error made.
My own research has investigated the timing of feedback within an Intelligent Simulation (Farquhar & Regian, 1994; Farquhar, 1995). My LOADER program is capable of determining the criticality of an error, then providing either immediate or delayed feedback based upon that error type. I have found that the effectiveness of delayed feedback under these circumstances is dependent on the type of feedback provided (and probably the present level of skill of the learner as well). I also believe the effectiveness of delayed feedback is dependent upon the type of knowledge (with procedural knowledge being more appropriate).
My present hypothesis (which, to my knowledge, has not been tested) is that the effectiveness of delayed over immediate feedback in CAL depends upon the type of knowledge (declarative or procedural), type of feedback (notification or elaborative), type of error (critical or non-critical), and present learner skill level (low or high) with delayed feedback being more effective under the conditions of procedural knowledge, elaborative feedback, non-critical errors, and low skill level.
For feedback, some of the studies are more than 10 years old, but because many textbooks and teachers have not moved away from behaviorism (not especially applicable to higher order human cognition), most people need to start more than 10 years back. I still hear (much too often) that technology-based learning is great because learners get immediate feedback, and developers provide immediate feedback even when they shouldn't because they have literally NO idea how feedback works for learners. Toolbook by Asymetrix was one of the absolute worst because their immediate feedback for their multiple guess items was instant and their idea of 'delayed' feedback was at the end of each question - pfui. Clueless programmers can make it harder for people to learn; it's important that instructional feedback is developed correctly.
See work by Dr. Raymond Kulhavy at Arizona State. He demonstrated in the late 1960s that feedback does NOT function as reinforcement (as was commonly believed then, and for some reason this myth still survives, even in some textbooks); that instructional feedback should, in some cases, be delayed rather than immediate, and that response certitude plays an important role in the effectiveness of some types of feedback. Later Dr. Kulhavy and Dr. William Stock wrote the articles detailing the feedback model. The article describing the model also describes appropriate feedback timing and level of detail. That article describes the confusion that persists for some around the difference between instructional feedback and behavioral reinforcement and notes that even some college textbooks are perpetuating the myths and superstitions around human information processing and learning theory.
JennyLynn Werner, Ph.D.
Kulhavy, R.W. & Anderson, R.C. (1972). "Delay-retention effect with multiple-choice tests" Journal of Educational Psychology 63: 505-512.
Kulhavy, R.W. (1977). "Feedback in written instruction" Review of Educational Research 47(1): 211-232.
Kulhavy, R.W. & Stock, W.A. (1989). "Feedback in written instruction: The place of response certitude" Educational Psychology Review 1(4): 279-308.
c) to teach/support the learner's ability to self-assess on these issues. => so doing student self- and peer-assessment should be central. They must learn to internalise and do this judgement process.
Criticism without remedies being proposed or obvious is of little use: alternative, better actions should be proposed. (You can't give less than a mark of 100% unless you show, preferably by example, what a better answer would be.)
The tactic of always stating the best and worst feature of a piece of work (or of each aspect of a piece of work) addresses a lot of issues: of not being merely negative, of focussing attention on specifics.
There are five levels (or seven sublevels) or types of feedback. A general principle is to give the lowest level sufficient to "unstick" the learner.
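The principle of giving the lowest sufficient level can be sketched as an escalation loop. The level labels and the "unstuck" test below are illustrative placeholders, not the memo's definitions: the point is simply that feedback is offered in order and stops at the first level that works.

```python
# Sketch of "give the lowest level sufficient to unstick the learner":
# offer the feedback levels in ascending order and stop at the first
# one after which the learner can proceed. The level wordings and the
# unstick predicate are assumptions for illustration only.

LEVELS = [
    "1: right/wrong only",
    "2: the correct answer",
    "3: where your answer differs",
    "4: explanation of the relevant principle",
    "5: diagnosis of the underlying misconception",
]

def give_feedback(unstuck_after):
    """Escalate one level at a time; stop once the learner is unstuck.

    `unstuck_after(level)` stands in for observing the learner's
    response to feedback at that level."""
    given = []
    for level in LEVELS:
        given.append(level)
        if unstuck_after(level):
            break
    return given

# e.g. a learner who only needed to see where their answer differed:
print(give_feedback(lambda lvl: lvl.startswith("3")))
```

A learner in the grip of a misconception would exhaust levels 1-4 and reach level 5 — the one level the memo argues is hard to automate.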
The biggest single source of feedback is from the learner themselves: from their internalised judgements. That is why solo practice (e.g. at music or maths or programming) is so useful. In fact a major educational aim is to equip the learner in each topic to be their own feedback: to be able to check their own calculations and to judge how well written their own essays are. In maths we learn how to check our answers by an independent method or estimate. In computer science we learn how to design test procedures for our own code. In English we learn how to critique our own and others' writing.
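The self-checking habit described above — checking an answer by an independent method — can be made concrete in code. This is an illustrative sketch, not from the memo: the same person writes both the calculation and an independent check on it, so the check acts as internalised feedback.

```python
# Illustration of checking an answer by an independent method: the
# check does not repeat the calculation, but tests properties any
# correct mean must have. Function names are assumptions of this sketch.

def mean(xs):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(xs) / len(xs)

def check_mean(xs, m):
    """Independent check: a correct mean lies between min and max,
    and the deviations from it sum to (nearly) zero."""
    return min(xs) <= m <= max(xs) and abs(sum(x - m for x in xs)) < 1e-9

data = [2.0, 4.0, 9.0]
m = mean(data)
print(m, check_mean(data, m))   # the learner's own internal feedback
```

The check is deliberately weaker than the calculation (many wrong answers would also lie in range), which mirrors the memo's point that an estimate or independent method gives feedback without a teacher, not a proof of correctness.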
When external feedback is needed — for convenience and particularly at the start of learning a topic — most types can easily be provided without a human teacher (e.g. answers in the back of the textbook; explanations of answers on demand in software). The main need for human teachers for giving feedback is:
Thus action is not necessary for all learning; and external feedback is not necessary for all learning from action. So feedback is not strictly essential for learning; but it is widely and pervasively important and it may be sensible to plan for it explicitly as an independent aim.
Farquhar, J.D. & Regian, J.W. (1994). The type and timing of feedback within an intelligent console-operations tutor. Paper presented at the 1994 Conference of the Human Factors and Ergonomics Society.
Farquhar, John D. (1995) A Summary Of Research With The Console-Operations February 3, 1995.
Laurillard, D. (1993) Rethinking university teaching: A framework for the effective use of educational technology (Routledge: London).
Merrill, D.C., Reiser, B.J., Ranney, M., & Trafton, J.G. (1992). Effective tutoring techniques: A comparison of human tutors and intelligent tutoring systems. The Journal of the Learning Sciences, 2(3), 277-305.
David Nicol & Debra Macfarlane-Dick (2004) "Rethinking Formative Assessment in HE: a theoretical model and seven principles of good feedback practice" http://www.heacademy.ac.uk/assessment/ASS051D_SENLEF_model.doc
David Nicol & Debra Macfarlane-Dick (2006) "Formative assessment and self-regulated learning: A model and seven principles of good feedback practice" Studies in Higher Education vol.31 no.2 pp.xx [Accepted for publication: copies from the first author.]
Sadler, D.R. (1983). "Evaluation and improvement of academic learning" Journal of Higher Education 54: 60-79.