Last changed 10 April 2005. Length about 7,000 words (43,000 bytes).
This is a WWW document by Steve Draper. You may copy it.


A Technical Memo
Stephen W. Draper
Department of Psychology
University of Glasgow
Glasgow G12 8QQ U.K.



This document collects my notes on the concept of feedback in learning, including some email messages from others and pointers to bits of the literature.

It is motivated by three issues:

I propose five types or levels of feedback. Only the fifth seems to require a human tutor, and not always even then. In practice the single biggest source of feedback is the learner's own internalised self-judgement.

The concept of feedback: Feedback, interaction, discussion

There is a danger of conflating three distinct though related things: feedback, interaction, discussion. Feedback is extra information an entity gets (only) as a result of its acting. Interaction concerns effects (often but not always information) an entity gets from interacting with (acting on and being acted upon by) another entity. Miyake's "constructive interaction" is about how interacting with another person changes a person's ideas for the better, but not through any straightforward transmission, sharing or coincidence of concepts. That is, the private effects and benefits are not identical with agreements and joint action. Discussion may refer to loose verbal interaction; but may be taken to refer to the jointly constructed and agreed outcome of verbal interaction (which can be different from the private effects and conclusions of the participants).

Below is a sequence of types of feedback that a learner might get. Feedback is only one of the reasons for interaction, as opposed to having independent roles for learners and teachers. Most feedback probably can be done without real interaction between learners and teachers; although the last type often does require a human and expert teacher.

Types or levels of feedback

  1. A mark or grade or success/fail classification of outcome. This reports on whether there was a difference between the learner's performance and the desired performance, but not what that difference consisted of. This is (only) information on success or failure. E.g. just getting a mark back, throwing a ball over a wall and hearing whether it hit the hoop or target, but not in which direction the failure was.
  2. The right answer: a description or specification of the desired outcome. This implies that the learner receives information on the learner's output, the correct output, and the difference. E.g. the answer in the back of the book; seeing where the ball went compared to the target. Note that there are really two bits of information here which could possibly be given or withheld independently:
    (2a) what the right answer was, and
    (2b) what the student's output and its consequences were: e.g. if throwing a ball over the wall or shooting, you don't automatically see your result, nor if you are given a prediction task do you necessarily see the output of the simulation.
  3. Procedural or surface explanation of the right answer. "Showing them" e.g. correcting the student's program code, or English: a demonstration or copy or description of the actions required; sufficient for execution, but not necessarily understanding (although very often in fact it is enough).
    3b. Diagnosis of which part of the learner action (input) was wrong. Computer language compilers do most of their work in just reporting on the line number; and in fact human programming tutors do this a lot too. Software critiquing techniques can do this automatically. Note that if you just offer one diagnosis (error message) this doesn't tell the learner about the correctness or otherwise of all the rest of what they did.
  4. Explanation of what makes the right answer correct: of why it is the right answer. I.e. the principles and relationships that matter. Can be canned in a CAL test easily.
    4b. Diagnosis of (identifying) which principle or constraint was violated. E.g. saying to the programming learner "but what if the input data value was negative?", or to a story writer "you never showed why the two characters should fall in love" or "you didn't explain the terms you used".
  5. Explanation of what's wrong about the learner's answer. Resolving the occasional paradox that grips a learner when they can see why the right answer is justified but also believe their own different answer is equally justified.

    This is a separate item because the last item concerned only correct principles, but this one concerns misconceptions, and in general negative reasons why apparent connections of this activity with other principles are mistaken. Thus the previous one is self-contained, and context-free; while this one is open-ended and depends on the learner's prior knowledge. This is only rarely needed — when the learner has not just made a slip or mistake but is in the grip of a rooted misconception — but crucial when it is.

    This is the only type of feedback that cannot be easily canned and automated; and even so, common misconceptions can be, should be, and often are addressed in paper and computerised teaching materials.

How little can you give?

That is, when can you get away with just giving the cheapest kind of feedback e.g. type 1: success/failure? When they can fill in the rest themselves. This is true when it was a slip, or a mistake in the sense of generating the wrong plan but having the knowledge to generate the right one. Someone on ITForum said when there is a definite right procedure (which they already know).

Not only can you then "get away" with giving only the lowest level of feedback, but it is usually best to do this because it prompts the learner to re-process the rules and use them correctly to generate both the right answer and its justification, as Geoff Isaacs reminds us (see below).

Other categories of information

Goals, plans, actions Declarative/procedural

The above is essentially about goals, plans, actions (states and values, rationales linking goals and actions, procedures) i.e. about setting up correct task-action mappings, and also the underlying generative processes allowing a learner to generate new action sequences and to justify them. Clearly both general information and specific feedback may be about any of these independently. It probably also fits deep vs. shallow learning, as leaning towards general generative principles vs. narrow procedures. The distinction between slips and mistakes also fits, but should be extended to slips, mistakes, misconceptions (error in execution of a correct procedure, error in generating the procedure even though knowledge is correct, error in knowledge).

So essentially the same point relates to:

  1. Goals, plans, actions (in theories of planning and action)
  2. Declarative vs. procedural knowledge
  3. Slips, mistakes, misconceptions, ignorance
  4. Shallow vs. deep learning

Task type: the 2 Laurillard-levels

Feedback may be about the level of concepts, or of personal experience. Laurillard distinguishes the two levels (public concepts, private experience and action). Clearly you can have (pedagogical) tasks at each level; and the feedback classification should apply to either level. At the conceptual level, the tasks are to do with communicating about concepts. Clearly you get slips etc. here, as well as with physical procedures at the personal level. If the pedagogical task is a conceptual i.e. descriptive one, then the slips etc. are about surface issues in communication: correct terminology, removing ambiguity etc. Shallow learning here is about too little linking to real concepts, and too much learning of the task procedures at this upper Laurillard-level e.g. repeating what you heard, rather than processing the meaning.

Explanations are about bridging the gap between concepts and practice. So if the pedagogical task is a physical one, then explanation is usually about "reflective" links up to the conceptual level that can justify physical procedures. However it is conceivable that you could try to explain a personal-level procedure in terms of personal concepts and experience (e.g. "I always do that to ones that look like this"): but this phenomenological approach is by definition hard to express and will lack explanatory power from the hearer's viewpoint unless they can recognise their own experience in your description of your experience. Because of this, most explanation tasks link to the public level, and thus may bridge the Laurillard-levels.

Behaviour, prediction, explanation

Elsewhere I find it important to distinguish between the type of learner task examined: behaviour, prediction, explanation. Learners quite often (e.g. in Newtonian physics) have been shown to exhibit different implicit knowledge, depending on which of these types of task is tested. Provided we bear in mind the interaction of Laurillard-levels with pedagogical task, we can map this on to the goal/plan/action set. Strictly speaking, both goal/plan/action and behaviour, prediction, explanation can occur strictly within one of the two Laurillard-levels; but the usual practical definitions applied by educators and researchers will cross over e.g. behaviour tasks will be defined as those requiring physical action, and explanation tasks will be coded in terms of their reference to public conceptual descriptions.

Behaviour is when you just do the actions (without recalculating or justifying the plan) e.g. catch a ball; prediction is asking only for a predicted outcome (state, goal) e.g. where the ball will go, and could also be done without conceptual underpinning; explanation is about the plan generation, about the principles that can justify an action procedure e.g. why the ball went where it did. Causes of divergence between the three types of pedagogical task are multiple: a) the split between the Laurillard-levels: personal observational knowledge may not be integrated with conceptual knowledge; b) a learner might have the right concepts but just be unskilled at articulating them, at the descriptive language i.e. need practice at that pedagogical task rather than need new knowledge at the conceptual level. Explanation must normally come from the public level, behaviour must normally come from the private level; but prediction could come from either: from calculation or from observation of past cases.


The above applies most obviously to feedback on things like maths examples and computer programs, but it also applies to feedback on essays. With essays, to convey type 2 information (on what the correct output is) you need to supply a model answer and also convey what are the features that make it a model answer e.g. publish the marking scheme with a marks breakdown. In fact it may work better to publish not a model answer but a good (not perfect) actual student answer, together with a full critique i.e. how it is good but not perfect. The learners will then be able to compare their own answers with it i.e. do quite a lot of the feedback process themselves, given this information.

Sources of the (feedback) information

The information supplied by feedback may come from various possible sources. In my view there are three main categories of possible source:
  1. The learner themselves
  2. The environment
  3. A human teacher

Learners generate a lot of feedback internally, by internal judgements of success. Our own internal criteria for understanding and adequate argument and explanation do it. For instance, we often know we have typed the wrong letters or thrown a poor ball before we see the result. At the conceptual level, most people learn a lot from writing a paper or essay without any feedback from anyone else at all. That is because they have internalised many of the relevant standards and can judge the quality of their own output quite well. In fact this internalised standard is one of the main aims of learning and teaching, whether or not it gets explicitly mentioned. A given individual may have a better internalisation of some of the types of information above than others e.g. be better at judging that the essay is poor than at identifying why it is poor.

A human teacher is often used to provide feedback of all types: indeed that is one of their chief functions.

My intermediate category of information source, the "environment", is itself very diverse. At one end it includes seeing directly whether you missed the target and by how much, and at the other a sophisticated machine may give you a lot of diagnosis (e.g. error messages and debugger displays in programming environments, doppler readouts of how hard you hit a ball, etc.).

The reason for distinguishing a human teacher from the otherwise heterogeneous category of "environment" is simply because of the practical importance of the question of whether all feedback could be given by computers, dispensing with the need for teachers to perform this function. It is easy to see how, in principle, to automate types 1-4 of feedback information, but type 5 is probably very difficult or impossible to automate because it depends on (mistaken) links by the learner between the subject matter and some other knowledge: and that is too open-ended a set to predict easily. (However it is possible and probably desirable to automate the most common cases of type 5: common misconceptions.)


Laurillard distinguishes between intrinsic and extrinsic feedback. By intrinsic feedback she means feedback "as a necessary consequence of the action" (p.62). But I don't see there is any particular necessity in any situation. If you are blind, "necessary" feedback will not be available. If you are throwing balls at a hoop over a wall, you will only get feedback if you raise yourself to see over the wall. If you throw darts at a board, you often have to walk forward to see which side of a boundary wire the dart landed: so is that necessary and intrinsic feedback? Similarly, the same code gives different feedback depending on the programming environment, some of which give you inescapable error messages.

Her discussion makes it clear that her distinction is between feedback at the personal experience level vs. feedback at the level of descriptions. Again, this is nothing to do with necessity, nor is it to do with the source of the information (as we can now make machines give feedback at the level of descriptions). I therefore reject her terminology and definitions. But her real point here is important, as her discussion makes clear. It concerns how getting feedback at one of the levels about experience at the other level raises extra problems for the learner (although ultimately it may be good if they can master all the connections): it is "decontextualised" or "not situated". This is to do with my discussion above, on how the levels usually interact over behaviour vs. explanation. In conclusion, I would say that her real point is the issue of whether the feedback information arrives in terms of the level of descriptions or the (sensori-motor) level of personal experience. Thus although she discusses it as an issue of the source of the information, it is actually an issue of the type of information.

Positive feedback

The above scheme based on types of information and sources of the information does not explicitly distinguish positive and negative feedback. It is worth discussing positive feedback, which is important because a) often feedback schemes neglect it b) it can be very important to learner morale (confidence, pleasure), which itself can be a powerful determinant of learning outcomes.

Mere positive reinforcement is neither sufficient nor necessary: where there is a definite answer, reporting success is enough. If there is not, then you should not just encourage but say which bits are good and why. This is implicitly covered in the above scheme. Success as well as failure should be reported, and diagnosis should say (type 3 feedback) which bits of the learner output are good as well as which are bad. Simply saying one bit is bad can imply that it is all bad, and if this is false then the wrong information is conveyed. If there is any doubt, then the feedback should say "it is all fine except ...". This is necessary to convey the type 2 information about what the correct solution is.

However there are important problems and exceptions here. Frequently in any design — a maths exercise, a computer program, a structured argument in an essay — a mistake leads to many consequent "errors" in the sense of parts that are not part of a correct solution, yet which follow logically given the mistake. Clearly the right feedback is to identify the key leading mistake, and then be silent about the consequences (which will be retracted when the lead mistake is corrected). If the learner solution is represented as a tree (a top-down decision tree), then feedback should identify the highest node that differs from the correct solution and comment only on that, not on its subordinate nodes. Only if there are errors in separate parts of the tree should more than one mistake be reported on.
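The tree rule just described can be sketched in code. This is a minimal illustration of my own, not from the original text: the Node class, the labels, and the function names are all hypothetical.

```python
# Sketch of the rule above: comment only on the highest node of the
# learner's solution tree that differs from the correct solution, and
# stay silent about its subordinate nodes; separate subtrees whose
# roots do match are still checked independently.
# (Node and feedback_points are illustrative names, not from the text.)

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def feedback_points(learner, correct):
    """Return labels of the highest differing nodes only."""
    if learner.label != correct.label:
        # Report this mistake; ignore its consequences lower down.
        return [correct.label]
    points = []
    # Roots agree: check each corresponding subtree independently.
    for l_child, c_child in zip(learner.children, correct.children):
        points.extend(feedback_points(l_child, c_child))
    return points
```

For a learner tree that goes wrong at one internal node, only that node is reported, however many of its descendants differ as a consequence; errors in separate subtrees are each reported once.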

"Good feedback" by Geoff Isaacs

Previously I've felt there are three characteristics of "good" feedback; loosely put, "good" feedback tells you:

However I've come to feel that, while the first two characteristics ought to be present always, it's a genuine pedagogical (?andragogical?) decision as to whether to provide the third, when to provide it and, of course, in what form (for example, when do you give some directions — "read chapter 2 in the text" — and when ask pointed questions — "Can you think of any reasons, other than the one you included, as to why the Romans behaved in this way?").

In some cases, for example, you, the teacher, might want students to work out corrective action for themselves (no corrective feedback), or you might want them to try before you give corrective feedback if necessary (delayed corrective feedback — the "when" issue).

Geoff Isaacs
The Teaching and Educational Development Institute (T.E.D.I.)

The feedback metaphor from electronics

[Message to itforum around 5 may 1997]

Feedback is a technical concept in control theory, wider than just how it is implemented for op-amps. Its application to amplifiers is in some ways too simplified for us to get the most out of it as a metaphor for learning and teaching. Here is an expansion that I think better fits our discussion on feedback types in education.

Amplifier example

Schematically, I would consider the feedback loop as having:
* Input (e.g. from a tape or CD deck)
* The forward part (the amplifier, say)
* The feedback signal (part of the output, taken from the amplifier output)
* Generating the correction signal (subtracting the feedback signal from the amplifier input; or equivalently, taking the negative of the feedback signal and adding it to the amplifier input)
* Making the correction (in the amplifier example, just using the resulting combined signal as the amplifier input).
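The loop just listed can be simulated numerically. This is my own minimal sketch (the function name, gain, and beta values are assumptions, not from the text); it shows the classic property of negative feedback: the closed-loop gain settles near 1/beta almost regardless of the raw forward gain.

```python
# Numerical sketch of the negative-feedback loop above. Each step:
# take the feedback signal (a fraction beta of the output), subtract
# it from the input to generate the correction signal, and amplify
# the correction in the forward part, settling gradually.

def closed_loop_output(signal, gain=100.0, beta=0.1, rate=0.01, steps=2000):
    output = 0.0
    for _ in range(steps):
        feedback = beta * output            # feedback signal: part of the output
        correction = signal - feedback      # correction signal: input minus feedback
        target = gain * correction          # forward part: amplify the correction
        output += rate * (target - output)  # make the correction, settling slowly
    return output
```

With signal 1.0 the output settles near gain/(1 + gain*beta), about 9.09 here; doubling the raw gain to 200 only moves it to about 9.52. The loop, not the amplifier, sets the overall gain, which is why feedback stabilises the system.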

Graphic equaliser example

Instead of considering just the amplifier, consider hifi systems with so-called graphic equalisers: i.e. lots of separate sliders each controlling a small segment of the frequency spectrum. In simple such systems, a human adjusts all those sliders. In this case:
* Input (e.g. from a tape or CD deck)
* Forward part: the equaliser plus amplifier plus loudspeakers plus acoustic properties of the environment
* Feedback signal: the sound detected either by the human ear or a separate microphone. The point is that the room the system is used in strongly modifies the overall performance by the room's own resonances. Listening to the combined effect gives the data to use in adjusting the equaliser.
* Generate the correction signal. Either the person judges what a flat response is e.g. "too much in the third band", or a visual display of the sound spectrum picked up gives this information about how the current result deviates from the ideal flat (uniform) frequency response.
* Make the correction: move those sliders

I understand it is possible to automate this, using a pink noise source and electronics that can make the corrections. The point here though is that it is still a feedback system, but one that uses not just electric feedback but a microphone to include the room's acoustics in the loop; and this allows me to identify these different system components of the general feedback loop.

Education example

Applied to feedback to learners:
*Input: the primary exposition e.g. textbook, lecture
*Forward part: student's re-expression e.g. writes an essay, gives an answer to pedagogic/interview questions by teacher
*Feedback signal. This is what we were discussing: what this should be.
*Generating the correction signal.
*Make the correction: the learner corrects what they did e.g. does the exercise again; but more importantly makes the internal correction to their "knowledge" cf. adjusting all those little sliders on the graphic equaliser.

Now here we can see that only the learner can do the last stage of making the correction. We were discussing the cases where the teacher generates the feedback signal. The issue is about the cross-over between teacher and learner, which might be before, during or after generating the correction signal. E.g. the teacher might generate the correction signal: tell the learner exactly what they should have done. Or alternatively, they may just say something minimal about there being an error, leaving it to the learner to extract what the right answer was, where the difference is, and how the input should be modified as a consequence. A complete list of the functions that have to be performed in this part of the loop by some combination of teacher and learner are:
**Pick up the "output" and interpret it (cf. using a microphone to turn the sound in the room back into an electrical signal, reading an essay and interpreting what it says (not just what the student may have meant but didn't actually write))
**Supply a standard against which to judge it (cf. ideal flat frequency response)
**Calculate where the differences are
**Go from the differences to corrective actions. In the case of setting up the graphic equaliser, these corrective actions are adjusting the settings, NOT just adjusting the input signal to the amplifier. This is analogous to a learner adjusting their conceptions, NOT just changing what they wrote in the exercise.
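The four functions above can be written as a small pipeline, where any prefix of the steps can be performed by the teacher and the remainder left to the learner: that split point is the cross-over discussed above. This is purely illustrative; the essay-marking domain, the function names, and the "standard" are my own assumptions.

```python
# The four feedback-loop functions as a pipeline (hypothetical essay example).

def interpret(essay_text):
    """1. Pick up the output and interpret it (cf. the microphone)."""
    return set(essay_text.lower().split())

def standard():
    """2. Supply a standard against which to judge it (cf. the flat response)."""
    return {"claim", "evidence", "conclusion"}

def differences(interpreted, ideal):
    """3. Calculate where the differences are."""
    return ideal - interpreted

def corrective_actions(diffs):
    """4. Go from differences to corrective actions."""
    return ["add a %s section" % part for part in sorted(diffs)]
```

A teacher who stops after step 3 hands the learner the differences to work with; a teacher who also performs step 4 hands them the corrections ready-made, leaving less of the loop for the learner to internalise.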

I guess my view is that we wish learners to end up with the whole feedback system internalised, so that they themselves provide all those functions, using internal standards against which they compare each of their performances and can correct them. A generalised teaching strategy here is that of "scaffolding": doing a lot of the functions in the feedback loop for them at first, and progressively withdrawing that support. Besides being best for the learner, it is also much quicker and easier for the teacher just to provide the feedback signal (how it sounds to another person) and/or how it measures against target standards. These are also functions we continue to benefit from others providing to some extent even when we are quite skilled, just as a hifi system benefits from measurements of how it sounds in some new room.

Feedback timing

Here are two messages to ITFORUM on feedback timing:

From: John Farquhar <jxf18@PSU.EDU>
To: Multiple recipients of list ITFORUM <ITFORUM@UGA.CC.UGA.EDU>

In 1994, I performed an extensive literature review in this area.

Bangert-Drowns, Kulik, Kulik, and Morgan (1991) provided the most complete analysis, reviewing 53 studies that compared some form of immediate feedback to delayed feedback in test-like events. The studies covered a variety of instructional applications from classroom quizzes to programmed materials and CAL. Their general conclusion was that in typical classroom settings immediate feedback is more effective. Bangert-Drowns et al. also found small to moderate gains for the use of immediate feedback over delayed feedback in CAL materials; however, the number of CAL studies in their analysis was small.

Despite these studies, it seems reasonable (or at least consistent with cognitive theory) that a delay in feedback during instructional situations and under other specified conditions can be beneficial.

Merrill, Reiser, Ranney, and Trafton (1992) describe an interesting study on the actions of human tutors. According to Merrill et al., human tutors tend not to provide immediate feedback for every step of a complex problem. Instead, they modulate the timing of their responses based upon the importance of the error made.

My own research has investigated the timing of feedback within an Intelligent Simulation (Farquhar & Regian, 1994; Farquhar, 1995). My LOADER program is capable of determining the criticality of an error, then providing either immediate or delayed feedback based upon that error type. I have found that the effectiveness of delayed feedback under these circumstances is dependent on the type of feedback provided (and probably the present level of skill of the learner as well). I also believe the effectiveness of delayed feedback is dependent upon the type of knowledge (with procedural knowledge being more appropriate).

My present hypothesis (which, to my knowledge, has not been tested) is that the effectiveness of delayed over immediate feedback in CAL depends upon the type of knowledge (declarative or procedural), type of feedback (notification or elaborative), type of error (critical or non-critical), and present learner skill level (low or high) with delayed feedback being more effective under the conditions of procedural knowledge, elaborative feedback, non-critical errors, and low skill level.

From: "JennyLynn Werner, Ph.D." Mon Oct 11 16:51:29 1999

For feedback, some of the studies are more than 10 years old, but because many textbooks and teachers have not moved away from behaviorism (not especially applicable to higher order human cognition), most people need to start more than 10 years back. I still hear (much too often) that technology-based learning is great because learners get immediate feedback, and developers provide immediate feedback even when they shouldn't because they have literally NO idea how feedback works for learners. Toolbook by Asymetrix was one of the absolute worst because their immediate feedback for their multiple guess items was instant and their idea of 'delayed' feedback was at the end of each question - pfui. Clueless programmers can make it harder for people to learn; it's important that instructional feedback is developed correctly.

See work by Dr. Raymond Kulhavy at Arizona State. He demonstrated in the late 1960s that feedback does NOT function as reinforcement (as was commonly believed then, and for some reason this myth still survives, even in some textbooks); that instructional feedback should, in some cases, be delayed rather than immediate, and that response certitude plays an important role in the effectiveness of some types of feedback. Later Dr. Kulhavy and Dr. William Stock wrote the articles detailing the feedback model. The article describing the model also describes appropriate feedback timing and level of detail. That article describes the confusion that persists for some around the difference between instructional feedback and behavioral reinforcement and notes that even some college textbooks are perpetuating the myths and superstitions around human information processing and learning theory.

JennyLynn Werner, Ph.D.
Instructional/Performance Technologist
SixSigma Performance

Kulhavy, R.W. & Anderson, R.C. (1972) "Delay-retention effect with multiple-choice tests" Journal of Educational Psychology 63: 505-512.

Kulhavy, R.W. (1977). "Feedback in written instruction" Review of Educational Research 47(1): 211-232.

Kulhavy, R.W. & Stock, W.A. (1989). "Feedback in written instruction: The place of response certitude" Educational Psychology Review 1(4): 279-308.

Notes from Nicol seminar

David Nicol gave a seminar here at TLS on 23 March 1999. It stimulated some ideas.


  1. Get set of tutors for a class to discuss/develop a common feedback form and guidelines. Redo this every year: it's the process as much as the form, and the thing is to get the set of tutors to pull together.
  2. Get students to work on how to use feedback, what they want, and how to request it. And to self-assess, comparing their own judgements of their work with the tutor's.
  3. Self- and peer-assessment exercises (see below for why). But only after training the learners on giving feedback.
  4. Try to get students intrinsic feedback on their work, i.e. feedback independent of the tutor's: e.g. for maths problems, the right answer; for programming, their program's behaviour.
    4.2 Thus for essays in the psychology dept. every student does several critical reviews on personally chosen topics. When these are connected to the syllabus they could be useful revision material for other students. So the idea would be to organise other students using them, and the first student then answering questions about them, and so experiencing how effective the written work had been (for other students).


  1. Put the feedback in electronic form (email, whatever): this facilitates all the ideas below, makes dissemination easy, avoids any costs when individuals lose their copies, etc.
  2. All students get comments for all other students
  3. Common comments given out on a sheet a) problems b) remedies (how to do it right); individual feedback is pointers to items on the sheet; AND ask / allow students to pick which items apply to them (they are often better at this than the T).
  4. Sample Qs,As,feedback. I.e. distribute some past student work (with comments) as examples.

  5. Get feedback into the classroom sessions. Only this is timely; and it works for both L and T. Angela Cross. E.g. at end of each session, get Ls to write down a) the main point of the session b) the main outstanding question for them.



By "feedback" I mean here formative information given to learners.

There are five levels (or seven sublevels) or types of feedback. A general principle is to give the lowest level sufficient to "unstick" the learner.
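The principle of giving the lowest sufficient level can be sketched as an escalation loop. The level descriptions below paraphrase the list earlier in this document; the still_stuck predicate is a placeholder for however one detects that the learner remains stuck.

```python
# Escalate through the feedback levels only while the learner stays stuck;
# stop at the first (lowest) level that "unsticks" them.

LEVELS = [
    "1: success/failure only",
    "2: the right answer",
    "3: procedural explanation, or diagnosis of the wrong part",
    "4: why the answer is correct, or which principle was violated",
    "5: what is wrong about the learner's own answer",
]

def give_feedback(still_stuck):
    """Return the levels given, stopping at the first that unsticks."""
    given = []
    for level in LEVELS:
        given.append(level)
        if not still_stuck(level):
            break
    return given
```

A learner who can fill in the rest themselves is unstuck by level 1 alone; only a learner gripped by a misconception forces the escalation all the way to level 5.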

The biggest single source of feedback is from the learner themselves: from their internalised judgements. That is why solo practice (e.g. at music or maths or programming) is so useful. In fact a major educational aim is to equip the learner in each topic to be their own feedback: to be able to check their own calculations and to judge how well written their own essays are. In maths we learn how to check our answers by an independent method or estimate. In computer science we learn how to design test procedures for our own code. In English we learn how to critique our own and others' writing.

When external feedback is needed — for convenience and particularly at the start of learning a topic — most types can easily be provided without a human teacher (e.g. answers in the back of the textbook; explanations of answers on demand in software). The main need for human teachers for giving feedback is:

Is feedback necessary for learning?

  1. Part of learning is not caused by the learner's actions, but by the teacher or other outside causes. Since feedback is defined as information resulting from one's actions, feedback is not necessary for these important kinds of learning. Not everyone agrees, but in models like Laurillard's, the teacher initiates half the activities. Thus learning without feedback occurs in these cases.
  2. Laurillard p.61ff. "Action without feedback is completely unproductive for a learner." This is dangerously misleading. It is true, but very often the feedback is supplied by the learner themselves. It is not true that learners must have an external source of feedback to learn from their own actions e.g. writing an essay. So learners can often learn from their own actions without external feedback by using their own judgement of the results.

Thus action is not necessary for all learning; and external feedback is not necessary for all learning from action. So feedback is not strictly essential for learning; but it is widely and pervasively important and it may be sensible to plan for it explicitly as an independent aim.

Wider feedback issues


Bangert-Drowns, R.L., Kulik, C.C., Kulik, J.A., & Morgan, M. (1991). "The instructional effect of feedback in test-like events" Review of Educational Research 61(2), 213-238.

Farquhar, J.D. & Regian, J.W. (1994). The type and timing of feedback within an intelligent console-operations tutor. Paper presented at the 1994 Conference of the Human Factors and Ergonomics Society.

Farquhar, John D. (1995) A Summary Of Research With The Console-Operations February 3, 1995.

Laurillard, D. (1993) Rethinking university teaching: A framework for the effective use of educational technology (Routledge: London).

Merrill, D.C., Reiser, B.J., Ranney, M., & Trafton, J.G. (1992). Effective tutoring techniques: A comparison of human tutors and intelligent tutoring systems. The Journal of the Learning Sciences, 2(3), 277-305.

David Nicol & Debra Macfarlane-Dick (2004) "Rethinking Formative Assessment in HE: a theoretical model and seven principles of good feedback practice"

David Nicol & Debra Macfarlane-Dick (2006) "Formative assessment and self-regulated learning: A model and seven principles of good feedback practice" Studies in Higher Education vol.31 no.2 pp.xx [Accepted for publication: copies from the first author.]

"Evaluation and Improvement of Academic Learning" D. Royce Sadler (1983) Journal of Higher Education Vol.54 pages 60-79.