This is an entry page into pages on A&F (assessment and feedback) in HE. This began with my involvement in the REAP project (April 2005 - July 2007), and continued with follow-up work.
MP3 workflow support (Beryl Plimmer); Camtasia: sound plus markup. Two lessons, perhaps: (1) technical support that eases the marker's load pays immediate dividends in delivered value to students; (2) markup may be important partly because of that, but also intrinsically, when pointing is important to effective (and economical) communication about the issues with a piece of work.
Gwyneth Hughes had a JISC project on 'assessment careers', i.e. on ipsative feedback. She now has a book on it.
When teachers mark a piece of student work, we often involuntarily perceive or attribute to the student characteristics to do with the way they did it: procrastination, contempt for the marker, undue conscientiousness, .... The educational issue is: should (HE) teachers pay any
attention to this? On the one hand, if we want to focus on students' learning
we should pay attention to results not to their personal habits such as the
clothes they wear, whether they work at night or day, whether they are tall,
or bald. On the other hand, employers explicitly ask us to comment in letters
of reference on some of these attributes (e.g. diligence, self-starting,
timeliness, sickness absences, ...); and most programmes in fact give
directive study skill advice. If we were to give feedback based on these
attributions, it would be a new dimension to feedback on student work, but a
logical continuation of the role of study advisor.
Peter Elbow points out that we have not one but two
critical modes or voices: in one (judgemental, authoritative, "constructive")
we tell the learner what they should have done; in the other, we tell them
what we felt when we read their work (describing our personal feelings and
interpretations). This second voice phrases nothing as an attribute, presupposed true, of the student, and everything as a feeling or problem of the reader/tutor, e.g. "your emailing back revisions in response to every bit of feedback given makes me think you are conscientious"; or "I was excited by your introduction, but felt lost by the end of section 2". This "Reader-Based Feedback" mode (a) is forced on us when we don't know what the author intended to communicate, and so cannot be constructive and concrete; and (b) is much less affronting for authors (students) who are sensitive to criticism. In teaching creative writing, most students identify their writing with their core identity and are ultra-sensitive; in the hard sciences, there is usually an external, objective standard of correctness, so comments are mostly seen as impersonal and checkable by the student. Academic essay writing is in between. I also have a short discussion of my view of Elbow's "Reader-Based Feedback" within this page.
The reference for Reader-Based Feedback is probably:
Elbow, Peter (1973/1998) Writing without Teachers (New York: Oxford UP)
An impure, but "constructive", variation on this insists on giving a clear judgement instead of only an observation for the recipient to process, and an instruction instead of leaving the recipient to decide on action, while still retaining part of the quality of RBF. One schema for such a response is: "When A happens, I feel B because C. What I imagine is D [show you understand what is behind their behaviour]. What I would prefer is E."
The idea here is NOT to offer techniques for assessment BUT to provide a clear statement of the conflicting criteria which any assessment must satisfy or compromise over. This is necessary for any rational thought, let alone discussion, about choices in assessment design. Most of the literature lacks this.
a) Criteria / requirements / dimensions of merit / aims / constraints:
all of which independently apply to any assessment design.
List EXPLICITLY the key criteria that have to be considered: both the naively aspirational educational slogans, and also the unspeakable but real constraints. What is hard about redesigning assessment is that there isn't one thing you want to improve; rather, you must optimise, or at least satisfice (reach an acceptability threshold on), multiple requirements that often conflict; a minimal sketch of such satisficing follows item c) below. This is made much harder by some of them not being written down in public, and so not discussed rationally by staff. (There is a provisional list below.)
b) Metrics: For each of these criteria, give a measurement scale that shows teachers the degree to which it is satisfied. E.g. if you want to raise the NSS score, then the relevant NSS subscale is the measure (and it could be administered every semester by a course team). If you want to improve learning, then you must measure (for instance) grade rises year on year, to show whether or not you succeeded.
c) Marks: I will also occasionally mention the marks or grades given to students as the result of an assessment activity, to point out what they would (logically) mean if they were to represent that educational aim (criterion) for the assessment design.
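To make the satisficing idea from a) concrete, here is a minimal sketch, in Python, of checking one candidate assessment design against several criteria at once. The criteria names, scores, and thresholds are invented placeholders for illustration, not taken from any actual course:

    # Minimal sketch of satisficing over multiple assessment-design criteria.
    # All criteria, scores, and thresholds below are illustrative placeholders.

    design_scores = {
        "learning gain (pre-to-post)": 0.40,   # e.g. fraction of possible gain
        "NSS feedback subscale": 4.1,          # e.g. mean on a 1-5 scale
        "marking hours per student": -2.5,     # negated, so higher is better
    }
    thresholds = {
        "learning gain (pre-to-post)": 0.30,
        "NSS feedback subscale": 4.0,
        "marking hours per student": -3.0,     # i.e. at most 3 hours
    }

    def satisfices(scores, thresholds):
        """A design satisfices iff every criterion meets its threshold."""
        failures = [name for name, t in thresholds.items() if scores[name] < t]
        return (not failures), failures

    ok, failures = satisfices(design_scores, thresholds)
    print("acceptable" if ok else "fails on: " + ", ".join(failures))

The point is only the shape of the decision: there is no single score to maximise, just a conjunction of thresholds, any one of which can sink an otherwise attractive redesign.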
At a simple level, the whole of the Maths presentation at the workshop was about the large demonstrated learning benefits of persuading students to actually do some maths work every week. Similarly, students generally report learning a lot from doing their final year project, although we don't measure this. The Maths team's whole redesign addresses this criterion.
Metric: The metric for satisfying this design criterion/aim is how much the student learns from the activity, pre-to-post (one possible measure is sketched below).
Mark: essentially this measures attendance (engaging in the learning activity with reasonable sincerity), if it aligns with this aim.
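As an illustration of that metric, a sketch in Python. It assumes the well-known 'normalised gain' measure, g = (post - pre) / (max - pre), which is my choice for the example rather than anything prescribed here:

    # Sketch: pre-to-post learning gain for one student on a 0-100 test.
    # The normalised-gain formula is an assumed, illustrative choice.

    def normalised_gain(pre, post, maximum=100.0):
        """Fraction of the possible improvement actually achieved."""
        if pre >= maximum:
            return 0.0  # no headroom left to improve
        return (post - pre) / (maximum - pre)

    print(normalised_gain(pre=40, post=70))  # 0.5: half the possible gain

A class-level version would average these per-student gains; a rise pre-to-post is evidence the activity taught something, which is exactly what this criterion asks the metric to show.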
[2.2] But the often neglected further issue is: to what use is it put? As argued in Draper (2009a), egocentric academics hold whole conferences on A&F while presupposing that the only use is to improve the technical knowledge of the learner. Each type of learner use of assessment and feedback is in fact an independent criterion for designing an assessment, so that it produces that information. Thus this one sub-criterion of providing information useful to the learner in fact produces six alternative independent criteria, all desirable.
Draper, S.W. (2009a) "What are learners actually regulating when given feedback?" British Journal of Educational Technology vol.40 no.2 pp.306-315 doi:10.1111/j.1467-8535.2008.00930.x
Draper, S.W. (2009b) "Catalytic assessment: understanding how MCQs and EVS can foster deep learning" British Journal of Educational Technology vol.40 no.2 pp.285-293 doi:10.1111/j.1467-8535.2008.00920.x
One list of learner uses follows.
Perhaps feedback doesn't make a difference to the amount of learning: teachers should have communicated the material in advance, so feedback is not necessary; and learners should know how to check and remediate their own learning, rather than rely on being told.
F-Prompting seems to be SO important: transformative of whether students learn from feedback. The main problem seems to be that our students mostly have no concept of learning from our written feedback: it doesn't occur to them to use it actively.