Last changed 26 Jul 2006. Length about 900 words (7,000 bytes).
(Document started on 25 Jul 2006.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/rap/eval.html. You may copy it. How to refer to it.


Educational evaluation in the RAP project

By Steve Draper,   Department of Psychology,   University of Glasgow.

This page is a first rough note on the brief for (educational) evaluation work in the Reengineering Assessment Project (RAP).

Basically: it is to gather information/data on the effects of the various course redesigns during the academic session 2006-7. There are numerous such courses: perhaps two at UoG, several (all in the business school) at GCal, and at least one in each of five different faculties at Strathclyde.

Effects are on a) learning outcomes; b) student attitudes, enjoyment, and motivation; and c) the costs of putting on the courses. (While cost measures will probably be addressed by others, cost is an important category: money and staff time are always finite, so would they secure more learning benefit if spent on a different aspect of the course?)

Measures may be comparable (allowing comparisons to be made) or open-ended (essential for detecting surprises not foreseen by the researchers and/or teachers). This distinction is often, inaccurately, called quantitative vs. qualitative. Another contrast is between summative and formative evaluation (summing up what happened vs. detecting what might and should be adjusted to improve this course). More on these issues, and on how to combine them, can be found in two papers of mine (Draper et al., 1996; Draper, submitted).

Deciding what to measure requires combining what it is possible to collect, what the project is required to collect (by its funders, and by what it promised), what the evaluators think is important, and what the teachers concerned are interested in.

Related to this, we should as far as possible collect "design rationales": accounts from the teachers of why the courses were (re)designed the way they were. These are probably important project outputs in themselves, but they should also be related as far as possible to what we measure. If a redesign was meant to increase student engagement, then we should take measures of engagement. If it was meant to reduce staff work, then we should measure staff time/effort and also check whether the related quality was maintained. For instance, if a system to support staff in giving written feedback on essays is introduced, then a) did it save time, or in the end not much, because most of the time goes into reading the essays rather than writing the comments? and b) did the students find the feedback any more or less useful than usual? Ideally, each design rationale should be closely linked to measures that tell us how each intended benefit panned out in practice.

You can read the original project grant application if you like, but you probably won't find it all that interesting.

The pedagogical inspiration for this project, i.e. the educational ideas that will hopefully inform how the courses are being redesigned, is in Nicol & Macfarlane-Dick (2006). This may interest you, and should be one of the things informing what we try to measure. Currently I think the single most important idea there is Sadler's (1989): that much of learning depends on feedback, and much of whether feedback is effective depends on how well the learner already understands the assessment criteria. Practice isn't the only thing we can do to improve learners' understanding of those criteria, and it certainly isn't the fastest or cheapest.

Possibly more interesting is a paper (Nicol & Draper, in progress), currently a fairly rough draft, exploring one important part of the background to the whole funding programme; see particularly part B of the paper. It relates to the Pew programme in the USA, which initiated 30 course redesigns, all of which achieved cost savings and most of which simultaneously and measurably improved learning quality. Is it real? Is it relevant? Could it be done here? Above all (for us evaluators), could we report anything so definite for each of our course redesigns? We are already in a weak position, but if we act fast we may be able to get essential measures in place so that we can say something by the end of the next academic year. Our guiding heuristic, here even more than in research generally, should be to begin by asking ourselves what we would wish to be able to write at the end. The Pew projects reported within a planned framework (though not all used exactly the same measures as each other), and decided in advance what they would measure.

References

  • Examples of Pew programme redesigns

  • The grant application (0.6Mbytes; 56 pages; 22,500 words)

  • Draper, S.W. (submitted) "Including the quantitative: Experiment, surprise seeking, theory" Journal of Interactive Media in Education [local PDF]

  • Draper, S.W., Brown, M.I., Henderson, F.P. & McAteer, E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol.26 no.1-3 pp.17-32 [local copy]

  • Nicol, D. & Draper, S.W. (in progress) "Understanding the prospects for transformation" [local PDF]

  • Nicol, D. & Macfarlane-Dick, D. (2006) "Formative assessment and self-regulated learning: A model and seven principles of good feedback practice" Studies in Higher Education vol.31 no.2 pp.199-218 [local PDF]

  • Sadler, D.R. (1989) "Formative assessment and the design of instructional systems" Instructional Science vol.18 pp.119-144.
