ChristyBannerman

Draper's Catalytic Assessment

“assessment designed to lead to learning later, where that learning typically occurs without formative feedback but through processes internal to the learner” (Draper, 2009)



Contents


Summary of the theory

How it works

My response

Recommendation

References



Summary of the theory

Catalytic assessment describes using the multiple-choice question (MCQ) format in a way that promotes deep learning. Catalytic assessment can be aided by technology such as an electronic voting system (EVS), but this is not a requirement for all of the techniques. Used in this way, the MCQ is primarily a tool to facilitate learning rather than an assessment method in itself.


How it works


There are various ways of using MCQs to accomplish this goal. They can be grouped under the following headings:



Method 1: Asking directly



Assertion-reason questions

The MCQ pairs an assertion with a reason, and the learner must judge whether each part is true and whether the reason actually explains the assertion. This is more powerful than a standard MCQ, because it requires more than basic recognition.


Justifying the response

Rather than simply ticking a response, the learner also formulates reasons why each of the other options is wrong.


Giving confidence levels

Participants select an answer and indicate their level of confidence that they are correct. The marks earned depend both on the correctness of the answer and on the confidence expressed in it. Gardner-Medwin (2006) has developed this into an organised, easy-to-implement system. According to a study by Issroff and Gardner-Medwin (1998), 87% of students rated the confidence-based marking system as 'useful' or 'very useful'.
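To make the marking rule concrete, here is a minimal sketch in Python. The mark values (1, 2 or 3 for a correct answer at confidence levels 1 to 3, and 0, -2 or -6 for a wrong one) follow the scheme Gardner-Medwin describes, but the function and variable names are my own illustration, not his implementation.

    # Illustrative confidence-based marking rule.
    # (reward, penalty) pairs for confidence levels 1-3.
    MARKS = {
        1: (1, 0),    # low confidence: small reward, no penalty
        2: (2, -2),   # medium confidence
        3: (3, -6),   # high confidence: big reward, heavy penalty if wrong
    }

    def cbm_mark(correct: bool, confidence: int) -> int:
        """Return the mark for one answer given its confidence level (1-3)."""
        reward, penalty = MARKS[confidence]
        return reward if correct else penalty

    print(cbm_mark(correct=True, confidence=3))   # 3
    print(cbm_mark(correct=False, confidence=3))  # -6
    print(cbm_mark(correct=False, confidence=1))  # 0

The asymmetry is the point: a confidently wrong answer costs far more than a hesitant one, so honest confidence reporting is the learner's best strategy.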


Method 2: Peer instruction and interaction


Mazur's method (1997)

The question is displayed and each class member votes. The class's answers are displayed for everyone to see, and the students discuss their opinions with their peers. Following this, another vote is taken and the correct answer is revealed by the instructor. Howe, Tolmie and Rogers (1992) found that the positive results of this kind of peer discussion are not visible at the time of testing, but at subsequent testing four weeks later. According to a later study by Howe and colleagues, this is because the uncertainty created during the MCQ task prompted students to seek out extra material (Howe et al., 2005).
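Purely as an illustration of the vote-discuss-revote cycle (my own sketch, not code from Mazur or Draper), an EVS-style tally across the two rounds might look like this:

    from collections import Counter

    def tally(votes):
        """Count how many students chose each MCQ option."""
        return Counter(votes)

    # Hypothetical class of ten: first vote, then a re-vote after peer discussion.
    first_round  = ["A", "B", "B", "C", "B", "A", "D", "B", "C", "B"]
    second_round = ["B", "B", "B", "C", "B", "B", "B", "B", "B", "B"]

    print("Before discussion:", tally(first_round))   # B: 5, A: 2, C: 2, D: 1
    print("After discussion: ", tally(second_round))  # B: 9, C: 1
    # Only after the second vote does the instructor reveal the correct answer.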


Method 3: Learner-authored questions


Learners create MCQs themselves. This gets them to think of reasons why each response option is right or wrong, which prompts deeper learning. Sharp and Sutherland (2007) got students to design MCQs in small groups to be used as part of a class presentation. Informally, students reported that the process of question design had prompted useful discussions. In a related experiment, Arthur (2006) asked student teams to produce MCQs for the class. Marks were awarded for clarity, correctness of responses, fidelity to the course learning objectives and appropriate judgement of difficulty. This system has the benefits of the other techniques, while also giving students an opportunity to practise skills relevant to exams.
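To show how marking criteria like Arthur's could be made operational, here is a hedged sketch: the four criteria come from the description above, but the equal weightings and the 0-10 scale are purely hypothetical.

    # Hypothetical rubric for marking a student-authored MCQ; the weights are invented.
    RUBRIC = {
        "clarity": 0.25,
        "correctness_of_responses": 0.25,
        "fidelity_to_learning_objectives": 0.25,
        "judgement_of_difficulty": 0.25,
    }

    def mark_question(scores):
        """Combine per-criterion scores (each 0-10) into a weighted total."""
        return sum(RUBRIC[criterion] * score for criterion, score in scores.items())

    example = {
        "clarity": 8,
        "correctness_of_responses": 9,
        "fidelity_to_learning_objectives": 7,
        "judgement_of_difficulty": 6,
    }
    print(mark_question(example))  # 7.5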




My response


There is certainly an evidence base for the catalytic assessment techniques detailed in Draper (2009). My main criticism is not that the techniques are unsound, but that most of them cannot be monitored in a typical educational setting.


Response: Asking directly and peer interaction

Assertion-reason questions and justified responses are good ideas, but they are student-reliant. It is each student's own choice whether to be faithful to the system and actually think around each question as they should. The explicit part of the assessment is still, in the end, selecting a response from a range of responses; whether the student does the extra work is unknown. The confidence level technique has the same problem. As described above, the point is that for low-confidence answers, students seek out the information they need to improve their grasp of that area. However, there is no guarantee that students will actually do this. The peer interaction model again puts a lot of confidence in the student. How do you tell whether people are actually discussing the course when they break off into groups? How do you ensure that students seek out the material they don't fully understand?


Response: Learner-authored test questions

By contrast, I can see clearly how this technique could be assessed and monitored. Firstly, creating an MCQ is a tangible task; it is not some vague promise to 'catch up on reading' or 'look into it'. In the Sharp and Sutherland task, for example, the MCQs had to be presented to the class: the students were asked to produce work of some kind, and their efforts could be independently assessed. In the study by Arthur, we can see how a well-thought-out marking scheme can actively encourage the kind of deep learning intended by catalytic assessment while also, crucially, ensuring that students actually do the work.


Recommendation

What I would improve is the monitoring of students' work, specifically for the first two methods (asking directly and peer interaction). I would make the formulation of reasons against the unselected options an explicit part of the assessment, rather than a recommendation. Similarly, for the confidence level technique, I would use students' answers and confidence judgements as the basis of mandatory homework assignments, as sketched below. Lastly, I would fully integrate the peer interaction method into the learner-authored questions method, so that there would be a specific assignment to complete for each discussion session. In this way, the instructor could be sure that learners were actually doing what they were supposed to.
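As a sketch of how the confidence-based homework idea might work in practice (entirely my own illustration, not part of Draper's paper), each student's wrong or low-confidence answers could be filtered into a follow-up assignment:

    # Hypothetical: turn wrong or low-confidence answers into homework topics.
    def homework_topics(responses, confidence_threshold=2):
        """responses: list of (topic, answered_correctly, confidence) tuples."""
        return [topic for topic, correct, confidence in responses
                if not correct or confidence < confidence_threshold]

    student = [("mitosis", True, 3), ("meiosis", False, 2), ("osmosis", True, 1)]
    print(homework_topics(student))  # ['meiosis', 'osmosis']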




References



Arthur, N. (2006). Using student-generated assessment items to enhance teamwork, feedback and the learning process. Synergy: Supporting the Scholarship of Teaching and Learning at the University of Sydney, 24, 21–23.


Draper, S. W. (2009). Catalytic assessment: understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology, 40(2), 285–293.


Gardner-Medwin, A. R. (2006). Confidence-based marking: towards deeper learning and better exams. In C. Bryan & K. Clegg (Eds), Innovative assessment in higher education. London: Routledge.


Issroff, K. & Gardner-Medwin, A. R. (1998). Evaluation of confidence assessment within optional coursework. In M. Oliver (Ed.), Innovation in the Evaluation of Learning Technology (pp. 169–179). London: University of North London.


Howe, C. J., Tolmie, A. & Rogers, C. (1992). The acquisition of conceptual knowledge in science by primary school children: group interaction and the understanding of motion down an incline. British Journal of Developmental Psychology, 10, 113–130.


Howe, C., Mcwilliam, D. & Cross, G. (2005). Chance favours only the prepared mind: incubation and the delayed effects of peer collaboration. British Journal of Psychology, 96, 1, 67–93.


Mazur, E. (1997). Peer instruction: a user’s manual. London: Prentice Hall.


Sharp, A. & Sutherland, A. (2007) Learning Gains ... My (ARS)S—the impact of student empowerment using Audience Response Systems Technology on Knowledge Construction, Student Engagement and Assessment. The REAP International Online Conference on Assessment Design for Learner Responsibility