Last changed 26 July 1998 ............... Length about 900 words (6000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/courses/cbl.teach.html.

[ Teacher overview    Teacher details    Learner main page    Course home page (sample) ]

The design of the CBL evaluation ATOM

[Details and notes for teachers and authors]

These are notes about the design and organisation of the ATOM on CBL evaluation. Whereas the ATOM's main page is written primarily for students trying to do the ATOM (and the Heriot-Watt delivery page is written to pull together links for the first delivery of the ATOM at Heriot-Watt), this page is written for teachers, authors, and students interested in the aims and design of the ATOM.

Aims

  • To gain experience of attempting an evaluation
  • To understand some of the issues involved in selecting a CBL evaluation method

    Objectives

  • Design an evaluation using Integrative Evaluation
  • Analyse the data from the evaluation
  • Write a report on the evaluation
  • Recommend changes to a course on the strength of your evaluation

    How it fits into the course

    See the overview page.

    Rationale (why is the exercise designed this way?)

    My previous experience of teaching HCI suggests to me that the big learning experience is having students DO the HCI prototyping cycle of constructing a program, testing it on users, and modifying it in the light of observed problems. I expect doing it to be the best way of learning the strengths, meaning, and limitations of educational evaluation too. Certainly, that is how we developed it: we didn't decide on theoretical grounds that this would be a good idea, but did "evaluation" and then reflected on which aspects of it turned out to be useful in practice.

    Since evaluation is a procedure -- something you do -- doing it is probably the best way of learning it.

    And doing it is the best way of linking it to a student's personal experience: another good principle. This also suggests it might be most powerful if the student evaluates a course they are themselves taking, although it may be confusing to play both subject and evaluator in one situation.

    In the first delivery, the local deliverers felt that the main value of the remote expert was in the second video conference, commenting on students' initial findings. However, it could be that if the first video conference were better timed, so that students had really read the assigned reading and created proposed designs for their evaluations, then discussing those designs would be a good use of the remote expert.

    Delivery

    Delivery requires various preparatory organising actions by the local teachers.

    Planning

    Two major planning actions were needed.

    Firstly, deciding on the content of the exercise for this delivery, i.e. which CBL delivery is to be evaluated. My preference was to have the students evaluate another set of learners taking some CBL material. This was deemed too hard to organise (apart from anything else, it would require the timetabling constraints of two classes to be compatible). It would have been realistic in that the subjects and clients would have been separate.

    Instead, it was decided to evaluate some other item in the same course. This means the client teachers are the teachers of this course, and the subjects and evaluators are both the students. This is good in some ways: the students get to experience being both subject and evaluator, and they don't have to persuade their subjects to participate or address confidentiality issues.

    Secondly, you have to schedule and book the video conferencing: to fit with class times, availability of the remote expert, and availability of the video conferencing network and studio.

    (And in principle we could/should have planned when the remote expert would be responsive by email ...).

    Resources/media

    The exercise only makes sense if students read about evaluation first. These papers should be made available, probably by local photocopying rather than having each student print them separately from the web. (The papers suggested here are on the web; links from the home page.)

    The exercise was presented by web pages to the students (and teachers).

    We used both email and conferencing tools for discussion. However, we failed to establish a firm structure for how these should be used as part of this ATOM. There was a notion that as part of the whole course, there would be a discussion topic each week led by the local teachers. This was not enforced by assessment etc., and the remote expert's role in this was not laid down rigidly.

    We organised two video conferences with the remote expert and the class, booked in advance.

    My personal notes for mini-lecture given as part of first video conference

    TRAILs: past student work

    There is no past student work available as examples.

    Evaluation reports and comments

    There has been one delivery of this ATOM so far, and I hope the evaluation report from this will be available here soon.

    How the students did in deliverers' view
    The students worked in 2 groups.

    Neither was great. The group evaluating the CTL module designed some good instruments (a resource questionnaire and a quiz) but didn't use them; they ended up doing a course-review-style questionnaire plus a user interface evaluation of some of the software used, and did not end up with recommendations.

    The other group did more in the way of talking to the lecturer involved, did an email questionnaire of the form "Do you feel more confident about topic X after the session?", and conducted a "focus group" interview (which reads more like individual questionnaires) on the use of a video in the teaching session.

    Both reports are fairly coherent and sensible, apart from the feeling that they "miss the point" a bit on integrative evaluation, wanting instead to do summative evaluation.

    Group coherence
    A real issue is to what extent the remote expert feels part of the group. For instance, in the video conference, he is alone in a different room, while the class members are together and know each other in advance. A second example is that the electronic discussion clearly took place in the context of face-to-face discussions in class, of which the remote expert had no knowledge.

    Notification
    The other big issue is pace and pacing: 1) of students doing work to a timetable; 2) of the remote expert (and others?): email has to be processed regularly, but there is no prior need (no pre-existing habit) to look at the conference tool at any particular time, or at all.

    In other ATOMs there was an issue of students not being notified when feedback was in fact posted on the web. The converse is also an issue: that a remote expert may not know when discussion is appearing on a web tool, particularly as for the class and local deliverers, that discussion is in a context that includes face to face conversations and reminders.

    Size of the ATOM
    In post-delivery discussions between the author and the two local deliverers, a division emerged about what size this ATOM should be. It was supposed to be a week's topic in the course, but tended to bulge out of that. The issue is whether next time it should be shrunk down to the standard size, or allowed to be oversize.

    The argument for smallness is that this course, like many others, is mainly organised around weeks, with one work topic per week. That is why ATOMs are normally meant to be of a size to fit one week, allowing easy design of courses. If they grow, they make course organisation more difficult. The argument for bigness is really implied (though I didn't recognise this originally) by the rationale given above (in the Rationale section). The best way to believe in the virtue of evaluation is to do it yourself and experience the surprises. That entails an exercise that is unlikely to fit into a single week's work. The rationale came originally from a similar exercise on HCI evaluation. There, it is so important that it justifies taking a large part of students' time. It is not so clear that the same effort is necessarily justified in a CBL course: that depends on the deliverers' (the course organisers') values.
