This paper is derived from a talk at CAL03
An unstated implication of this was that we ourselves should look at the teaching practices around us, identify the weakest points, and try to discover how ICT could address these. One of the weakest points in the teaching at many universities is the use of lecturing especially to large classes. The common diagnosis of what is weak in this method is the lack of interactivity. Teachers (i.e. lecturers) experience this as a feeling that they cannot get any discussion going and so lose much sense of how well the material is going over. A more theoretical view is that because no overt response is required of students, little mental processing in fact takes place, and hence little learning, at least during the lecture. A technology aimed directly at this gap is that of interactive handsets (similar to those used in the "Ask the audience" part of the TV show "Who wants to be a millionaire?"), where every student can key in a response to a displayed MCQ (multiple choice question), and the aggregated results are immediately displayed to everyone. Because this argument applies to lectures in general, independent of subject, audience size, or point in a degree programme, we obtained portable equipment that could be set up at any time in any lecture theatre, and advertised its availability in the university newsletter; the result was a wide variety of applications with teachers motivated to attempt this innovation.
In addition, an analysis was developed of the ways in which the equipment might be used pedagogically (Draper et al. 2002); despite the limitation to an MCQ format, a range of pedagogic uses seemed plausible.
This then was the background for the introduction of the technology.
Fig.1 Infrared handset transmitter
Fig.2 A receiver
Fig.3 The projected feedback during collection, showing handset ID numbers
Fig.4 Display of aggregated responses
While answers are being collected, feedback is displayed when a vote is received (fig.3) in the form of the handset's unique ID number (shown on a label on the back of each handset). With an audience of 200, it typically takes about two minutes to collect the answers; with 30 people, 30 seconds is enough. (Displaying and explaining the question is usually given additional time before that; for questions that require considerable thought by the audience, extra thinking time during collection will be required; while discussing the answers is often given considerable time after collection.) When collection is stopped, a barchart is projected (fig.4) showing the number or percentage of people who voted for each alternative. Thus everyone can see the degree of consensus, while each participant also knows what they themselves selected, and so how their response compares with the rest. Each person's response is anonymous to the rest of the audience, and also anonymous to the teacher, unless records of which learner has which handset are kept and used.
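The aggregation behind the projected barchart is computationally trivial. As a purely illustrative sketch (the function name and data format here are our assumptions, not the actual PRS software), tallying votes and converting them to counts and percentages might look like:

```python
from collections import Counter

def aggregate_votes(votes, options):
    """Tally handset votes (one choice per handset) and return, for each
    displayed option, its count and its percentage of all votes cast,
    i.e. the numbers behind a barchart like fig.4."""
    counts = Counter(votes)
    total = len(votes)
    return {
        opt: (counts.get(opt, 0),
              round(100 * counts.get(opt, 0) / total, 1) if total else 0.0)
        for opt in options
    }

# Five votes over options A-C:
results = aggregate_votes(["A", "C", "A", "B", "A"], ["A", "B", "C"])
# results["A"] == (3, 60.0); results["B"] == (1, 20.0)
```

Note that each voter's choice appears here only inside an anonymous tally, which is exactly the anonymity property described above.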
Enough equipment was acquired for use in our two largest lecture theatres simultaneously (350 and 300 seats), and we aimed to provide assistants to transport, set up, and take down the equipment. This frees the lecturer to concentrate on their job of managing the occasion as a whole, and allows the equipment to be used without having to book special rooms. For practised assistants (not always the case in the trials reported here), setup of the receivers and laptop takes about three minutes. If, as is increasingly being done, receivers and cables are permanently installed in the room, some of this is avoided, but of course usually the computer still needs to be connected and started up. Handing out transmitters (handsets) is like distributing handouts: with organisation it can be done as people enter the room, and they can similarly be collected in boxes as they leave. Only one handset has so far been lost using this arrangement. Alternatively, as was done for one course, handsets can be given out, subject to a fine for non-return, to each student for a whole semester. This is a less efficient use of equipment, and resulted in about 5% not being recovered after the course, but a still greater problem was that on a given day, 25-35% forgot to bring them. Both the hardware and software of the handset equipment have proved largely reliable. Unpractised assistants were occasionally a problem, causing delays. However the biggest disruptions have been caused by the data projection equipment provided by the university.
It is important never to allow the equipment to become the point around which the occasion is organised. It is perfectly possible and desirable to mix handset use with asking questions in other modes: by shows of hands, and simply asking the audience to volunteer answers. As lecturers become more fluent with the equipment, they tend to do this more and more naturally. The other modes tend to feel more spontaneous and quicker, while returning to the handsets gets every audience member (again) to contribute a response.
The equipment is in use elsewhere in the UK, in other countries such as the USA and Hong Kong, and in schools as well as universities. Here we report on the experience in one institution, based on evaluations in a variety of teaching situations, and with particular focus on its general applicability.
Between October 2001 and March 2003 it was used in Medicine, Dentistry, Veterinary Science, Biology, Psychology, Computing Science, Statistics, and Philosophy. There thus appear to be no constraints on the subject it can be used in. (In other UK universities we know of, it has also been used in French language, Economics (Elliott, 2001), Mechanical Engineering (Nicol & Boyle, in press), and Mathematics.)
It was used with undergraduate levels from first year to final year. It was used on groups with sizes from about 20 to about 300. While it takes a bit longer to collect responses from larger groups, it is still perfectly practicable with them; and in larger groups the expected added pedagogic value of the added interactivity is greater because it is more difficult to achieve there by other means.
It has been used for a range of periods. It was used on a one-off occasion to give medical students practice on an MCQ format exam, thus demonstrating that it can be used successfully by learners and teachers with no previous experience of it. (The success of that occasion led to it being used again with that class at later dates.) At the other end of the range, it has also been used (twice) throughout a Computing Science course with two lectures a week for a semester.
[Table: Handset uses between Oct. 2001 and March 2003; * marks uses where some evaluation was carried out. Columns give the course, the level, and the target number in class. The uses included: Computing Science* at Level 1 (2001-02 and 2002-03), Level 4 Education, Level 4 HCI, Level 2 Logic, Level 1 Mind & Body, a short course for GPs, and a Level 1/Level 2 use.]
The equipment was put to a range of types of pedagogic use.
The amount of data sought had to be proportionate to how long the equipment was used. Thus in the longer uses learners were asked about some of the possible specific advantages and disadvantages, while in the short, one-occasion uses only a single question might be asked. Lists of advantages and disadvantages were compiled from suggestions in interviews, then in some cases whole classes were asked about these. An early example of each of these lists follows. Note that all of these items were perceived by at least some students in one of the applications, but often did not apply generally to all uses of the equipment. Furthermore the relative importance of each item changed a lot over different cases. Generally speaking, the benefits stayed fairly stable while the disadvantages changed, as one would hope, as we improved our practice in the light of the evaluation data.
In the cases where advantages and disadvantages were asked about in detail, the highest scoring advantage was most often "checks whether you are understanding it as well as you think you are". Anonymity ranked high in some classes, but in some others was not seen as important. Developing practice in the first year computing science course meant that in the second year of use (but not in the first) one of the leading benefits became "Makes me think about the course material" and one of the leading disadvantages became "often don't have enough time to think before having to vote".
If learners are asked to identify problems then these will be suggested or confirmed, which is good for making improvements but does not show whether the application is beneficial. If they are asked (as they were at first) whether handsets are useful, most agree that they are: this may be good for propaganda, but only shows that there are some perceived benefits, not whether on balance there is a net benefit and so whether the application is worthwhile. In later studies this question about net benefit was asked directly whenever possible: "What was, for you, the balance of benefit vs. disadvantage from the use of the handsets in your lectures?" with response options from "definitely benefited" through neutral to "definite negative net value". In all cases except one, there was a clear majority of students who reported that the advantages outweighed the disadvantages. For example fig.6 shows the common pattern, while fig.7 includes the exception.
Fig.6 Responses to the net value question in 2002-3 in a first year computing science course (210 respondents).
Fig.7 Contrasting responses to the net value question in two different final year Psychology courses.
The exception, embarrassingly but perhaps significantly, was in a class given by one of the authors (fig. 7: the HCI class). In this class, as with another class run by someone heavily involved in the whole initiative, a significant subset of students reported a perception that the equipment was being used for its own sake, because of the enthusiasms of the teacher, rather than being of direct benefit to the class. This strongly reinforces the "niche" argument, that only when education is put first do we actually see real benefits of technology. A second factor was that this class witnessed several problems with setting up the equipment, and so more of the disadvantages than in some other cases. A third interpretation is that this class (with typically about 20 attending on each occasion) already made use of many interactive techniques: participatory demonstrations of methods, buzz groups, structured discussions, and so on. Thus the relative increase in interactivity was less marked than in most classes.
Similarly all teachers except two felt it had been worth it. A lecturer in the veterinary school gave this feedback: "With the handsets, I could see exactly which points I had not conveyed clearly and could rectify it straight away, the major example being when I asked the students what I thought was a simple question -- identifying the FCoV carrier cat! Although most (68%) got it right, an astonishing number chose one of the other cats. I could see that they hadn't fully understood that many antibody positive cats are not infected. It was great, because the students who got the wrong answer are very likely the same ones who never utter a word in interactive lectures and it gave them a chance to participate anonymously." The exceptions were one statistics lecturer from a group who had collectively decided to give the equipment a trial in a series of tutorials, and one medical lecturer from a group of three who had decided to use it in a joint session (with each taking a turn to present). In both these cases there was not only less personal commitment, but also concern with time overall (would it be enough to "get through" the planned material?), made worse by any use of extra equipment. (See also Stuart & Brown (under review) for a philosophy teacher who judged the attempt with one class to be a failure, but another better planned one to be a great success.)
It can be used successfully from the first session, at least by teachers with a prior idea of why and how it would help them in their particular situation, which amounts to knowing what questions they want to ask, and how that would fit into their lecture plan. Examples of this could be paraphrased as "I want to give them practice at the MCQ exam format they will shortly be facing", "I want to give them practice before their lab at interpreting photomicrographs", "I want to ask them which form of logical deduction they find most difficult, so I can concentrate the time on that", "I want to drill them on classifying each of the evaluation instruments I'm teaching against a standard framework, so after describing each method I'll ask them how to classify it against each of the six dimensions in the framework".
After the great majority of these uses (with the exceptions noted above) both learners and teachers judged its advantages outweighed its disadvantages.
In this startup phase, the equipment was used only by volunteers, and furthermore almost exclusively by those motivated by a clear pedagogic idea in relation to the equipment. What therefore is limited about this report of successful institutional change is that the teachers were all keen, and all had their own spontaneous vision of how it would be a real help. What is good about this is that their visions were diverse, and they all worked first time in very varied contexts. Thus the expectation that it would be useful in all subjects is supported, but whether it could add value to all lectures is largely untested.
An issue that arises is whether this may be a novelty effect. Certainly in almost all classes where it is introduced for the first time, there is a great rustle and smiles of interest. However after a few minutes, and certainly by the end of the first lecture, the feedback showed that it was being judged again by whether it was serving a clear educational purpose, and if not, then complaints and down-ratings were articulated if asked for. Thus there is a novelty effect, but it seems to last only somewhere between 5 and 50 minutes, and this was true whether the audience was mainly female, final year Arts faculty students, or mainly male, first year, computing students. In fact our experience with this and other innovations is often the reverse. On first introduction learners are ready to be sceptical (even if entertained), but in subsequent years take it for granted and rate it higher. This is probably due to two factors: the teachers becoming more practised and so using it better, and the learners being influenced by this growing confidence in the teachers. Thus innovation is often valued more not less highly as it becomes less novel, more familiar: an anti-novelty effect. Comparative ratings and teacher reports from the first year Computing Science class showed this pattern for the voting equipment across the two academic years.
The general motivation for trying this equipment was to introduce more interactivity into lectures. This issue was explored by asking students in some classes how likely they were to work out an answer if it was to be given in different ways: via the handsets, orally (with students putting up their hands), and so on. We first asked a handful of students informally after a variety of handset use lectures about this, and then did a systematic survey on this in some classes: see fig.8. Data from a different class but showing a similar pattern is given in Stuart & Brown (under review).
[Fig.8 Survey question: "Given a problem to work out in a lecture, were you more likely to work out the answer if:", with the percentage of students (LT1 + LT2) who voted for each option. The response options were:
- The class was asked for a verbal response to the question
- The class was asked to vote on one or more answers using "hands up"
- The class was asked to vote on one or more answers using the handsets
- None of the above (i.e. I never try to work out an answer)
- All of the above (i.e. I always try to work out an answer)
- "Verbal" and "Hands up" but not "Handsets"
- "Verbal" and "Handsets" but not "Hands up"
- "Hands up" and "Handsets" but not "Verbal"]
Continuing informal probes show this remains a constant theme: asking questions via the handsets makes far more students actually think through and decide on an answer than presenting the question in other ways. Having to produce an answer oneself causes the mental processing; otherwise most students play the role of spectator and wait to see how it will be answered by others. This is strong support for the importance of one sense of interactivity: prompting mental processing in every learner's mind. This is true whether the question is one meant to provoke discussion, or simply a small problem of the kind that might form part of a basic test. Furthermore students value this sense of interactivity, saying things such as "how nice to be actually asked to think in a lecture".
Another recurring theme is the importance of the anonymity provided by the equipment. In some ways this is surprising. It is easy to imagine that a first year student in a group of 300 people they don't know, faced by a lecturer they have no personal relationship with, is reluctant to answer in public. However a group of about 30 (in final year psychology lectures on education) who knew each other well, and had shown no hesitation in joining in an oral group discussion previously, still said that the anonymity was important when challenged by a self-assessment question about the theory being taught that they felt uncertain about. Thus it seems it is not only about having a "good" atmosphere within a group, but about how threatening each question alone feels. In contrast, it is very noticeable that few select the "don't know" response option when that is offered in a handset question. Anonymity seems to function to induce people to pick a definite answer even when they are quite uncertain; and this in turn seems useful in getting people both to think in order to produce an answer, and then to take this (if they get it wrong) as a reason for working on the point later. Thus anonymity seems often important (not just to break the ice and establish a good atmosphere at the start), and when mixed methods of interaction are used, returning to the handsets is probably still important because of this. This is one distinctive advantage of this equipment over other methods such as raising hands, holding up cardboard response cards, and so on.
The benefit most frequently mentioned by students was the feedback they got about their own understanding from many uses of the handsets. This supports the widely reported point that useful feedback is in short supply for students, and given the rise in student numbers, an ever more important bottleneck in UK university educational provision. Handsets are one way of providing immediate personal feedback to the whole class simultaneously (since they all know what they answered, and can compare that both to what is announced as the correct answer, and to what the rest of the class selected as the answer). In fact this allows even relatively uninspired handset use to be valued by students. In one case, we persuaded a colleague to try using them, and though he agreed, he spent only a few minutes designing some simple self-assessment questions he tacked on to the end of his prepared lecture. Nevertheless, students regarded this as a worthwhile increase in value to them. In a different case, of statistics "tutorials" of up to 200 students, students also sometimes showed no interaction in the overt, social sense: they resolutely declined all invitations to respond orally to questions, and in many cases didn't discuss questions with their neighbours when invited to; but they still reported afterwards how valuable they found the feedback provided on what they did and didn't understand correctly. This not only underlines the value of feedback to students and the potential of handsets to support this, but draws attention to the different senses of interactivity. Human-human interaction is one important kind of activity that facilitates learning, but it is by no means the only such kind supported by the handsets.
So in summary, in the applications in the first 18 months of introducing the equipment, the three most important features to emerge were: that it gets feedback to learners about whether they understand the material presented; that it gets most students to think about the question and decide on an answer, while the alternative methods do not; and that anonymity is often important in achieving these benefits. What should be the next focus of interest in developing handset use?
An example of a brain-teaser question is as follows. "Remember the old logo or advert for Levi's jeans that showed a pair of jeans being pulled apart by two teams of mules pulling in opposite directions. If one of the mule teams were sent away, and their leg of the jeans tied to a big tree instead, would the force (tension) in the jeans be: half, the same, or twice what it was with two mule teams?" Designing a really good brain teaser is not just about a good question, but about creating distractors, i.e. wrong but very tempting answers. In fact, the best such questions are really paradoxes, where there seem to be excellent reasons for each contradictory alternative. Such questions are ideal for starting discussions, although perhaps less than optimal as a fair diagnosis of knowledge.
The interactive engagement approach has been shown to have a large positive effect, and to work across large numbers of institutions in teaching mechanics (Hake, 1998). It is natural to use handsets with this approach. The experience of Jim Boyle in Mechanical Engineering at Strathclyde University is that progressive reorganisation of the teaching around this approach has effects on the timetable (prefer two hour to one hour sessions), the architecture (seating to facilitate small group discussion within large rooms with handset equipment built in), and the relationship of the teaching to the curriculum (abandon the commitment to cover all material in the sessions in favour of concentrating on the topics that are most difficult for that group). See for example Nicol & Boyle (in press).
This is certainly one important model to emulate, and a number of the trials discussed here did use questions to launch peer discussion. However "interactive engagement" as a general approach has been demonstrated only in one part of one subject area; and more importantly it seems to depend on a question bank of brain teasers, which are a considerable effort to invent or discover. Furthermore, handset equipment supports many other kinds of pedagogic use than initiating peer discussion. While this technique certainly warrants further development across subject areas, a slightly different issue has emerged as of general importance.
When using handsets, most teachers naturally adapt contingently in a small way by varying the amount of explanation of the question and alternative responses: cutting it short if most students gave the correct answer, expanding it if many got it wrong. However most feel the pressure of a fixed agenda for the session, to the point of preferring to "finish" what they planned to "cover" even in the face of evidence that they are failing to communicate its meaning. Clearly the next stage is learning to design sessions that are more contingent. This is important because it makes the teaching relevant to actual needs. (It is in fact equally pointless to waste everyone's time in sessions that fail to achieve learner understanding, or in sessions on topics the whole group understands already.) This is more important than may be realised. Classes vary from year to year: lecturers with regular handset feedback report not being able to predict what learners will find difficult from year to year. (Those who say they can, typically do not in fact have much feedback about their students.) It is also important because it cheers learners up enormously to see their response having a direct effect, and to see a teacher respond on the spot to their actual learning needs.
We are currently involved in three developments related to developing contingent teaching further. The first is to gain more experience in following the "interactive engagement" direction of using the handsets as a tool to launch productive peer discussion. The second is a collection of technical developments in new software for the handsets by Quintin Cutts and others. Most of these new features are aimed at improving the extent to which teachers can examine and reflect on a class's performance with a view to adapting the course from week to week and year to year, and linking what happened in one session to others (e.g. by displaying answer distributions from an earlier lecture beside those from the current lecture). The third is the adoption of the handsets by an innovative group of Statistics lecturers for use in giant "tutorial" sessions with up to 200 students in a first year class (Wit, 2003). These sessions are not for introducing new material, and so can focus entirely on how to meet the needs of the learners by attempting to adapt on the spot to what they need. Rather than simply varying the amount of explanation for a fixed set of questions, they have begun to experiment with bringing a large diagnostic set of questions, and selecting the next question depending on the audience response to earlier questions. We have already written web pages inspired by this (Draper, 2003) offering advice to teachers developing materials for handsets. We hope to test these ideas more directly.
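The idea of selecting the next question contingent on audience responses can be sketched as a simple rule. Everything below (the question-bank format, the threshold, the function name) is a hypothetical illustration of the technique, not the software described above:

```python
def next_question(question_bank, asked, last_results, threshold=0.7):
    """Choose the next diagnostic question contingent on the last answer
    distribution: if fewer than `threshold` of the class answered
    correctly, stay on the same topic; otherwise move on."""
    correct, total, topic = last_results
    fraction_correct = correct / total if total else 0.0
    if fraction_correct < threshold:
        # The class is struggling: prefer another unasked question on this topic.
        for q in question_bank:
            if q["topic"] == topic and q["id"] not in asked:
                return q
    # Otherwise (or if that topic is exhausted), take the next unasked question.
    for q in question_bank:
        if q["id"] not in asked:
            return q
    return None  # question bank exhausted

bank = [
    {"id": 1, "topic": "tension", "text": "..."},
    {"id": 2, "topic": "tension", "text": "..."},
    {"id": 3, "topic": "momentum", "text": "..."},
]
# Only 8 of 20 answered the first tension question correctly (40%),
# so the rule stays on that topic:
q = next_question(bank, asked={1}, last_results=(8, 20, "tension"))
# q["id"] == 2
```

In practice the teacher, not an algorithm, makes the choice on the spot; the point is only that a question bank tagged by topic makes such contingent selection feasible during a session.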
Our sense of the first year of use is that in bracing ourselves against numerous little practical hitches, we were able to realise the pedagogical benefit we had anticipated, but were not immediately able to relax and reap still further benefits that were only just occurring to us. In the second year, with more confidence that it would all work without real anxiety (and work not just in a technical sense, but in the sense that sessions would go well and be well received and effective with students), we could begin to focus more attention on pedagogical benefits, and on the issue of how we might improve particular classes still further. Improved evaluations from the Computing Science class over the two academic years, and the more confident feeling the lecturer reports, are one illustration of this.
It seems reasonable to claim that immediate benefits for students have been achieved, and that in the first phase this has been mainly due (as anticipated) to increased interactivity for the learners. The issue now claiming our attention is how to promote more interactivity by the teachers i.e. more contingent teaching.
These handsets are about learner and teacher interaction: about having what individuals think and do affect what others consequently think and do. The point of having individuals co-present is to allow this. In large groups, it is easy and efficient to have everyone hear what one person has to say (this aspect scales up very well), but a basic limitation is in awareness of what the many think or say. This simple technology directly addresses this fundamental problem in a limited yet largely effective way, that gets the issue of feedback from the many to scale up too. In educational applications, such electronic voting systems can be used to get immediate feedback to learners on their understanding and many first uses concentrate on this, remedying one basic drawback to large class teaching. However just as important, but requiring somewhat more care to achieve, is feedback to the teacher about how this particular group is coping. The dream of personal teaching is really about adaptive teaching: where what is done depends on the learner's current state of understanding. The handsets also make this possible even with large groups, although it is additionally necessary for the teacher concerned to plan to do this, for instance by coming with a bank of questions and other material and being prepared for alternative actions depending on the group response. Thus this equipment addresses an intrinsic weakness of large group situations, and so has the potential for yielding real advantages over previous practices.
In summary, use of the handsets was judged by both learners and teachers to benefit them. It can immediately be used successfully by teachers new to it provided a) that they come with a niche-specific idea of how to use the equipment in their situation (although simply adding self-assessment questions seems to be valued almost generically), and b) that there is human assistance sufficient that no technical difficulties obtrude on the learning situation. Success is associated with increasing the interactivity of the occasion. Promising ways forward from the initial modes of use discussed here are to increase this interactivity by a) peer discussion, and b) more contingent teaching.
Thus we consider that the use of an electronic voting system can support modest but worthwhile learning improvements of a variety of kinds in a wide range of subjects and contexts. However this benefit does not depend merely on the technology but on how well it is used on each occasion to promote, through learner interactivity or contingent teaching or both, thought and reflection in the learners.
We would like to thank all the teachers who allowed us to observe their teaching even while it was developing, and both the students and teachers who gave us evaluation data.
We would also like to thank Quintin Cutts and Chris Mitchell for their prolonged support for all handset users throughout this period.
Draper, S.W. (2001) "Want to try interactive handsets in your lectures?" The Newsletter, no.231 (The University of Glasgow)
Draper, S.W. (2003) "Interactive Lectures Interest Group"
Draper, S.W., Brown, M.I., Henderson, F.P. & McAteer, E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol.26 no.1-3 pp.17-32
Draper, S.W., Cargill, J. & Cutts, Q. (2002) "Electronically enhanced classroom interaction" Australian Journal of Educational Technology vol.18 no.1 pp.13-23
Elliott, C. (2001) "Case Study: Economics Lectures Using a Personal Response System" http://www.economics.ltsn.ac.uk/showcase/elliott_prs.htm
Hake, R.R. (1998) "Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses" American Journal of Physics vol.66 no.1 pp.64-74
Howe, C.J. (1991) "Explanatory concepts in physics: towards a principled evaluation of teaching materials" Computers and Education vol.17 no.1 pp.73-80
Miyake, N. (1986) "Constructive interaction and the iterative process of understanding" Cognitive Science vol.10 no.2 pp.151-177
Nicol, D.J. & Boyle, J.T. (in press) "Peer Instruction versus Class-wide Discussion in large classes: a comparison of two interaction methods in the wired classroom" Studies in Higher Education
Stuart, S. & Brown, M.I. (under review) "An electronically enhanced philosophical learning environment: Who wants to be good at logic?" Submitted to Computers and Education
Tinto, V. (1975) "Dropout from Higher Education: A Theoretical Synthesis of Recent Research" Review of Educational Research vol.45 pp.89-125
Wit, E. (2003) "Who wants to be... The use of a Personal Response System in Statistics Teaching" MSOR Connections vol.3 no.2 pp.5-11 (publisher: LTSN Maths, Stats & OR Network)
Wood, D., Wood, H. & Middleton, D. (1978) "An experimental evaluation of four face-to-face teaching strategies" International Journal of Behavioral Development vol.1 pp.131-147