Support for startup uses of electronic voting in lectures

By Steve Draper,   Department of Psychology,   University of Glasgow.

[Third newsletter article. A version of this appeared in the University of Glasgow Newsletter 278, Sept 2006.]

A considerable number of lecturers across the university have adopted the use of EVS (electronic voting systems) in some of their lectures since we first acquired the equipment in 2001. For the next six months, we are able not only to lend the equipment for any room but also to provide people to help you with it, if you wish to give it a try. To do this, contact either Steve Draper or Chris Mitchell (details on the EVS web pages).

The web pages there also include suggestions about the many different ways of using EVS, published papers on the evidence of their effectiveness here at Glasgow, and even a video of a class using them. If you've seen the TV programme "Who Wants to Be a Millionaire?", the "ask the audience" feature gives you an idea of the format. However, from a pedagogical point of view, the most important underlying aspect is not the pub-quiz one of who knows the right answer, but the way the resulting bar chart of aggregated answers gives everyone in the room, teacher and students alike, an at-a-glance summary of how much agreement and disagreement there is on the topic of the question. This allows even a huge group to keep in touch with the degree of commonality versus divergence in a way that a babble of voices, or one or two voices out of hundreds, never can, and it is equally useful for tracking comprehension of undisputed truth and the spread of views on a discussion topic. Those with the longest experience of using EVS say that, while they began with the idea that it increased student engagement (which they still feel it manifestly does), the reason they would now find it intolerable to do without it is the much better day-to-day grasp it gives them of where their class "is" with respect to the subject.

One application we are particularly interested in at the moment is its use for class tests (provided they can be expressed in multiple-choice question format). Here students might work for half an hour on paper on a set of questions; then they key in the answers they have calculated; and the lecturer responds to the aggregated results, moving swiftly over unproblematic questions and explaining those where substantial numbers got the wrong answer. Besides providing instant feedback to students without staff labour, I have seen students question staff for clarification at this stage, essentially making the feedback interactive and responsive, while also shared by the whole class. Even though it is much cheaper, this is qualitatively superior to the usual written feedback we give students, which must often miss the mark because it has to assume some level of understanding that we cannot check.