This compilation was assembled on 22 Aug 2005.


Last changed 20 Feb 2005 ............... Length about 400 words (6,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/lobby.html. You may copy it. How to refer to it.

Electronic Voting Systems and interactive lectures: entrance lobby

(written by Steve Draper)



This is the entrance point for my web pages on Electronic Voting Systems (EVS) for use in lectures; or more generally for interactive lectures (ILIG = Interactive Lecture Interest Group); or more specifically for the PRS equipment which we mainly use, and for local Glasgow University arrangements.

If you want a quick look at what it's all about, to see if it might interest you, then try

To see all the things available on this site you should read over the main website index page; or print off all the pages to study. They are available as single web pages ready for printing: a compilation on designing lectures (that use an EVS) and a comprehensive compilation.

Some of the most popular parts are:


Last changed 19 Aug 2005 ............... Length about 2,000 words (26,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/main.html. You may copy it. How to refer to it.

Interactive lectures interest group (ILIG): main website index page

(written by Steve Draper)

This is the home page for some web documents about interactive lecture methods in general, and using classroom electronic voting systems (EVS) in particular. (EVS are also sometimes referred to as PRS, GRS, CCS: for a discussion, see this list of terms used.)
(If you find this site useful, another major set of pages on a similar topic is at Amherst; or Hake's introduction.)

You can access the pages on this website in alternative ways:

Contents of this page (click to jump to a section)


There are basically two ways for a newcomer to tackle these web pages and the subject of interactive teaching with EVS.

If you just want to know what EVS are, or if you have already decided to give them a try (perhaps because you have an idea where and how they would fit into your own work), then you want the "bottom up" approach: go to the section below on "How-to advice", and it will take you from low-level practical details, up through designing a question, then the presentation issues (such as explanations) around a single question, then on to designing sets of related questions, and on "up" to wider scopes.

On the other hand, if you aren't particularly committed to technology but are interested in systematically changing teaching to be more effective by being more "interactive", then you want the "top down" approach, and should begin with the first section "Interactive Lectures". You are more likely to be interested in this approach if you are a head of department or at least a course team leader, and can consider substantial changes to the demands made of your students and the timetable.

Interactive Lectures

  • Interactive Lectures: overall points.
          The EVS technique. The one minute paper technique.
  • Short (2 pages and a table) overview of our work with electronic voting systems. PDF file
  • EVS: a catalyst for lecture reform by Alistair Bruce.
  • Transforming lectures to improve learning

    Using EVS at the University of Glasgow

  • The EVS technique (short introduction with pictures)
  • More practical details (longer introduction)
  • To book the equipment: Chris Mitchell email: mitchell@dcs.gla.ac.uk ext.0918
  • Past workshops for prospective users
  • Overview evaluation paper about uses Oct 2001 - Dec 2003.

    Web sites related to the Glasgow initiatives

    How-to advice on using EVS anywhere

    This section is essentially a "bottom up" tour, beginning with practical technical details, and gradually leading to wider questions of how to string questions together or redesign whole sessions.

    What's it all about?

    If you want a quick look at what it's all about, to see if it might interest you, then try

    Getting started quickly

    The majority of the lecturers and presenters who have approached us to try using the EVS have already had an idea about how they might be used, and wanted practical tips on putting this into practice. Here are some introductory how-to topics for your first few uses.

    More detailed issues in designing and conducting sessions with EVS questions

    The set of different benefits and pedagogical approaches

    What are the pedagogical benefits / aims?  Short answer; Best summary (an alternative expression); Long answer (a whole paper)

    Technologies

    Technologies and alternatives are given on this page,
    which also includes contact information on equipment purchase
    (and a few bits of history of earlier efforts)

    FAQs

    Some other common questions not answered elsewhere on these pages are here.

    Evaluating evidence on EVS effectiveness

    There are basically three classes of evidence to consider, as discussed on this page.

    Papers

    Ad hoc bibliography.

    written at Glasgow University

    written elsewhere in the UK

    mentions in THES (Times Higher Education Supplement)

    These include:

  • Guardian Education 21 May 2003 pp. 14-15

    written elsewhere in the world

    A large, if rather random, collection of articles related to EVS written elsewhere in the world is in the ad hoc bibliography. Here are a few notable ones.

    Handset use in the UK

    Some other UK sites and people who use EVS are listed here.

    Other Web documents

  • Newsletter ad for users at University of Glasgow
  • Second Newsletter ad for users at University of Glasgow
  • A letter to THES
  • Alternatives, TECHnologies, and VENDORS.
  • Unnecessary technical details of PRS [not finished yet]
  • Hake and what matters (To be written)
  • Undigested notes and URLs
  • Some pictures of PRS: at the end of this page and also here
  • Some UK sites and PEOPLE who use EVS
  • Newsletter article on use in English Literature

    Some human contacts

    If you want to actually talk to someone, you could try:

    Other important Glasgow University contacts:


    Some other people in the UK who use EVS (including PRS users) are listed here.


    Last changed 20 Feb 2005 ............... Length about 1300 words (9,000 bytes).
    (Document started on 29 Jan 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/terms.html. You may copy it. How to refer to it.

    Terms for EVS

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    There are various terms or names used to refer to the equipment in question:

    I don't like them. On the other hand, some others do. I've debated this most with Michael McCabe. Here are my views (with which he disagrees quite strongly) on what the names should say, and what is wrong with the ones being used.

    The main points

    [Electronic, digital vs. group, audience, classroom, ...]
    A problem with the terms ARS, PRS, GRS etc. is that they fail to express the main meaning. Putting your hand up in class, shouting out, or a mob lynching someone are all "group responses", but that isn't what people who use these phrases mean. They are almost entirely interested in new electronic systems, so the phrase fails to express what is, to the speaker and to their intended audience, the key defining feature. So the name should say "electronic" or "digital" to mark this, except for those who really are discussing the use of raised hands etc., and not just new technology.

    [System vs. equipment, technology]
    Saying "system" to refer to a small piece of equipment is inaccurate and self-inflating: the equipment does not stand alone (without human operators it does nothing), but is a wholly dependent adjunct to the real system of, say, teacher, students and discussion. "Equipment" might be more exact. The real "system" using EVS in, say, education is something like the plan for the whole lecture or session. There are a number of quite different alternatives that do use EVS (e.g. Mazur's "Peer instruction", or contingent teaching); and also still others (e.g. MacManaway's use of "lecture scripts") that do not, but are equally revolutionary and promising.

    [voting, polling vs. texting vs. other shared data types]
    The equipment I'm usually referring to is for giving one of a small number of pre-determined alternative choices i.e. responding only to MCQs (multiple choice questions): hence the direct term would be "voting" or "polling". This also contrasts it to some other technologies that support free-text open-ended input from the audience (like mobile phone SMS texting). Note, however, that although this too certainly could be useful in some ways, many types of meeting cannot handle this: imagine a hundred people all sending in text responses: no-one (neither audience nor presenter) can scan a hundred different text messages and summarise or cluster them usefully. A feature of voting (i.e. MCQs) is that summarising is easy: just count the votes for each alternative and present these five or so numbers. This is a fundamental advantage for large groups of more than about 6 (say). So voting is a feature not a limitation for such groups. Of course other kinds of interaction are organised round free-text: email, blogs, discussion fora, etc. So we need a term for these that contrasts with voting, but covers all the free-text group electronic communication systems -- perhaps "texting". A third alternative is passing around other material e.g. software, as in a classroom or lab with networked computers.
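The counting step claimed above ("just count the votes for each alternative and present these five or so numbers") really is a few lines of code. A minimal sketch in Python, purely illustrative (the option labels and vote data are made up):

```python
from collections import Counter

def summarise_votes(responses):
    """Tally MCQ responses: one count per alternative, in option order."""
    tally = Counter(responses)
    return {option: tally[option] for option in sorted(tally)}

# A hundred free-text messages would be unreadable, but a hundred
# votes collapse to four or five numbers anyone can scan at a glance:
votes = ["A"] * 42 + ["B"] * 31 + ["C"] * 18 + ["D"] * 9
print(summarise_votes(votes))  # {'A': 42, 'B': 31, 'C': 18, 'D': 9}
```

This is exactly why summarising scales with MCQs but not with open text: the summary size depends on the number of alternatives, not on the audience size.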

    Further points

    [Synchronous vs. asynchronous]
    Part of what I usually mean is use as part of a synchronous meeting, whether face to face or online; as opposed to asynchronous media like email, or phone (text message) polls done for TV over a day or a week. And in fact response time really does matter here. A class often wants to move on quickly from a vote to another topic or to explanations and discussion of the disagreement, so a response time of minutes, and preferably of seconds, is needed. In contrast, even in a small area like the UK, parliamentary elections take more than a day to decide and broadcast the results, and SMS texting may take hours depending on the network state. Remember "synchronous" doesn't mean instantaneous, but it does mean the recipient is sitting waiting for the result before they can do anything else.

    [1 vs. 2 way]
    To technologists, a huge difference is equipment that offers 1-way vs. 2-way communication (e.g. feedback lights or a little screen on each handset). However to users, this is about as important as whether the person you are talking to says "yes" (2-way) or nods (1-way for a sonic technologist, but 2-way in terms of human communication). All the equipment relies on fast feedback, but some do this by projecting information on a big screen for all to read together.

    [Decision support vs. establishing mutual knowledge of the spread of opinions]
    Furthermore the applications are less about making group decisions (at least with the voting technology) and more about coordinating group thinking and understanding by giving everyone an overview of what and how strong the consensus or disagreement is. These distinguish it from formal voting for political candidates or in shareholder meetings: more synchronous than asynchronous; more about establishing mutual knowledge of the varieties of opinion than reaching a final decision.

    [personal vs. subgroup voting]
    Another issue is whether every audience member has their own handset and vote, or whether they agree on a group vote, i.e. one vote per small group.

    [Face to face vs. online, "virtual"]
    The main application I'm interested in is face to face, but actually it could perfectly well be done online, provided it is synchronous (though the equipment might be different). And one of the areas we are exploring at Glasgow is moving MCQs and associated discussion between the web (out of class) and EVS (in class) as seamlessly as possible.

    [Education vs. other applications]
    The applications I am interested in are educational, but many sets of the same technology are sold to businesses for planning meetings, brain-storming etc. That is why "classroom EVS" is wrong for some audiences. "Group decision support system" is a term sometimes used for the business, not educational, applications.

    Technological distinctions that can matter are:

    Summary

    What is generally meant here is an electric or electronic technology, used for polling in groups of size 10-1000 (not millions, as in serious national electronic voting), as part of a synchronous interaction (could be face to face or online), usually to share thinking and disagreements more than to come to decisions. What is most important really is that the human interaction supported by the equipment is real time (i.e. synchronous), and always interactive (even if one direction is optical and only one is electronic).

    I've started to standardise on the term "EVS", although perhaps "synchronous electronic polling equipment (SEPE)" would really be even more exact.


    Last changed 15 Feb 2005 ............... Length about 800 words (7,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/il.html.

    Interactive Lectures

    (written by Steve Draper,   as part of the Interactive Lectures website)

    A summary or introductory page on interactive lectures.

    Contents (click to jump to a section)

    Why make lectures interactive?

    To improve the learning outcomes. [The positive way of putting it.]

    Because there is no point in having lectures or class meetings UNLESS they are interactive. Lectures may have originated before printing, when reading a book to a class addressed what was then the bottleneck in learning and teaching: the number of available books. Nowadays, if one-way monologue transmission is what's needed, then books, emails, or tapes will do that, and do it better, because they are self-paced for the learner. [The negative way of putting it.]

    What are interactive lectures?

    A lecture is interactive whenever it makes a difference that the learners are co-present with the teacher and each other. This might be because the learners act differently, or think differently; or because the teacher behaves differently.

    In fact it is not enough to be different: it should be better than the alternatives. Learners are routinely much more interactive with the material when using books (or handouts) than they can be with lectures: they read at their own pace, re-read anything they can't understand, can see the spelling of peculiar names and terms, ask other students what a piece means, and carry on until they understand it rather than until a fixed time has passed. All of these ordinary interactive and active learning actions are impossible or strongly discouraged in lectures.

    So for a lecture to be interactive in a worthwhile sense, what occurs must depend on the actions of the participants (not merely on a fixed agenda), and benefit learning in ways not achieved by, say, reading a comparable textbook.

    Alternative techniques

    One method is the one minute paper: have students write out the answer to a question for just one minute, and collect the answers for response by the teacher next time.

    Another method is to use a voting system: put up a multiple choice question, have all the audience give an anonymous answer, and immediately display the aggregated results.

    Another method is "Just in time teaching", where students are required both to read the material and to submit questions on it in advance, thus allowing the contact time to be spent on what they cannot learn for themselves.

    In fact there are many methods.

    Pedagogical rationale / benefits

    In brief, there are three distinct classes of benefit that may be obtained by interactive techniques:

    The general benefits, and specific pedagogic issues, are very similar regardless of the technique used. I have written about them in a number of different places including:


    The key underlying issues, roughly glossed by the broad term "interactivity", probably are:


    Last changed 20 Feb 2005 ............... Length about 300 words (3,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/why.html.

    Why use EVS? the short answer

    (written by Steve Draper,   as part of the Interactive Lectures website)

    What are the pedagogical benefits / aims?
    To "engage" the students i.e. not only to wake them up and cheer them up, but to get their minds working on the subject matter, and so to prompt learning.

    How specifically?:

    1. Simple questions to check understanding: "SAQs" (self-assessment questions) to give "formative feedback" to both students and presenter.
    2. Using responses (e.g. proportion who got it right) to switch what you do next: "contingent teaching" that is adapted on the spot to the group.
    3. Brain teasers to initiate discussion (because generating arguments (for and against alternative answers) is a powerful promoter of learning).
  • A short argument on why be interactive
  • A short introduction to EVS
  • EVS: a catalyst for lecture reform by Alistair Bruce.
  • Long answer (a whole paper) on pedagogic potential

    But above all, realise from the start that there are powerful benefits not just for learners but also for teachers. Both need feedback, and both do much better if that feedback is fast and frequent -- every few minutes rather than once a year. So the other great benefit of using EVS is the feedback it gives to the lecturer, whether you think of that as like course feedback, or as allowing "contingent teaching", i.e. adapting how the time is spent rather than sticking to a rigid plan that pays no attention to how this particular audience is responding.
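The "contingent teaching" switch can be sketched as a simple threshold rule on the proportion who answered correctly. This is only an illustration, not how any particular EVS software works, and the threshold values are invented:

```python
def next_step(counts, correct_option, move_on=0.8, reteach=0.3):
    """Decide what to do after a vote, from the fraction answering correctly.

    Thresholds are illustrative: above `move_on` proceed; below
    `reteach` re-explain; otherwise have students argue it out in
    pairs and re-vote (as in Mazur-style peer instruction).
    """
    total = sum(counts.values())
    fraction = counts.get(correct_option, 0) / total if total else 0.0
    if fraction >= move_on:
        return "move on to the next topic"
    if fraction <= reteach:
        return "re-explain the concept, then re-ask"
    return "peer discussion, then re-vote"

# 42% got it right: a split audience, so trigger discussion.
print(next_step({"A": 42, "B": 31, "C": 18, "D": 9}, correct_option="A"))
```

The point is that the lecturer's plan branches on the audience's actual responses, which is exactly what a rigid lecture plan cannot do.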


    Last changed 31 Jan 2005 ............... Length about 500 words (5,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/handsetintro.html.

    Using EVS for interactive lectures

    (written by Steve Draper,   as part of the Interactive Lectures website)

    This is a brief introduction to the technique of using EVS (electronic voting systems) for interaction in lectures. (A complementary technique is the one minute paper which uses open-ended audience input. An introduction to interactive lectures and why attempt them is here.)

    The technique is much as in the "Ask the audience" lifeline in the TV show "Who wants to be a millionaire?". A multiple choice question (MCQ) is displayed with up to 10 alternative response options; the handsets (using infrared, like domestic TV remote controls), distributed to each audience member as they arrive, allow everyone to contribute their opinion anonymously; and after the specified time (e.g. 60 seconds) elapses, the aggregated results are displayed as a barchart. Thus everybody sees the consensus or spread of opinion, knows how their own opinion relates to it, and contributes while remaining anonymous. It is thus like a show of hands, but with privacy for individuals, more accurate and automatic counting, and more convenience for multiple-choice rather than yes/no questions.
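The final display step above, aggregated votes shown as a barchart, can be mimicked with a few lines of text output. A sketch only (real EVS software such as PRS draws this graphically; the vote counts here are invented):

```python
def barchart(counts, width=30):
    """Render vote counts as a horizontal text barchart."""
    total = sum(counts.values()) or 1  # avoid dividing by zero
    lines = []
    for option, n in counts.items():
        bar = "#" * round(width * n / total)  # bar length proportional to share
        lines.append(f"{option} {bar} {n}")
    return "\n".join(lines)

print(barchart({"A": 42, "B": 31, "C": 18, "D": 9}))
```

One glance at such a chart tells the whole room where the consensus and the disagreements lie, which is the property the show-of-hands comparison turns on.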

    It can be used for any purpose that MCQs can serve, including:


    At Glasgow University we currently use the PRS equipment: small handheld transmitters for each audience member, some receivers connected to a laptop up front, itself connected to a data projector and running the PRS software. This equipment is portable, and there is enough for our largest lecture theatres (300 seats). Given advance organisation, setting up and packing up can be quick. We can accommodate those who normally use OHPs, powerpoint, ad hoc oral questions, or a mixture.

    More practical details are offered here, and more details of how to design and use the questions are available through the main page, e.g. here.


    Last changed 26 April 2005 ............... Length about 2000 words (16,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/resources/minute.html.

    One minute papers

    By Stephen W. Draper,   Department of Psychology,   University of Glasgow.

    The basic idea is that at the end of your session (e.g. lecture) you ask students to spend one minute (60 seconds) writing down their personal (and anonymous) answer to one or two questions, such as "What was least clear in this lecture?". Then you collect the scraps of paper and brood over them afterwards, possibly responding in the next session. It's wonderful because it takes only a minute of the students' time (each), requires no technology or preparation, but gives you immediate insight into how your class is doing. There are probably other benefits too.

    That is the short version, which is all you really need in order to give it a try. Trying it out is probably, if at all possible, the best second step in understanding the technique. However, when you want more information, theory, and examples, the rest of this document offers some.


    The longer version

    This is a note on the simple but excellent technique summarised above, for use in teaching, particularly lectures. These particular notes are mainly adapted (stolen) from David Nicol, although the ideas also appear in the literature if you look for them. [Angelo, T.A. & Cross, K.P. (1993) Classroom assessment techniques: a handbook for college teachers (San Francisco: Jossey-Bass Publishers) p.148] For more, you should go on his workshop (as part of a course for new lecturers, or see here), or bother him personally.

    I am addressing this note to teachers like myself: what they might do, and why. However a student could usefully read this, and carry it out privately. They could then use what they write for these one minute papers a) as a useful study habit; b) as a procedure for generating a question to ask as part of their good practice in being a student.

    Although your first uses are likely to be generic, if you use it regularly you can focus it to your particular concerns that day for that class, by designing questions with respect to the learning objectives, or important disciplinary skills, or the sequence of development important for that course.

    Remaining Contents (click to jump to a section)

    How to do it

    Most common questions to set

  • "What question do you most wish to have answered at this moment?"
    [I.e. tells you what you failed to get across, what you should fix at the start of next time.]

  • "What was the main point of today's lecture?"
    [Often a lot of what you said went across, but the overall point is not apparent to them, or it is not apparent that it WAS the chief point.]

  • "What are the most important questions remaining unanswered?"

  • "What was the muddiest point?"

    More questions

    Asking questions

    Many of these questions could be asked either at the end, or in the middle, or at the start.

    Many are best announced at the start but written at the end i.e. "At the end I am going to ask you to write for a minute on ....". This should promote more thinking during the class.

    In asking each question, don't forget to specify the "rubric" i.e. state what kind of response is required e.g.


    Classifying questions

    Questions could be classified in various ways e.g.

    Many questions can be fitted under both of two contrasting types e.g. asked either as MCQs or as one-minute open ended papers; or be both reflective and about testing content retention.

    Feedback

    Content

    Reflective

    Rationale: theoretical articulations of why this is good

    Most of the reasons for using this technique apply more generally to interactive lectures but can be spelled out as follows.

    Course feedback; feedback from learner to teacher

    The first kind of benefit from this technique is to get good feedback from learners to teacher on how the learning and teaching process is going. Standard course feedback is largely ineffective in improving things. The standard method of one feedback questionnaire per course has two massive drawbacks, each alone sufficient to render it ineffective:

    You can get, if you wish, still more precise information by focussing the question you ask e.g. on a learning objective from the course, on a specific skill you think important to the discipline, etc. In other words, as an evaluation technique, it can be sensitive to context, to the discipline, to the course, to a particular (perhaps unusual) session. But also, it can be completely open-ended, and detect the surprises the teacher would never have thought to ask about (e.g. "I had no idea my graphs were not self-explanatory").

    Direct benefits to the learners

    If your teaching is too perfect to need improvement, or if you are too wimpish to take negative feedback, or simply in addition to the course feedback function, there are arguably direct benefits to the learners even if the teacher never reads the collected bits of paper.

    Above all, they can be used to get learners to:

    Fostering interaction / dialogue between teacher and learners

    Independently of private benefits to the teacher and of private benefits to the learners, there are the benefits of establishing real "dialogue": that is, an iterative (to and fro) process in which a common understanding is progressively established rather than communications each succeeding or failing as one-off acts. This is both immediately valuable, and makes it progressively easier for little interactions such as clarification questions to be made and dealt with easily, and quickly.

    Aspects of this, and of how this technique contributes and can succeed at this, are:

    And as a complement to handsets

    And finally: this technique may also be very valuable as a complement to using handsets in lectures. Handsets are excellent in many ways, above all in promoting dialogue. But they are essentially a technique revolving around Multiple Choice Questions (MCQs) which have fixed response sets. One minute papers use open-ended responses, and so collect the unexpected and the unprompted. MCQs invite guessing; one minute papers do not.

    The handsets give an immediate shared group response, and so can move the dialogue forward faster (every 5 minutes rather than once per session). However one-minute papers are better at uncovering complete surprises (students saying things it didn't occur to the teacher to put as an optional response in an MCQ); and at giving you a chance to think about each answer even if it does take you by surprise.


    Last changed 24 Feb 2005 ............... Length about 4,000 words (29,000 bytes).
    (Document started on 15 Feb 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/td.html. You may copy it. How to refer to it.

    Transforming lectures to improve learning

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    Contents (click to jump to a section)

    Introduction

    Some of the most successful uses of EVS (Electronic Voting Systems) have been associated with a major transformation of how "lectures" are used within an HE (Higher Education) course. Here we adopt the approach of asking how in general we might make teaching in HE more effective, while keeping an open mind about whether and how ICT (Information and Communication Technology) could play a role in this. The aim then is to improve learning outcomes (in quantity and quality) while investing about the same, or even fewer, teaching resources. More specifically, can we do this by transforming how lectures are used?

    Replacing exposition

    The explicit function of lectures is exposition: communicating new concepts and facts to learners. In fact lectures usually perform some additional functions, as their defenders are quick to point out and as we shall discuss below, but nevertheless in general most of the time is spent on exposition and conversely most exposition (in courses based on lectures) is performed by lectures. Clearly this could be done in other ways, such as requiring learners to read a textbook. On the face of it, this must be not only possible, but better. Remember, the best a speaker, whether face to face or on video, can possibly do in the light of individual differences between learners is to speak too fast for half the audience and too slowly for the other half. Reading is self-paced, and is therefore the right speed for the whole audience. Furthermore reading is in an important sense more interactive than listening: the reader can pause when they like, re-read whatever and whenever they like; pause to think and take notes at their own pace, before going on to try to understand what is said next -- which is likely to assume the audience has already understood what went before. So using another medium for the function of exposition should be better. Can this be made to work in actual undergraduate courses?

    Yes. Here are several methods of replacing exposition and using the face to face large group "lecture" periods for something else.

    It seems clear that lectures are not needed for exposition: the Open University (OU) has made this work for decades on a very big scale. Another recurring theme is the use of questions designed not for accurate scores (summative assessment), but to allow students to self-diagnose their understanding, and even more, to get them thinking. A further theme is to channel that thinking into discussion (whether with peers or teachers). This requires "interactivity" from staff: that is, being ready to produce discussion not to some plan, but at short notice in response to students' previous responses.

    Should we expect to believe the reports of success with these methods, and should we expect them to generalise to many subjects and contexts? Again the answer is yes, which I'll arrive at by considering various types of theoretical analysis in turn.

    The basic 3 reasons for any learning improvements

    Many claims of novel learning success can be understood in terms of three very simple factors.

    1. The time spent by the learner actually learning: often called "time on task" by Americans. The effect of MacManaway's approach is to double the amount of time each learner spent (he studied how long they took reading his lecture scripts): first they read the scripts, then they attended the classes anyway. In fact they spent a little more than twice as long in total. Similarly JITT takes the same teacher time, but twice the student time.

    2. Processing the material in different ways. It probably isn't only total time, but (re)processing the concepts in more than one way e.g. not only listening and understanding, but then re-expressing in an essay. That is why so many courses require students not just to listen or read, but to write essays, solve written problems etc. However these methods are usually strongly constrained by the amount of staff time available to mark them. Here MacManaway got students to discuss the issues with each other, as do the IE and JITT schemes. Discussion requires producing reasons and parrying the conflicting opinions and reasons produced by others. Thinking about reasons and what evidence supports what conclusions is a different kind of mental processing than simply selecting or calculating the right answer or conclusion.

    3. Metacognition in the basic sense of monitoring one's degree of knowledge and recognising when you don't know or understand something. We are prone to feeling we understand something when we don't, and it isn't always easy to tell. The best established results on "metacognition" (Hunt, 1982; Resnick, 1989) show that monitoring one's own understanding effectively and substantially improves learning. Discussion with peers tests one's understanding and often leads to changing one's mind. The quizzes in the OU, JITT and the IE methods also perform this function, because eventually the teacher announces the right answer, and each student then knows whether they had got it right.
      Brain teaser questions also do this, partly because they frequently draw wrong answers and so force the learner to reassess their grasp of a concept; but for good learners the degree of uncertainty such questions create, even without the correct solution being announced, is alone enough to show them that their grasp isn't as good as it should be.

    The Laurillard model

    The Laurillard (1993) model asserts that for satisfactory teaching and learning, 12 distinct activities must be covered somehow. Exposition is the first; and in considering its wider place, we are concerned with the first 4 activities: not only exposition by the teacher, but re-expression by the learner, and sufficient iteration between the two to achieve convergence of the learner's understanding with the teacher's conception.

    Re-expression by learners (Laurillard activity 2) is achieved in peer discussion in the MacManaway and Interactive Engagement schemes, and by the quizzes in the OU and JITT schemes. Feedback on correctness (Laurillard activity 3) is provided by peer responses in the IE schemes and by the quiz in the JITT and IE schemes. Remediation more specifically targeted at student problems by the teacher (a fuller instantiation of Laurillard activity 3) is provided in the JITT scheme (because class time is given to questions sent in, in advance), and often in the IE schemes in response to the voting results.

    Thus in terms of the Laurillard model, instead of covering only activity 1 as a strictly expository lecture does, these schemes offer some substantial provision of activities 2, 3 and 4 in quantities and at a frequency approaching those allocated to activity 1, while using only large group occasions and no extra staff time.

    The management layer

    I argue elsewhere that the Laurillard model needs to be augmented by a layer parallel to the one of strictly learning activities: one that describes how the decisions are made about what activities are performed. At least in HE, learning is not automatic but on the contrary, highly intentional and is managed by a whole series of decisions and agreements about what will be done. Students are continually deciding how much and what work to do, and learning outcomes depend on this more than on anything else. In many cases lectures are important in this role, and a major reason for students attending lectures is often to find out what the curriculum really is, and what they are required to do, and what they estimate they really need to do. One reason that simply telling students to read the textbook and come back for the exam often doesn't work well is that, while it covers the function of exposition, it neglects this learning management aspect. Lectures are very widely used to cover it, with many class announcements being made in lectures, and the majority of student questions often being about administrative issues such as deadlines.

    The schemes discussed here (apart from the OU) do not neglect this aspect, so again we can expect them to succeed on these grounds. They do not abolish classes, so management and administrative functions can be covered there as before. In fact the quizzes, and to some extent the peer discussion, offer better information than standard lectures, a textbook, or a lecture script about how a student is doing, both in relation to the teacher's expectations and to the rest of the class. They also do this not just absolutely (do you understand X, which you need to know before the exam?) but in terms of the timeline (you should have understood this by today).

    In addition to this, these schemes also give much superior feedback to the teacher about how the whole course is going for this particular class of students. This equally is part of the management layer. However, standard lectures are never very good for this. While a new, nervous, or uncaring lecturer may pick up nothing about a class's understanding, even a highly skilled one has difficulty, since at best the only information is a few facial expressions and how the one self-selected student answers each question from the lecturer. In contrast, most of the above methods get feedback from every student, and formative feedback for the teacher is crucial to good teaching and learning. What I have found in interviewing adopters of EVS is that while many introduced it in order to increase student engagement, the heaviest users now most value the way it keeps them in much better touch with each particular class than they ever had without it.

    This formative feedback to teachers is important for debugging an exposition they have authored, but is also important for adapting the course to each class, dwelling on the points that this particular set of students finds difficult.

    Other functions of lectures

    Arguments attacking the use of lectures have been made before (Laurillard, 1993). Those seeking to defend them generally stress the other functions than simple exposition that they may perform. One of these is learning management, as discussed in the previous section. Some others are:

    Conclusion

    We began by considering some schemes for replacing the main function of lectures -- exposition -- and then used various pieces of theory to discuss whether the proposed schemes would be likely to be successful at replacing all the functions of a lecture. Overall, while providing exposition in other media alone might be worse than lectures because of neglecting other functions, the proposed schemes should be better because they address all the identified functions and address some important ones better than standard lectures do.

    Thus we can replace some or all exposition in lectures. Furthermore, we can re-purpose these large group meetings to cover other learning activities significantly better than usual. We can feel some confidence in this by a careful analysis of the functions covered by traditional lectures, and the ones thought important in general, and show how these are each covered in proposed new teaching schemes. This in turn leads to two further issues to address.

    Firstly: which functions can in fact be effectively covered in large group teaching with the economies of scale that allows, and which others must be covered in other ways? Besides exposition, and the way the schemes above address Laurillard's activities 1 to 4, other functions that can be addressed in large groups in lecture theatres include:

    Secondly, some aspects of a course can use large group teaching (see above), but all the rest must be done in smaller groups. How small, and how to organise them? One of the most interesting functions to notice is that many of the schemes above use peer discussion, coordinated by the teacher but otherwise not supervised or facilitated by staff. For this the effective size is no more than 5 learners, and 2 or 4 may often be best. Both our experience and published research on group dynamics and conversation structures support this. Instead of clinging to group sizes dictated either by current resources or by what staff are used to (which often leads to "tutorial" group sizes of 6, 10, or 20), we should consider what is effective. When the learning benefit is in the student generating an utterance, then 2 is the best size, since then at any given moment half the students are generating utterances. Where spontaneous and flowing group interaction is required, then 5 is the maximum number. For creating and coordinating a community, then it can be as large as you like provided an appropriate method is used e.g. using EVS to show everyone the degree of agreement and diversity on a question, or having the lecturer summarise written responses submitted earlier.

    However forming groups simply by dividing the number of students by the number of staff is a foolish administrative response, not a pedagogic one. What is the point of groups of 10 or 20? Not much. If the model is for a series of short one to one interactions (which may be relevant for pastoral and counselling functions), then consider how to organise this. Putting a group of students in the same room is obviously inappropriate for this, and ICT makes this less and less necessary. If the model is for more personalised topics e.g. all the students with trouble over subtopic X go to one group, then we need NOT to assign permanent groups, but should organise ad hoc ones based on that subtopic. In general, what the schemes above suggest for the future is to consider a course as involving groups of all sizes, not necessarily permanent, not necessarily supervised; and organised in a variety of ways, including possibly pyramids and unsupervised groups. This is after all only an extension of the eternal expectation that learners will do some work alone: the ultimate small unsupervised group.

    In the end, we should consider:

    References

    Draper, S.W. (1997) "Adding (negotiated) learning management to models of teaching and learning" http://www.psy.gla.ac.uk/~steve/TLP.management.html (visited 24 Feb 2005)

    Dufresne, R.J., Gerace, W.J., Leonard, W.J., Mestre, J.P. & Wenk, L. (1996) "Classtalk: a classroom communication system for active learning" Journal of Computing in Higher Education vol.7 pp.3-47 http://umperg.physics.umass.edu/projects/ASKIT/classtalkPaper

    Hake, R.R. (1991) "My conversion to the Arons-advocated method of science education" Teaching Education vol.3 no.2 pp.109-111 (online PDF copy)

    Hake, R.R. (1998) "Interactive-engagement versus traditional methods: a six-thousand student survey of mechanics data for introductory physics courses" American Journal of Physics vol.66 pp.64-74

    Hunt, D. (1982) "Effects of human self-assessment responding on learning" Journal of Applied Psychology vol.67 pp.75-82

    Laurillard, D. (1993) Rethinking University Teaching (London: Routledge)

    MacManaway, M.A. (1968) "Using lecture scripts" Universities Quarterly vol.22 (June) pp.327-336

    MacManaway, M.A. (1970) "Teaching methods in HE -- innovation and research" Universities Quarterly vol.24 no.3 pp.321-329

    Mazur, E. (1997) Peer Instruction: A User's Manual (Upper Saddle River, NJ: Prentice-Hall)

    Meltzer, D.E. & Manivannan, K. (1996) "Promoting interactivity in physics lecture classes" The Physics Teacher vol.34 no.2 pp.72-76, especially p.74

    Novak, G.M., Gavrin, A.D., Christian, W. & Patterson, E.T. (1999) Just-in-Time Teaching: Blending Active Learning and Web Technology (Upper Saddle River, NJ: Prentice-Hall)

    Novak, G.M., Gavrin, A.D., Christian, W. & Patterson, E.T. (1999) Just in Time Teaching website http://www.jitt.org/ (visited 20 Feb 2005)

    Resnick, L.B. (1989) "Introduction" ch.1 pp.1-24 in L.B. Resnick (ed.) Knowing, Learning and Instruction: Essays in Honor of Robert Glaser (Hillsdale, NJ: Lawrence Erlbaum Associates)


    Last changed 15 Feb 2005 ............... Length about 1700 words (13,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/local.html.

    Using EVS at Glasgow University

    (written by Steve Draper,   as part of the Interactive Lectures website)

    This page is about the use of EVS (electronic voting systems) in lectures at Glasgow University.

    Current person to contact: Chris Mitchell
    email: mitchell@dcs.gla.ac.uk ext.0918
    Or else: Quintin Cutts
    email: quintin@dcs.gla.ac.uk ext.5691

    Past workshops for prospective users     (Past uses)
    Interim evaluation report

    Questions and answers (click to jump to a section)

    Brief introduction

    If you haven't already read a passage explaining what these EVS are about, a brief general account is here.

    To date, student response, and lecturers' perceptions of that, have been almost entirely favourable in an expanding range of trials here at the University of Glasgow (to say nothing of those elsewhere), already involving students in levels 1, 2, 3 and 4, diverse subjects (psychology, medicine, philosophy, computer science, ...), and sequences from one-off uses to every lecture for a term.

    The equipment is mobile, and so can be used anywhere with a few minutes' setup. It additionally requires a PC (laptops are also mobile, and we can supply one if necessary) and a data projector (the machine for projecting a computer's displayed output onto a big screen).

    In principle, the equipment is available for anyone at the university to use, and there is enough for the two largest lecture theatres to be using it simultaneously. In practice, the human and equipment resources are not unlimited, and advance arrangements are necessary. We can accommodate any size audience, but there is a slight chance of too many bookings coinciding for the equipment, and a considerable chance of us not having enough experienced student assistants available at the right time: that is the currently scarcest resource.

    Why would you want to use EVS in your lectures?

    Want to see them in action?

    Look at the bookings, and go and see them in use.

    If it's one of mine you needn't ask, just turn up; and probably other users feel the same. We are none of us expert, yet we all seem to be getting good effects and needn't feel defensive about it. It usually isn't practicable to get 200 students to provide an audience for a realistic demonstration: so seeing a real use is the best option.

    What's involved at the moment of use?

    What's involved at the lecture?

    Ideally (!):

    One way of introducing a new audience to the EVS is described here.

    What preparation is required by the lecturer?

    Equipment?

    There are several alternative modes you could use this in.

    Human resources

    It is MUCH less stressful for a lecturer, no matter how practised at this, if there are assistants to fetch and set up the equipment, leaving the lecturer to supervise the occasion. We have a small amount of resource for providing these assistants.

    What has experience shown can go wrong?

    Generally both the basic PRS equipment and the PRS software itself have proved very reliable, both here and elsewhere. Other things, however, can go wrong.

    Unnecessary technical details

    Most lecturers never need to know about further technical details. But if you want to know about them, about the log files PRS creates, etc., then read on here.


    Last changed 6 June 2004 ............... Length about 1600 words (10,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/start.html.

    Introducing the EVS to a new audience

    (written by Steve Draper,   as part of the Interactive Lectures website)

    Here is one possible way of introducing the EVS (electronic voting system) to a new audience. Below is the script for you, the presenter, to act on; and below that, a slide to use.

    Script for the presenter

    Comments in (small font and parentheses) are optional: you may or may not make them to your audience. Comments in [italics and square brackets] are for you alone from me.

    Assuming the handsets have been distributed, and the time (not necessarily the start of the session) has now come to use or comment on them.

    Slide for use during the introduction

    Here's an HTML impression of the slide, also ready to print (for an OHP). Should be in PowerPoint, sorry.
    Using the handsets

    A. Check handset is turned on -- green light on?

    B. Turn it over and read the 3 digit ID number

    C. Point at a receiver (small box with red light on)

  • Can press H(igh) or L(ow) confidence first

    D. Press the number of your choice
    -- see your ID come up on the screen

  • If your ID doesn't come up, wait a few seconds then try again.

  • Can change your vote, but don't keep sending unnecessarily as you will obstruct others' votes.


    Startup questions

    Problems occasionally observed in audience operation of handsets

    Don't comment on these to the whole audience, but be aware of them in case you see them. These are all problems that have been seen, in e.g. 1 in 50 audience members.

    Problems occasionally observed in lecturer operation of PRS

    The importance of getting every single vote in on the first question(s)

    Finally, I just want to repeat the importance, in the first question or two, of being patient and getting every single audience member's vote to register successfully. If it doesn't work for them on the first question, that person will probably never participate during the rest of the session or even the course: for them, the moment will have passed when they feel able to ask for help. Furthermore, being seen to take such care about this probably sets a valuable tacit precedent that sets everyone up to expect to vote on every question.

    In almost every group we have run, about 1 in 50 of the audience fail to get it to work for them despite considerable effort. However we have failed to identify a pattern, either of the type of person or the type of problem. Furthermore hardly anyone ever asks for help (they are seeing hundreds around them succeed without effort) until they have been explicitly asked several times. Even though it feels like it's holding up the whole session, it is really only a few more minutes. Just keep asking until the total of distinct handset IDs counted on the screen display matches your count of the people/handsets handed out. Keep asking, search the audience with your eyes, and run up and down the aisles (carrying a spare handset or two) to attend to whoever lets slip they have a problem. It may be anything, or even something you can't fix, but usually it's one of: the handset not turned on; a flat handset battery; not pointing the handset at a receiver (but at the screen, or into the head of the person in front); or not being able to recognise their own ID number on the screen.


    Last changed 25 Jan 2003 ............... Length about 300 words (3,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/question.html.

    Presenting a question

    (written by Steve Draper,   as part of the Interactive Lectures website)

    What is involved in presenting each question?

    How to present a question

  • Display the question (but don't start the PRS handset software)
  • Explain it as necessary
  • "Are you ready to answer it? Anything wrong with this question?" and encourage any questions, discussion of the question.
  • Only then, press <start> on the computer system.
  • Audience answers: wait until the total number of votes reaches the full audience total.
  • Display answers (as a bar graph).
  • Always try to make at least one oral comment about the distribution of answers shown on the graph. Partly for "closure"/acknowledgement; partly to slow you up and let everyone see the results.
  • State which answer (if any) was right, and decide what to do next.

    What the presenter does in essence

    The presenter's function is, where and when possible, to:

    What each learner does in essence

    For each question, each learner has to:



    Last changed 6 June 2004 ............... Length about 300 words (2500 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/length.html.

    Length and number of questions

    (written by Steve Draper,   as part of the Interactive Lectures website)

    How many questions? How long do they take?
    A rule of thumb for a 50 minute lecture is to use only 3 EVS questions.

    In a "tutorial" session organised entirely around questions, you could use at most about 12 if there were no discussion: 60 secs to express a question, 90 secs to collect votes, and 90 secs to comment briefly on the responses gives 4 minutes per question if there is no discussion or detailed explanation, and so 12 questions in a lecture.

    Allowing 5 mins (still very short) for discussion by audience and presenter of issues that are not well understood would mean only 5 such questions in a session.

    It is also possible, especially with a large collection of questions ready, to "use up" some by just asking someone to shout out the answer to warm up the audience, and then vote on a few to make sure the whole audience is keeping up with the noisy few. It would only take 20 seconds rather than 4 minutes for each such informal use of a question. Never let the EVS become too central or important: it is only one aid among others.
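
    The arithmetic above can be sketched as a quick session planner. This is a rough sketch only: the function name and defaults are mine, and the per-question timings are just the rules of thumb given above.

```python
# Rough planner for how many EVS questions fit in a session, using the
# per-question timings estimated above (all figures in seconds).
PRESENT = 60    # express the question
VOTE = 90       # collect the votes
COMMENT = 90    # comment briefly on the responses

def questions_per_session(session_mins=50, discussion_mins=0):
    """How many questions fit, allowing optional discussion per question."""
    per_question = PRESENT + VOTE + COMMENT + discussion_mins * 60
    return (session_mins * 60) // per_question

print(questions_per_session())                   # 12: no discussion at all
print(questions_per_session(discussion_mins=5))  # 5: brief discussion each
```

    As the text notes, these are upper bounds; in practice you would prepare more questions than this and use only a few.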

    Thus for various reasons you may want to prepare a large number of questions from which you select only a few, depending on how the session unfolds.





    Last changed 15 Feb 2005 ............... Length about 1000 words (8,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qdesign.html.

    Question formats

    (written by Steve Draper,   as part of the Interactive Lectures website)

    There is a whole art to designing MCQs (multiple choice questions). Much of the literature on this is for assessment. In this context, however, we don't much care (as that literature does) about fairness or discriminatory power, but will instead concentrate on what will maximise learning.

    Here I just discuss possible formats for a question, without varying the purpose or difficulty. I was in part inspired by Michele Dickson of Strathclyde University. The useful tactic implied by her practice is to vary the way questions are asked about each topic. Applied to statistics this might be:

    The idea is to require students to access knowledge of a topic from several different starting points. Here I exercised three kinds of link, and each kind in both directions. Exercising these different types and directions of link is not only important in itself (because understanding requires mastering all of them) but keeps the type of mental demand on the students fresh, even if you are in fact sticking to one topic.

    Types of relationship to exercise / test

    In the abstract there are three different classes of relationship to test:

    The first is that of linking ideas or concepts to particular examples or instances of them, e.g. is a whale a fish or a mammal? Another form of this is linking (engineering or maths) problems with the principle or rule that is likely to be used to solve them. However both concepts and instances are represented in more than one way, and practice at these alternative representations and their equivalences is usually an essential aspect of learning a subject. Thus concepts usually have both a technical name and a definition or description, and testing this relationship is important. Similarly instances usually have more than one standard method of description and, although these are specific to each subject, learners need to master them all, and questions testing these equivalences are important. In teaching the French language, both the spelling and the pronunciation of a word need to be learned. In statistics, an example data set should be representable by a graph, a table of values, and a description such as "bell shaped curve with long tails". In chemistry, the name "copper sulfate" should be linked to "CuSO4" and to a photograph of blue crystals, and questions should test these links. (See Johnstone, A.H. (1991) "Why is science difficult to learn? Things are seldom what they seem" Journal of Computer Assisted Learning vol.7 no.2 pp.75-83 for a related argument based in teaching chemistry.)

    These relationships are all bidirectional, so questions can (and should) be asked in both directions, e.g. both "which of these is a mammal?" and "to which of these categories do dolphins belong?". Thus a subject with three standard representations for instances, plus concept names and concept definitions, will have five representations, and so 20 types of question (pick one of the five for the question, and one of the remaining four for the response categories). Additional variations come from allowing more than one item as an answer, or from asking the question in the negative, e.g. "which of these is not a mammal: mouse, platypus, porpoise?".
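
    The counting argument above can be checked with a short sketch. The representation labels here are illustrative only, borrowing the statistics example: a question type asks in one representation and answers in another, and direction matters.

```python
from itertools import permutations

# Five ways an item can be represented: concept name, concept definition,
# plus three standard instance representations (statistics example).
representations = ["name", "definition", "graph", "table of values",
                   "verbal description"]

# A question type picks one representation to ask from and a different one
# for the response options; direction matters, so use ordered pairs.
question_types = list(permutations(representations, 2))
print(len(question_types))  # 5 * 4 = 20 types of question
```

    Enumerating the pairs like this can also serve as a checklist when writing a question bank, to see which links a set of questions has so far left unexercised.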

    The problem of technical vocabulary is a general one, and suggests that the concept name-definition link should be treated especially carefully. If you ask questions that are problems (real-world cases) and ask which concept applies, but use only the technical names of the concepts, then students must understand both the concept and the vocabulary perfectly; and if they get it wrong you don't know which aspect they got wrong. Asking concept-case questions using not technical vocabulary but paraphrased descriptions of the concepts can separate these aspects; separate questions can then test the name-definition link (i.e. concept vocabulary).

    Further Response Options

    The handsets do not directly allow the audience to specify more than one answer per question. However you can offer at least some combinations yourself e.g.
    "Is a Black Widow:
    1. A spider
    2. An insect
    3. An arachnid
    4. (1) and (2)
    5. (2) and (3)
    6. (1) and (3)
    7. (1) and (2) and (3)
    8. None of the above"
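
    Option lists of this kind can be generated mechanically rather than written by hand. A sketch (the spider/insect/arachnid labels are just the example's, and the ordering of the pairs may differ slightly from the hand-written list):

```python
from itertools import combinations

# Base alternatives, numbered 1..3 as in the example above.
base = ["A spider", "An insect", "An arachnid"]

options = list(base)
# Add every combination of two or more alternatives, written "(i) and (j)".
for size in range(2, len(base) + 1):
    for combo in combinations(range(1, len(base) + 1), size):
        options.append(" and ".join(f"({i})" for i in combo))
options.append("None of the above")

for number, text in enumerate(options, 1):
    print(f"{number}. {text}")
```

    With three base alternatives this gives eight options in all; note that the count grows quickly with more alternatives, which is one reason to keep the base list short.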

    It may or may not be a good idea to include null responses as an option. Against offering them is the idea that you want to force students to commit to an answer rather than do nothing, and also the observation that, when they are provided, usually few take the null option, given the anonymity of entering a guess. Furthermore, a respondent could simply not press any button; although that, for the presenter, is ambiguous between a deliberate rejection of all the alternatives, the equipment giving trouble to some of the audience, and the audience getting bored or disengaged. However if you do include null options as standard, they may give you better, quicker feedback about problems. In fact there are at least three usually applicable distinct null options to use:

    Some references on MCQ design

  • McBeath, R. J. (ed.) (1992) Instructing and Evaluating Higher Education: A Guidebook for Planning Learning Outcomes (New Jersey: ETP)


    Last changed 15 Feb 2005 ............... Length about 3,000 words (23,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qpurpose.html.

    Pedagogical formats for using questions and voting

    (written by Steve Draper,   as part of the Interactive Lectures website)

    EVS questions may be used for many pedagogic purposes. These can be classified in an abstract way, discussed at length elsewhere and summarised here:

    1. Diagnostic SAQs i.e. "self-assessment questions". These give individual formative feedback to students, but also both teacher and learners can see what areas need more attention. The design of sets of these is discussed further on a separate page, including working through an extended example (e.g. of how to solve a problem) with a question at each step. SAQs are a good first step in introducing voting systems to otherwise unmodified lectures.
    2. Initiate a discussion. Discussed further below.
    3. Formative feedback to the teacher i.e. "course feedback".
    4. Summative assessment (even if only as practice) e.g. practice exam questions.
    5. Peer assessment could be done on the spot, saving the teacher administrative time and giving the learner much more rapid, though public, feedback.
    6. Community mutual awareness building. At the start of any group e.g. a research symposium or the first meeting of a new class, the equipment gives a convenient way to create some mutual awareness of the group as a whole by displaying personal questions and having the distribution of responses displayed.
    7. Experiments using human responses: for topics that concern human responses, a very considerable range of experiments can be directly demonstrated using the audience as participants. The great advantage of this is that every audience member both experiences what it is to be a "subject" in the experiment, and sees how variable (or not) the range of responses is (and how their own compares to the average). In a textbook or conventional lecture, neither can be done experientially and personally, only described. Subjects to which this can apply include:
      • Politics (demonstrate / trial voting systems)
      • Psychology (any questionnaire can be administered then shared)
      • Physiology (Take one's pulse: see class' average; auditory illusions)
      • Vision science (display visual illusions; how many "see" it?)

    However pedagogic uses are probably labelled rather differently by practising lecturers, under phrases like "adding a quiz", "revision lectures", "tutorial sessions", "establishing pre-requisites at the start", "launching a class discussion". This kind of category is more apparent in the following sections and groupings of ways to use EVS.

    SAQs and creating feedback for both learner and teacher

    Asking test questions, or "self-assessment questions" (SAQs, so called since only the student knows what answer they individually gave), is useful in more than one way.

    A first cautious use of EVS

    The simplest way to introduce some EVS use into otherwise conventional lectures is to add some SAQs at the end so students can check if they have understood the material. This is simplest for the presenter: just add two or three simple questions near the end without otherwise changing the lecture plan. Students who get them wrong now know what they need to work on. If the average performance is worse than the lecturer likes, she or he can address this at the start of the next lecture. Even doing this in a simple, uninspired way has in fact consistently been viewed positively by students in our developing experience, as they welcome being able to check their understanding.

    Extending this use: Emotional connotations of questions

    If you put up an exam question, its importance and relevance are clear to everyone, and this leads to serious treatment. However, it may reduce discussion even while increasing attention, since to get it wrong is to "fail" in the terms of the course. Asking brain teasers is a way of exercising the same knowledge but without the threatening overtones, and so may be more effective for purposes such as encouraging discussion.

    Putting up arguments or descriptions for criticism may be motivating as well as useful (e.g. describe a proposed experiment and ask what is faulty about it). It allows students to practise criticism, which is useful in itself; and criticism is easier than the constructive proposals which, in effect, are all that most "problem solving" questions ask for, so questions asking for critiques may be a better starting point.

    Thus in extending beyond a few SAQs, presenters may like to vary their question types with a view to encouraging a better atmosphere and more light hearted interaction.

    Contingent teaching: Extending the role of questions in a session

    Test questions can soon lead to trying a more contingent approach, where a session plan is no longer for a fixed lecture sequence of material, but is prepared to vary depending upon audience response. This may mean preparing a large set of questions, those actually used depending upon the audience: this is discussed in "designing a set of questions for a contingent session".

    This approach could be used, for instance, in:


    Designing for discussion

    Another important purpose for questions is to promote discussion, especially peer discussion. A general format might be: pose a question and take an initial vote (this gets each person to commit privately to a definite initial position, and shows everyone what the spread of opinion is). Then, without expressing an opinion or revealing what the right answer (if any) is, tell the audience to discuss it. Finally, you might take a new vote and see if opinions have shifted.

    The general benefit is that peer discussion requires not just deciding on an answer or position (which voting requires) but also generating reasons for and against the alternatives, and perhaps dealing with reasons, objections, and opinions voiced by others. That is, although the MCQ posed only directly asks for an answer, discussion implicitly requires reasons and reasoning, and this is the real pedagogical aim. Furthermore, if the discussion is done in small groups of, say, four, then at any moment one person in four, not just one in the whole room, is engaged in such generation activity.

    There are two classes of question for this: those that really do have a right answer, and those that really don't. (Or, to use Willie Dunn's phrase, those that concern objects of mastery and those that are a focus for speculation.) In the former case, the question may be a "brain teaser" i.e. optimised to provoke uncertainty and dispute (see below). In the latter case, the issue to be discussed simply has to be posed as if it had a fixed answer, even though it is generally agreed it does not: for instance as in the classic debate format ("This house believes that women are dangerous."). Do not assume that a given discipline necessarily only uses one or the other kind of question. GPs (doctors), for instance, according to Willie Dunn in a personal note, "came to distinguish between topics which were a focus for speculation and those which were an object of mastery. In the latter the GPs were interested in what the expert had to say because he was the master, but with the other topics there was no scientifically-determined correct answer and GPs were interested in what their peers had to say as much as the opinion of the expert, and such systems [i.e. like PRS] allowed us to do this."

    Slight differences in format for discussion sessions have been studied: Nicol, D. J. & Boyle, J. T. (2003) "Peer Instruction versus Class-wide Discussion in large classes: a comparison of two interaction methods in the wired classroom" Studies in Higher Education. In practice, most presenters might use a mixture and other variations. The main variables are in the number of (re)votes, and the choice or mixture of individual thought, small group peer discussion, and plenary or whole-class discussion. While small group discussion may maximise student cognitive activity and so learning, plenary discussion gives better (perhaps vital) feedback to the teacher by revealing reasons entertained by various learners, and so may maximise teacher adaptation to the audience. The two leading alternatives are summarised in this table (adapted from Nicol & Boyle, 2003).

    Discussion recipes

    "Peer Instruction": Mazur Sequence
    1. Concept question posed.
    2. Individual thinking: students given time to think individually (1-2 minutes).
    3. [voting] Students provide individual responses.
    4. Students receive feedback -- poll of responses presented as histogram display.
    5. Small-group discussion: students instructed to convince their neighbours that they have the right answer.
    6. Retesting of same concept: [voting] students provide individual responses (revised answer).
    7. Students receive feedback -- poll of responses presented as histogram display.
    8. Lecturer summarises and explains "correct" response.

    "Class-wide Discussion": Dufresne (PERG) Sequence
    1. Concept question posed.
    2. Small-group discussion: small groups discuss the concept question (3-5 mins).
    3. [voting] Students provide individual or group responses.
    4. Students receive feedback -- poll of responses presented as histogram display.
    5. Class-wide discussion: students explain their answers and listen to the explanations of others (facilitated by tutor).
    6. Lecturer summarises and explains "correct" response.

    Questions to discuss, not resolve

    Examples of questions to launch discussion in topics that don't have clear right and wrong answers are familiar from debates and exam questions. The point, remember, is to use a question as an occasion first to remind the group there really are differences of view on it, but mainly to exercise giving and evaluating reasons for and against. The MCQ, like a debate, is simply a conventional provocation for this.

    "Brain teasers"

    Using questions with right and wrong answers to launch discussion is, in practice, less a matter of showing the audience a different kind of question than of a different emphasis in the presenter's purpose. Both look like (and are) tests of knowledge. In both cases, if (but only if) the audience is fairly split in their responses, it is a good idea to ask them to discuss the question with their neighbours and then re-vote, rather than telling them the right answer. In both cases the session becomes more contingent: what happens depends partly on how the discussion goes, not just on the presenter's prepared plan. And in both cases the presenter may need to bring more questions than can be used, and proceed until one turns out to produce the right level of divisiveness in initial responses.

    The difference is only that in the SAQ case the presenter may be focussing on finding weak spots and achieving remediation up to a basic standard, whether the remediation is done by the presenter or by the class as a whole; while in the discussion case the focus may be on the way that peer discussion is engaging and brings benefits in better understanding and more solid retention, regardless of whether understanding was already adequate.

    Nevertheless, optimising a question for diagnosing what the learners know (self-assessment questions) and optimising it for fooling a large proportion and so initiating discussion are not quite the same thing. There are benefits from initiating discussion independently of whether this is the most urgent topic for the class: it promotes the practice of peer interaction; and generating arguments for an answer probably improves the learner's grasp even if they selected the right answer, is more related to deep learning, and promotes their learning of reasons as well as of answers.

    Some questions seem interesting but are hard to get right if you haven't seen that particular question before. Designing a really good brain teaser is not just about a good question, but about creating distractors, i.e. wrong but very tempting answers. The best are really paradoxes: there seem to be excellent reasons for each contradictory alternative. Such questions are ideal for starting discussions, but perhaps less than optimal simply as a fair diagnosis of knowledge. Ideally, the alternative answers should be created to match common learner misconceptions of the topic. One approach is to use the method of phenomenography to collect these misconceptions, and then express the findings as alternative responses to an MCQ.

    Great brain teasers are very hard to design, but may be collected or borrowed, or generated by research.

    Here's an example that enraged me in primary school, but which you can probably "see through".

    "If a bottle of beer and a glass cost one pound fifty, and the beer costs a pound more than the glass, how much does the glass cost?"
    The trap seems to lie in matching the beer to one pound, the glass to fifty pence, and being satisfied that a "more" relation holds.
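    A minimal check of the arithmetic, simply restating the puzzle's two conditions and solving them:

```python
# The two conditions of the puzzle, written out as simple algebra:
#   beer + glass == 1.50   and   beer == glass + 1.00
# Substituting the second into the first gives 2*glass + 1.00 == 1.50.
total, difference = 1.50, 1.00
glass = (total - difference) / 2   # = 0.25, not the tempting 0.50
beer = glass + difference          # = 1.25

assert abs((beer + glass) - total) < 1e-9
assert abs((beer - glass) - difference) < 1e-9
print(f"glass = {glass:.2f}, beer = {beer:.2f}")  # glass = 0.25, beer = 1.25
```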

    Here is one from Papert's Mindstorms p.131 ch.5.

    "A monkey and a rock are attached to opposite ends of a rope that is hung over a pulley. The monkey and the rock are of equal weight and balance one another. The monkey begins to climb the rope. What happens to the rock?"
    His analysis of why this is hard (but not complex) is: students don't have the category of "laws-of-motion problem" in the way they have "conservation of energy problem". I.e. we have mostly learned Newton without having really learned the prerequisite concept of what a law of motion IS. Another view is that it requires you to think of Newton's third law (reaction), and most people can repeat that law without having exercised it much.

    Another example on the topic of Newtonian mechanics can be paraphrased as follows.

    Remember the old logo or advert for Levi's jeans that showed a pair of jeans being pulled apart by two teams of mules pulling in opposite directions. If one of the mule teams were sent away, and its leg of the jeans tied to a big tree instead, would the force (tension) in the jeans be half, the same, or twice what it was with two mule teams?
    The trouble here is seeing how two mule teams can produce no more force than one, when one team clearly produces more than none; on the other hand, one team pulling one leg (while the other leg is tied to the tree) clearly produces force, so a second mule team isn't necessary.

    Brain teasers seem to relate the teaching to students' prior conceptions, since tempting answers are most often those suggested by earlier but incorrect or incomplete ways of thinking.

    Whereas with most questions it is enough to give (eventually) the right answer and explain why it is right, with a good brain teaser it may be important in addition to explain why exactly each tempting wrong answer is wrong. This extra requirement on the feedback a presenter should produce is discussed further here.

    Finally, here is an example of a failed brain teaser: "Isn't it amazing that our legs are exactly the right length to reach the ground?" (This is analogous to some specious arguments that have appeared in cosmology and evolution.) At the meta-level, the puzzle here is to analyse why that is tempting to anyone; it seems to have something to do with starting the analysis from your seat of consciousness in your head (several feet above the ground) and then noticing what a good fit your legs make between this egocentric viewpoint and the ground.


    Extending discussion beyond the lecture theatre

    An idea which Quintin is committed to trying out (again, better) from Sept. 2004 is extending discussion beyond the classroom, using the web. The pedagogical and technical idea is to create software that makes it easy for a presenter to ship a question (for instance the last one used in a lecture, but it could be all of them), perhaps complete with its initial voting pattern, to the web, where the class may continue the discussion with both text comments and voting. Just before the next lecture, the presenter may equally freeze the discussion there and export it (the question, the new voting pattern, perhaps the discussion text) back into PowerPoint for presentation in the first part of the next lecture.

    If this can be made to work pedagogically, socially, and technically, then it would be a unique exploitation of e-learning combined with the advantages of face-to-face campus teaching; and it would be expected to enhance learning, because so much of learning is simply proportional to the time the learner spends thinking: any minutes spent on real discussion outside class are a step in the right direction.

    Direct tests of reasons

    One of the main reasons that discussion leads to learning is that it gets learners to produce reasons for a belief or prediction (or answer to a question), and requires judgements about which reasons to accept and which to reject. This can also be done directly, by questions about reasons.

    Simply give the prediction in the question, and ask which of the offered reasons are the right or best one(s); or which of the offered bits of evidence actually support or disconfirm the prediction.

    Collecting experimental data

    A voting system can obviously be used to collect survey data from an audience. Besides being useful in evaluating the equipment itself, or the course in which it is used (course feedback), this is particularly useful when that data is itself the subject of the course as it may be in psychology, physiology, parts of medical teaching, etc.

    For instance, in teaching the part of perception dealing with visual illusions, the presenter could put up the illusion together with a question about how it is seen. The audience will then see what proportion of them "saw" the illusory percept, and can compare what they are told, their own personal perceptual experience, and the spread of responses in the audience.

    In a practical module in psychology supported by lectures, Paddy O'Donnell and I have had the class design and pilot questionnaire items (questions) in small groups on a topic such as the introduction and use of mobile phones, for which the class is itself a suitable population. Each group then submitted their items to us, and we picked a set, drawing on many people's contributions, to form a larger questionnaire. We then used a session to administer that questionnaire to the class, with them responding using the voting equipment. By the end of that session we had responses from a class of about 100 to a sizeable questionnaire. We could then make that data set available almost immediately to the class, and have them analyse the data and write a report.

    A final year research project has also been run, using this as the data collection mechanism: it allowed a large number of subjects to be "run" simultaneously, which is the advantage for the researcher.

    In a class on the public communication of science, Steve Brindley has surveyed the class on some aspects of the demonstrations and materials he used, since they are themselves a relevant target for such communication, and their preferences for different modes (e.g. active vs. passive presentations) are indicative of the subject of the course: what methods of presenting science are effective, and how people vary in their preferences. He would then begin the next lecture by re-presenting and commenting on the data collected last time.


    Last changed 6 Aug 2003 ............... Length about 1,600 words (10,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/contingent.html.

    Degrees of contingency

    (written by Steve Draper,   as part of the Interactive Lectures website)

    Besides the different purposes for questions (practising exam questions, collecting data for a psychological study, launching discussion on topics without a right or wrong answer), an independent issue is whether the session as a whole has a fixed plan, or is designed to vary contingent (depending) on audience responses. The obvious example of this is to use questions to discover any points where understanding is lacking, and then to address those points. (While direct self-assessment questions are the obvious choice for this diagnosis function, in fact other question types can probably be used.) This is to act contingently. By contingency I mean having the presenter NOT have a fixed sequence of stuff to present, but a flexible branching plan, where which branches actually get presented depends on how the audience answers questions or otherwise shows their needs. There are degrees of this.

    Contents (click to jump to a section)

    Implicit contingency

    First are simple self-assessment questions, where little changes in the session itself depending on how the audience answers, but the implicit hope is that learners will (contingently i.e. depending on whether they got a question right) later address the gaps in their knowledge which the questions exposed, or that the teacher will address them later.

    Whole/part training

    Secondly, we might present a case or problem with many questions in it; but the sequence is fixed. A complete example of a problem being solved might be prepared, with questions at each intermediate step, giving the audience practice and self-assessment at each, and also showing the teacher where to speed up and where to slow down in going over the method.

    An example of this can be found in the box on p.74 of Meltzer, D.E. & Manivannan, K. (1996) "Promoting interactivity in physics lecture classes" The Physics Teacher vol.34 no.2 pp.72-76. It's a sample problem for a basic physics class at university, where a simple problem is broken down into 10 MCQ steps.

    Another way of looking at this is that of training on the parts of a skill or piece of knowledge separately, then again on fitting them together into a whole. Diagnostically, if a learner passes the test for the whole thing, we can usually take it they know it all. But if not, then learning may be much more effective if the pieces are learned separately before being put together. Not only is there less to learn at a time, but more importantly feedback is much clearer, less ambiguous if it is feedback on a single thing at a time. When a question is answered wrongly by everyone, it may be a sign that too much has been put together at once.

    In terms of the lesson/lecture plan, though, there is a single fixed course of events, although learners contribute answers at many steps, with the questions being used to help all the learners converge on the right action at each step.

    Contingent path through a case study

    Thirdly, we could have a prepared case study (e.g. a case presented to physicians), with a fixed start and end point, but where the audience votes on what actions and tests to do next, and the presenter provides the information the audience decided to ask for. Thus the sequence of items depends (is contingent) on the audience's responses to the questions; and the presenter has to have created slides, perhaps with overlays, that allow them to jump and branch in the way required, rather than trudging through a fixed sequence regardless of the audience's responses.

    Diagnosing audience need

    Fourthly, a fully contingent session might be conducted, where the audience's needs are diagnosed, and the time is spent on the topics shown to be needing attention. The plan for such a session is no longer a straight line, but a tree branching at each question posed. The kinds of question you can use for this include:

    Designing a bank of diagnostic questions

    If you want to take diagnosis from test questions seriously, you need to come armed with a large set, selecting each one depending on the response to the last. A fuller scheme for designing such a bank might be:
    1. List the topics you want to cover.
    2. Multiply these by several levels of difficulty for each.
    3. Even within a given topic, and given level of difficulty, you can vary the type of question: the type of link, the direction of link, the specific case. [Link back]
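    The scheme above, combined with selecting each question based on the response to the last, might be sketched as a small lookup structure. All question IDs, topic names, and field names below are invented placeholders, not from any real question bank:

```python
# A minimal sketch of a diagnostic question bank with branching: each entry
# records its topic and difficulty level (steps 1-2 above), and names the
# follow-up question to ask depending on how the audience did.
bank = {
    "Q1":  {"topic": "reliability", "level": 2, "if_right": "Q2", "if_wrong": "Q1a"},
    "Q1a": {"topic": "reliability", "level": 1, "if_right": "Q2", "if_wrong": None},
    "Q2":  {"topic": "validity",    "level": 2, "if_right": None, "if_wrong": "Q2a"},
    "Q2a": {"topic": "validity",    "level": 1, "if_right": None, "if_wrong": None},
}

def next_question(current_id, went_well):
    """Pick the next question; None means drop out of questioning and explain."""
    return bank[current_id]["if_right" if went_well else "if_wrong"]

print(next_question("Q1", False))  # Q1a
```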

    Responding to the answer distribution

    When the audience's answers are in, the presenter must a) state which answer (if any) was right, and b) decide what to do next:

    Selecting the next question

    Decomposing a topic the audience was lost with

    While handset questions are MCQs, the real aim is (when required) to bring out the reasons for and against each alternative answer. When it turns out that most of the audience gets it wrong, how best to decompose the issue? My suggestion is to generate a set of associated part questions.

    One case is when a question links instances (only) to technical terms e.g. (in psychology) "which of these would be the most reliable measure?" If learners get this wrong, you won't know if that is because they don't understand the issues, or this problem, or have just forgotten the special technical meaning of "reliable". In other words, a question may require understanding of both the problem case, and the concepts, and the special technical vocabulary. If very few get it right, it could be unpacked by asking about the vocabulary separately from the other issues e.g. "which of these measures would give the greatest test-retest consistency?". This is one aspect of the problem of technical vocabulary.

    Another case of this concerned top-level problem decomposition in introductory programming. The presenter had a set of problems {P1, P2, P3}, each requiring a program to be designed. He had a set of standard top-level structures {S1, S2, ... e.g. sequential, conditional, iteration}, and the problem the students "should" be able to do is to select the right structure for each given problem. To justify and argue about this means generating a set of reasons for {F1, F2, ...} and against {A1, A2, ...} each structure for each problem. I suggest having a bank of questions to select from here. If there are 3 problems and 5 top-level structures then 2*3*5 = 30 questions. An example of one of these 30 would be a set of alternative reasons FOR using structure 3 (iteration) on problem 2, where the question asks the audience which (subset) of these are good reasons.
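    Enumerating that 30-question bank is just a cross product. This sketch uses the three structure names given in the text plus two placeholder labels (S4, S5) to make five:

```python
from itertools import product

# One bank entry per (stance, problem, structure) combination: 2 * 3 * 5 = 30.
problems = ["P1", "P2", "P3"]
structures = ["sequential", "conditional", "iteration", "S4", "S5"]
stances = ["for", "against"]

bank = [(stance, problem, structure)
        for stance, problem, structure in product(stances, problems, structures)]

print(len(bank))  # 30
# e.g. the question offering alternative reasons FOR iteration on problem P2:
print(("for", "P2", "iteration") in bank)  # True
```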

    The general notion is, that if a question turns out to go too far over the audience's head, we could use these "lower" questions to structure the discussion that is needed about reasons for each answer. (While if everyone gets it right, you speed on without explanation. If half get it right, you go for (audience) discussion because the reasons are there among the audience. But if all get it wrong, support is needed; and these further questions could keep the interaction going instead of crashing out into didactic monologue.)
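    The rule of thumb in this paragraph (speed on, discuss, or decompose, depending on the response split) can be sketched as a tiny decision function; the threshold values here are my own illustrative guesses, not from the text:

```python
def next_move(fraction_correct, low=0.3, high=0.85):
    """Decision rule sketched above; thresholds are illustrative, not canonical.

    - nearly everyone right -> speed on without explanation
    - a fair split          -> peer discussion, then re-vote
    - nearly everyone wrong -> decompose into part questions
    """
    if fraction_correct >= high:
        return "move on"
    if fraction_correct >= low:
        return "peer discussion, then re-vote"
    return "decompose into part questions"

print(next_move(0.9))   # move on
print(next_move(0.5))   # peer discussion, then re-vote
print(next_move(0.1))   # decompose into part questions
```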


    Last changed 27 May 2003 ............... Length about 900 words (6000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/feedback.html.

    Feedback to students

    (written by Steve Draper,   as part of the Interactive Lectures website)

    While the presenter may be focussing on finding the most important topics for discussion and on whether the audience seems "engaged", part of what each learner is doing is seeking feedback. Feedback not only in the sense of "how am I doing?", though that is vital for regulating the direction and amount of effort any rational learner puts in, but also in the sense of diagnosing and fixing errors in their performance and understanding. So "feedback" includes, in general, information about the subject matter, not just about indicators of the learner's performance.

    This can be thought about as levels of detail, discussed at length in another paper, but summarised here. A key point is that, while our image of ideal feedback may be individually judged and personalised information, in fact it can be mass produced for a large class to a surprising extent, so handset sessions may be able to deliver more in this way than expected.

    Levels of feedback (in order of increasing informativeness)

    1. A mark or grade. Handsets do (only) this if, with advanced software, they deliver only an overall mark for a set of questions.
    2. The right answer: a description or specification of the desired outcome. Handset questions do this if the presenter indicates which option was the right answer.
    3. Diagnosis of which part of the learner action (input) was wrong. When a question really involves several issues, or combinations of options, the learner may be able to see that they got one issue right but another wrong.
    4. Explanation of what makes the right answer correct: of why it is the right answer. I.e. the principles and relationships that matter. The presenter can routinely give an explanation (to the whole audience) of the right answer, particularly if enough got it wrong to make that seem worthwhile.
    5. Explanation of what's wrong about the learner's answer. Since handset questions have fixed alternatives, and furthermore may have been designed to "trap" anyone with less than solid knowledge, in fact this otherwise most personal of types of feedback can be given by a presenter to a large set of students at once, since at most one explanation for each wrong option would need to be offered.

    The last (5) is a separate item because the previous one (4) concerned only correct principles, whereas (5) concerns misconceptions and, in general, the negative reasons why apparent connections between this activity and other principles are mistaken. Thus (4) is self-contained and context-free, while (5) is open-ended and depends on the learner's prior knowledge. It is only needed when the learner has not just made a slip but is in the grip of a rooted misconception -- but it is crucial when that is the case. Well-designed "brain teasers" are of this kind: they elicit wrong answers that may be held with conviction. And because mass questions are forced-choice, i.e. MCQs, one can identify in advance what the wrong answers will be and have canned explanations ready.

    Here are two rough tries, applied to actual handset questions posed to an introductory statistics class, at describing the kind of extra explanation that might be desirable. Their common feature is explaining why the wrong options are attractive, but also why they are wrong despite that.

    Example 1. A question on sample vs. population medians.

    The null-hypothesis for a Wilcoxon test could be:
    1. The population mean is 35
    2. The sample mean is 35
    3. The sample median is 35
    4. The population median is 35
    5. I don't know
    Why is it that this vocabulary difference is seductively misleading to half the class? Perhaps because both are artificial views of the same real people: the technical terms don't refer to any real property (like age, sex, or height), just to a stance taken by the analyst. And everyone who is in the sample is in the population. It's like arguing about whether to call someone a woman or a female, when the measure is the average blood type of women or of females. Furthermore, because of this, most investigators don't have a fixed idea of either sample or population. They would like their conclusions to apply to the population of all possible people, alive and unborn; they know it is likely that they only apply to a limited population; but they will only discuss this in the last paragraph of their report, long after getting the data and doing the stats. Similarly, they are continually reviewing whom to use as a sample. So not only are these unreal properties that exist only in the mind of the analyst, but they are continually shifting there in most cases. (None of this casts doubt on the utility of the concepts; it is just about why they may stay fuzzy in learners' minds for longer than you might expect.)
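    Canned per-option explanations of this kind (level 5 feedback, as described earlier) are easy to prepare as data. The sketch below uses the Wilcoxon question above; the explanation wordings are my own paraphrase, not the presenter's notes:

```python
# Canned explanations for each option of the Wilcoxon question above.
# The wordings are illustrative paraphrases, not from the original class.
question = {
    "text": "The null-hypothesis for a Wilcoxon test could be:",
    "correct": "4",
    "explanations": {
        "1": "Tempting, but the Wilcoxon test concerns medians, not means.",
        "2": "The sample mean is simply computed from the data; it needs no hypothesis.",
        "3": "Likewise the sample median is computed, not hypothesised.",
        "4": "Right: null hypotheses are claims about the population median.",
        "5": "'I don't know' -- no explanation needed, just the right answer.",
    },
}

def feedback(chosen_option):
    """Return (verdict, canned explanation) for the option a student chose."""
    verdict = "correct" if chosen_option == question["correct"] else "wrong"
    return verdict, question["explanations"].get(chosen_option, "(none prepared)")

print(feedback("2"))
```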

    Example 2. Regression Analysis: Reading versus Motivation

    Predictor   Coef     SE Coef   T      P
    Constant    2.074    1.980     1.05   0.309
    Motivati    0.6588   0.3616    1.82   0.085

    The regression equation is Reading = 2.07 + 0.659 Motivation
    S = 2.782     R-Sq = 15.6%     R-Sq(adj) = 10.9%

    Which of the following statements are correct?
    a. There seems to be a negative relationship between Motivation and Reading ability.
    b. Motivation is a significant predictor of reading ability.
    c. About 11% of the variability in the Reading score is explained by the Motivation score.

    1. a
    2. ab
    3. c
    4. bc
    5. I don't know
    There was something cunning in the question on whether the correlation was significant or not, with a p value of 0.085. Firstly, it isn't instantly easy to convert 0.085 to 8.5%, i.e. about 1 in 12: 0.085 looks like a negligible number at first glance. And secondly, the explanation didn't mention the wholly arbitrary and conventional nature of picking 0.05 as the threshold of "significance".
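    The arithmetic of that conversion, plus the conventional threshold, amounts to a few lines:

```python
# The p-value conversion mentioned above, plus the conventional 0.05 threshold.
p = 0.085
as_percent = 100 * p          # 8.5%
one_in = round(1 / p)         # roughly "1 in 12"
significant = p < 0.05        # False: misses the conventional (and arbitrary) cut-off

print(f"p = {p} is {as_percent:.1f}%, about 1 in {one_in}; significant: {significant}")
```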

    For more examples, see some of the examples of brain teasers, which in essence are questions especially designed to need this extra explanation.


    Last changed 21 Feb 2003 ............... Length about 700 words (5,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/manage.html.

    Designing and managing a teaching session

    (written by Steve Draper,   as part of the Interactive Lectures website)

    Any session or lecture can be thought of as having 3 aspects, all of which ideally will be well managed. If you are designing a new kind of session (e.g. with handsets) you may want to think about these aspects explicitly. They are:

    Feedback to the presenter

    In running a session, the presenter has to make various judgements on the fly, deciding:


    Last changed 20 Feb 2005 ............... Length about 200 words (2,000 bytes).
    (Document started on 6 Jan 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qbanks.html. You may copy it. How to refer to it.

    Question banks available on the web

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    This page is to collect a few pointers to sets of questions that might be used with EVS that are available on the web. Further suggestions and pointers are welcome.

    For first year physics at University of Sydney: their webpage     and a local copy to print off as one document.

    The Galileo project has some examples if you register online with them.

    The SDI (Socratic Dialogue Inducing) lab has some examples.

    JITT: just-in-time teaching: example "warmup questions"

    ?Roy Tasker


    Last changed 15 Feb 2005 ............... Length about 900 words (6000 bytes).
    (Document started on 15 Feb 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/evidence.html. You may copy it. How to refer to it.

    Kinds of evidence about the effectiveness of EVS

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    There are basically three classes of evidence to consider:



    Last changed 22 Aug 2005.............Length about 2162 words (21,000 bytes).
    (Document started on 6 Jan 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/bib.html. You may copy it. How to refer to it.

    Ad hoc bibliography on EVS

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    This page is an ad hoc bibliography of papers about EVS. I expect to paste in lists of references I come across without checking them: you use these at your own risk, but they could be a useful starting point. Please send in suggestions: great papers, useful papers, papers you have written, corrections to entries already here. Attached word documents or HTML is preferred (or I may not do the formatting). I will probably only include either published journal articles and books or reports on the web; and I may exclude anything without explanation.

    ABRAHAMSON, A.L. (1998) An Overview of Teaching and Learning Research with Classroom Communication Systems. Paper presented at the International Conference of the Teaching of Mathematics, Samos, Greece.

    ANDERSON, T., HOWE, C., & TOLMIE, A. (1996). Interaction and mental models of physics phenomena: evidence from dialogue between learners. In J. OAKHILL & A. GARNHAM, Mental Models in Cognitive Science. Psychology Press. Imprint of Erlbaum (UK) Taylor & Francis Ltd.

    ANDERSON, T., HOWE, C., SODEN, R., HALLIDAY, J. & LOW, J. (2001). Peer interaction and the learning of critical thinking skills in further education students. Instructional Science, 29, 1-32.

    Angelo, T.A., and K.P. Cross (1993). Minute Paper. In Classroom Assessment Techniques: A Handbook for College Teachers, San Francisco: Jossey-Bass, 148-153.

    Annett, D. (1969): Feedback and Human Behaviour. New York, Penguin.

    Bligh, D. (2000), What’s the use of lectures? San Francisco: Jossey-Bass.

    Bloom’s Taxonomy : http://www.officeport.com/edu/blooms.htm

    Bloom, B. S. (1956). Taxonomy of Educational Objectives, the Classification of Educational Goals — Handbook I: Cognitive Domain. New York: McKay.

    Boyle, J.T. & Nicol, D.J. (2003) "Using classroom communication systems to support interaction and discussion in large class settings" Association for Learning Technology Journal vol.11 no.3 pp.43-57

    Brookfield, S. D. (1995), Becoming a critically reflective teacher, San Francisco: Jossey-Bass.

    BROWN, J.S., COLLINS, A. & DUGUID, P. (1989). Situated cognition and the culture of learning. Educational Researcher. 18, 32-42.

    Brumby, M.N. (1984), ‘Misconceptions about the concept of natural selection by medical biology students’, Science Education, 68(4), 493-503.

    Bruner, J. (1985), ‘Vygotsky: a historical and conceptual perspective’ in Wertsch, J.V. (ed), Culture, communication and cognition. Cambridge: Cambridge University Press.

    Burnstein, R. A. & Lederman, L. M. (2001). Using wireless keypads in lecture classes. The Physics Teacher, 39, 8-11.

    Champagne, A. B., Klopfer, L. E. and Anderson, J. H. (1980). Factors influencing the learning of classical mechanics. American Journal of Physics, 48, 1074-1079.

    Chickering, Arthur W. and Gamson, Zelda F, Seven Principles for Good Practice in Undergraduate Education - http://aahebulletin.com/public/archive/sevenprinciples1987.asp

    Christianson, R. G. & Fisher, K. M. (1999). Comparison of student learning about diffusion and osmosis in constructivist and traditional classrooms. International Journal of Science Education, 21, 687-698.

    Cohen, E.G. (1994), ‘Restructuring the classroom: Conditions for productive small groups’, Review of Educational Research, 64(1), 3-35.

    Comlekci, T., Boyle, J.T., King, W., Dempster, W., Lee, C.K., Hamilton, R. and Wheel, M.A. (1999), ‘New approaches in mechanical engineering education at the University of Strathclyde in Scotland: I — Use of Technology for interactive teaching’, in Saglamer, G. (ed), Engineering Education in the Third Millennium, Leuchtturm-Verlag.

    Crouch, C.H. and Mazur, E. (2001), ‘Peer Instruction: Ten years of experience and results’, American Journal of Physics, 69, 970-977

    Crouch, C., Fagen, A., Callan, P. & Mazur, E., ‘Classroom demonstrations: learning tools or entertainment?’, Am. J. Physics, in press.

    Cutts,Q. Carbone,A., & van Haaster,K. (2004) "Using an Electronic Voting System to Promote Active Reflection on Coursework Feedback" To appear in Proc. of the Intnl. Conf. on Computers in Education 2004, Melbourne, Australia, Nov. 30th — Dec 3rd 2004.

    Cutts,Q. Kennedy,G., Mitchell,C., & Draper,S.W. (2004) "Maximising dialogue in lectures using group response systems" Accepted for 7th IASTED Internat. Conf. on Computers and Advanced Technology in Education, Hawaii, 16-18th August 2004

    Cutts,Q.I. & Kennedy, G.E. (2005) "Connecting Learning Environments Using Electronic Voting Systems" Seventh Australasian Computer Education Conference, Newcastle, Australia. Conferences in Research and Practice in Information Technology, Vol 42, Alison Young and Denise Tolhurst (Eds)

    Cutts,Q.I. & Kennedy,G.E. (2005) "The association between students’ use of an electronic voting system and their learning outcomes" Journal of Computer Assisted learning vol.21 pp.260–268

    DeCorte, E. (1996), ‘New perspectives on learning and teaching in higher education’, in Burgen, A. (ed.), Goals and purposes of higher education, London: Jessica Kingsley.

    DOISE, W & MUGNY, G. (1984). The social development of the intellect. Oxford: Pergamon.

    Draper, S. W., Cargill, J. and Cutts, Q. (2002). Electronically enhanced classroom interaction. Australian Journal of Educational Technology, 18, 13-23.

    Draper, S.W. (1998) "Niche-based success in CAL" Computers and Education vol.30, pp.5-8

    Draper,S.W. & Brown,M.I. (2004) "Increasing interactivity in lectures using an electronic voting system" Journal of Computer Assisted Learning vol.20 pp.81-94

    Draper,S.W., Brown, M.I., Henderson,F.P. & McAteer,E. (1996) "Integrative evaluation: an emerging role for classroom studies of CAL" Computers and Education vol.26 no.1-3, pp.17-32

    Draper,S.W., Cargill,J., & Cutts,Q. (2002) Electronically enhanced classroom interaction Australian journal of educational technology vol.18 no.1 pp.13-23. [This paper gives the arguments for interaction, and how EVS might be a worthwhile support for that.]

    Dufresne, R.J., Gerace, W.J., Leonard, W.J., Mestre, J.P., & Wenk, L. (1996) Classtalk: A Classroom Communication System for Active Learning Journal of Computing in Higher Education vol.7 pp.3-47 http://umperg.physics.umass.edu/projects/ASKIT/classtalkPaper

    Educational Technologies at Missouri, University of Missouri, Introducing Student Response Systems at MU http://etatmo.missouri.edu/toolbox/doconline/SRS.pdf

    Edwards, H., Smith, B.A. and Webb, G. (2001), Lecturing: Case studies, experience and practice, London: Kogan Page.

    Elliott, C. (2003) Using a personal response system in economics teaching. International Review of Economics Education. Accessed 11 Nov 2004 http://www.economics.ltsn.ac.uk/iree/i1/elliott.htm

    Elliott,C. (2001) "Case Study: Economics Lectures Using a Personal Response System" http://www.economics.ltsn.ac.uk/showcase/elliott_prs.htm

    Frey, Barbara A. and Wilson, Daniel H. 2004, Student Response Systems, Teaching, Learning, and Technology: Low Threshold Applications - http://jade.mcli.dist.maricopa.edu/lta/archives/lta37.php

    Glaser, R. (1990), ‘The re-emergence of learning theory within instructional research’, American Psychologist, 45(1), 29-39.

    ** Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand student survey of mechanics data for introductory physics courses. American Journal of Physics, 66, 64-74.

    Hake,R.R. (1991) "My Conversion To The Arons-Advocated Method Of Science Education" Teaching Education vol.3 no.2 pp.109-111 online pdf copy

    Halloun, I.A. and Hestenes, D. (1985), ‘The initial knowledge state of college physics students’, American Journal of Physics, 53, 1043-1055.

    Horowitz,H.M. (1988) "Student Response Systems: Interactivity in the Classroom Environment" IBM Learning Research http://www.qwizdom.com/fastrack/interactivity_in_classrooms.pdf

    Horowitz,H.M. (2003) "Adding more power to powerpoint using audience response technology" http://www.socratec.com/FrontPage/Web_Pages/study.htm

    Howe, C. J. (1991) "Explanatory concepts in physics: towards a principled evaluation of teaching materials" Computers and Education vol.17 no.1 pp.73-80

    Hunt, D. (1982) "Effects of human self-assessment responding on learning" Journal of Applied Psychology vol.67 pp.75-82.

    Inverno, R. "Making Lectures Interactive", MSOR Connections Feb 2003, Vol.3, No.1 pp.18-19

    Irving,A., M. Read, A. Hunt & S. Knight (2000) Use of information technology in exam revision Proc. 4th International CAA Conference Loughborough, UK http://www.lboro.ac.uk/service/fi/flicaa/conf2000/pdfs/readm.pdf

    Kearney, M. (2002). Description of Predict-observe-explain strategy supported by the use of multimedia. Retrieved April 8, 2004, from Learning Designs Web site: http://www.learningdesigns.uow.edu.au/exemplars/info/LD44/

    Kolb, D. A. (1984): Experiential Learning: Experience as the source of learning and development. Englewood Cliffs, NJ, Prentice Hall.

    Laurillard, D. (1993), Rethinking university teaching, London: Routledge.

    LAVE, J & WENGER, E (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press, Cambridge.

    Ledlow, Susan, 2001, Center for Learning and Teaching Excellence http://clte.asu.edu/active/lesspre.htm

    MacGregor, J., Cooper, J.L., Smith, K.A. and Robinson, P. (2000), Strategies for energizing large classes: From small groups to learning communities, San Francisco: Jossey-Bass.

    MacManaway,M.A. (1968) "Using lecture scripts" Universities Quarterly vol.22 no.June pp.327-336

    MacManaway,M.A. (1970) "Teaching methods in HE -- innovation and research" Universities Quarterly vol.24 no.3 pp.321-329

    Marton, F., and Säljö, R. (1976). On qualitative differences in learning: I — outcome and process. British Journal of Educational Psychology, 46, 4-11.

    Matthews, R.S. (1996), ‘Collaborative Learning: creating knowledge with students’, in Menges, M., Weimer, M. and Associates. Teaching on solid ground, San Francisco: Jossey-Bass.

    Mayes, T. (2001), ‘Learning technology and learning relationships’, in J. Stephenson (ed), Teaching and learning online, London: Kogan Page.

    Mazur, E. (1997). Peer Instruction: A User’s Manual. Upper Saddle River, NJ:Prentice-Hall.

    McCabe et al. (2001) The Integration of Group Response Systems into Teaching, 5th International CAA Conference, http://www.lboro.ac.uk/service/fi/flicaa/conf2001/pdfs/d2.pdf

    McDermott, L.C. (1984), ‘Research on conceptual understanding in mechanics’, Physics Today, 37 (7) 24-32.

    Meltzer,D.E. & Manivannan,K. (1996) "Promoting interactivity in physics lecture classes" The physics teacher vol.34 no.2 p.72-76

    Nicol, D. J. & Boyle, J. T. (2003) "Peer Instruction versus Class-wide Discussion in large classes: a comparison of two interaction methods in the wired classroom" Studies in Higher Education vol.28 no.4 pp.457-473

    Novak,G.M., Gavrin,A.D., Christian,W. & Patterson,E.T. (1999) Just-in-time teaching: Blending Active Learning and Web Technology (Upper Saddle River, NJ: Prentice-Hall)

    Novak,G.M., Gavrin,A.D., Christian,W. & Patterson,E.T. (1999) http://www.jitt.org/ Just in Time Teaching (visited 20 Feb 2005)

    Palinscar, A.S. (1998), "Social constructivist perspectives on teaching and learning", Annual Review of Psychology, 49, 345-375.

    Panetta, K.D., Dornbush, C. and Loomis, C. (2002), "A collaborative learning methodology for enhanced comprehension using TEAMThink" Journal of Engineering Education, 223-229.

    Philipp, Sven and Schmidt, Hilary (2004) Optimizing learning and retention through interactive lecturing: Using the Audience Response System (ARS) at CUMC, http://library.cpmc.columbia.edu/cere/web/facultyDev/ARS_handout_2004_overview.pdf

    Pickford, R. and Clothier, H. (2003) "Ask the Audience: A simple teaching method to improve the learning experience in large lectures", Proceedings of the Teaching, Learning and Assessment in Databases conference, LTSN ICS.

    Poulis, J., Massen, C., Robens, E. and Gilbert, M. (1998). Physics lecturing with audience paced feedback. American Journal of Physics, 66, 439-441.

    REITER, S. N. (1994). Teaching dialogically: its relationship to critical thinking in college students. In P. R. PINTRICH, D. R. BROWN & C. E. WEINSTEIN (eds). Student motivation, cognition and learning. Lawrence Erlbaum, New Jersey.

    RESNICK, L.B. (1989). Knowing, learning and instruction: Essays in honour of Robert Glaser. Lawrence Erlbaum Associates, Hillsdale, New Jersey.

    Resnick,L.B. (1989) "Introduction" ch.1 pp.1-24 in L.B.Resnick (Ed.) Knowing, learning and instruction: Essays in honor of Robert Glaser (Hillsdale, NJ: Lawrence Erlbaum Associates).

    Shapiro, J. A. (1997). Journal of Computer Science and Technology, May issue, 408-412. http://www.physics.rutgers.edu/~shapiro/SRS/instruct/index.html

    Sharma,M. (2002a). Interactive lecturing using a classroom communication system. Proceedings of the UniServe Science Workshop, April 2002, University of Sydney, 87-89. http://science.uniserve.edu.au/pubs/procs/wshop7/

    Sokoloff, D. R. and Thornton, R. K. (1997). Using interactive lecture demonstrations to create an active learning environment. The Physics Teacher, 35, 340-347.

    Springer, L., Stanne, M.E., and Donovon, S. (1999), ‘Effects of small group learning on undergraduates in science, mathematics, engineering and technology: A meta-analysis’, Review of Educational Research, 69(1), 50-80.

    Stuart,S.A.J., & Brown,M.I. (2003-4) "An electronically enhanced philosophical learning environment: Who wants to be good at logic?" Discourse: Learning and teaching in philosophical and religious studies vol.3 no.2 pp.142-153

    Stuart,S.A.J., & Brown,M.I. (2004) "An evaluation of learning resources in the teaching of formal philosophical methods" Association of Learning Technology Journal - Alt-J vol.11 no.3 pp.58-68

    Stuart,S.A.J., Brown,M.I. & Draper,S.W. (2004) "Using an electronic voting system in logic lectures: one practitioner's application" Journal of Computer Assisted Learning vol.20 pp.95-102

    Teaching, Learning, and Technology Center, University of California, 2001, Educational Technology Update: Audience Response Systems Improve Student Participation in Large Classes, http://www.uctltc.org/news/2004/03/ars.html

    Thornton, R. K. and Sokoloff, D. R. (1998). Assessing student learning of Newton’s laws: The Force and Motion Conceptual Evaluation and the evaluation of active learning laboratory and lecture curricula. American Journal of Physics, 66, 338-352.

    Tobias, S. (1994). They’re Not Dumb, They’re Different: Stalking the Second Tier. Tucson, USA: Research Corporation, a Foundation for the Advancement of Science.

    Topping,K., The effectiveness of peer tutoring in further and higher education: A typology and review of the literature. Higher Education 32(3), 1996, 321-345.

    Treagust, D. F. (1988). Development and use of diagnostic tests to evaluate students' misconceptions in science. International Journal of Science Education, 10, 159-169.

    Tyson, L. M. and Bucat, R.B. (1995). Chemical equilibrium: Using a two-tier test to examine students' understanding. Western Australia Science Education Association Annual Conference.

    Uhari,M., Marjo Renko and Hannu Soini (2003) "Experiences of using an interactive audience response system in lectures" BMC [BioMed Central] Medical Education vol.3 article 12 http://www.biomedcentral.com/1472-6920/3/12#IDATWC2D

    Van Dijk, L. A., Van Den Berg, G. C. and Van Keulen, H. (2001). Interactive lectures in engineering education. European Journal of Engineering Education, 26, 15-28.

    VYGOTSKY, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.

    West, L.H.T. and Pines, A.L. (1985), Cognitive structure and conceptual change, New York: Academic Press.

    Wit,E. (2003) "Who wants to be... The use of a Personal Response System in Statistics Teaching" MSOR Connections Volume 3, Number 2: May 2003, p.5-11 (publisher: LTSN Maths, Stats & OR Network)

    Wolfman,S., Making lemonade: exploring the bright side of large lecture classes, Proc. SIGSCE '02, Covington, Kentucky, 2002, 257-261.


    Last changed 23 Jul 2004 ............... Length about 900 words (6000 bytes).
    (Document started on 23 Jul 2004.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/faq.html. You may copy it. How to refer to it.

    Other frequent questions

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    This page is a place for other questions frequently asked by inquirers, but not answered elsewhere in these web pages.

    How many handsets do you lose?

    xx

    xx

    xx





    Last changed 7 Aug 2005 ............... Length about 4,000 words (37,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/tech.html.

    EVS technologies, alternatives, vendors

    (written by Steve Draper,   as part of the Interactive Lectures website)

    Contents (click to jump to a section)

    What are the alternative methods and technologies to the PRS handsets we've bought? In fact this is all part of a wider set of choices. Our own approach at Glasgow University adopted the position:

    The ideal system for this would allow huge groups of students to register a response to a MCQ (multiple choice question) with privacy (secret ballot), and have the results immediately summarised, and the summary displayed. Feedback to individual students (e.g. by LCDs on handsets) could be nice. Best of all would be to escape the MCQ strategy and have open-ended input from each audience member.
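    The core loop described above — collect one response per handset, tally, and display a summary at once — can be sketched in a few lines. This is purely illustrative (it is not any vendor's API; the function and data names are invented for the example), but it shows why the MCQ restriction makes immediate summarisation so cheap: a tally over a fixed set of options.

    ```python
    # Illustrative sketch only, not a real EVS interface: tally MCQ votes
    # from handsets and produce an immediately displayable summary.
    from collections import Counter

    def summarise_votes(votes, options="ABCDE"):
        """Tally MCQ responses and return a text bar-chart summary.

        votes: iterable of (handset_id, choice) pairs. Only each handset's
        last vote counts, mimicking systems that let students revise answers.
        """
        last_vote = {}
        for handset_id, choice in votes:
            last_vote[handset_id] = choice      # later votes overwrite earlier ones
        counts = Counter(last_vote.values())
        total = sum(counts.values()) or 1       # avoid division by zero
        lines = []
        for opt in options:
            n = counts.get(opt, 0)
            bar = "#" * round(20 * n / total)   # scale bars to at most 20 chars
            lines.append(f"{opt}: {n:3d} {bar}")
        return "\n".join(lines)

    # Handset 101 changes its mind from A to B; C wins 3 votes out of 4.
    responses = [(101, "A"), (102, "C"), (103, "C"), (101, "B"), (104, "C")]
    print(summarise_votes(responses, options="ABC"))
    ```

    An open-ended (free-text) system, by contrast, has no fixed option set to tally over, which is exactly why it is the hard case discussed later on this page.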

    Besides these web pages containing our views, there are some other reports on what technology to adopt:

    Non-MCQ

    There are many other interactive techniques than MCQs. See for example:
  • Summary map of techniques
  • Notes from the Social Policy and social work LTSN / Bristol.
  • More pointers.
  • A journal article: Charman, D.J. & Fullerton, H. (1995) Journal of Geography in Higher Education "Interactive Lectures: a case study in a geographical concepts course" vol.19 no.1 pp.41-55
  • "Interactive lectures: 20 years later" by Thiagi (2002).
  • Steinert,Y. & Snell,L.S. (1999) "Interactive lecturing: strategies for increasing participation in large group presentations" Medical teacher vol.21 no.1 pp.37-42.
  • Bligh,D. (2000) What's the use of lectures? (Josey-Bass: San Francisco)
  • Edwards,H., Smith,B.A. & Webb,G. (2001) Lecturing: Case studies, experience and practice (Kogan Page: London)
  • MacGregor,J., Cooper,J.L., Smith,K.A. & Robinson,P. (2000) Strategies for energizing large classes: From small groups to learning communities (Josey-Bass: San Francisco)

    Non-electronic, but MCQ

    Even with MCQs adopted as the means of student interaction, there are non-electronic methods that are possible and have in fact been heavily used, both in the recent past and now.

    Electric but not wireless (MCQ)

    There have been, and perhaps still are, cases where particular rooms have had systems installed based on wiring rather than wireless technology. Some examples of this are described in the History section at the end of this page. Nowadays the installation or even the cable costs of such systems would outweigh those of wireless ones, and they are tied to a single room besides.

    Electronic voting (MCQ) technologies

    This section lists special-purpose electronic handset voting systems (as opposed to general-purpose computers) that support MCQs. Non-MCQ systems, which allow open-ended responses from the audience, are discussed in a later section. And don't forget the alternative of using computers: one PC per student (discussed in this paper) and in a section below.

    For another view you could look at this 5 Aug 2005 news article by news.com, and 3 ads. The article also reports that "U.K market research firm DTC Worldwide, which tracks the global market for education technology, expects that 8 million clickers ... will be sold annually by 2008".

    Open-ended audience input

    The key function that MCQ-oriented technology cannot cover is allowing open-ended (e.g. free text) input from each audience member, rather than just indicating a selection from a small, fixed set of alternatives.

    The most obvious method is to teach in rooms or labs with a computer per student (or at least per group of students); and use the network instead of infrared to interact. If the computers use wireless networking, then the system could be mobile and flexible. (See this discussion.)

    However, other specialised equipment allows some of this.

    Near future

    As of 2002, near-future solutions look likely to include text messaging from mobile phones, or equipping all audience members with a PDA (personal digital assistant, i.e. palm-top computer) with wireless networking.

    There are several issues with this.


    Matt Jones (mattj@cs.waikato.ac.nz) has a paper on trying SMS mobile phone text messaging in this way:
    Jones,M. & Marsden,G. (2004) "'Please turn ON your mobile phone' -- First impressions of text-messaging in lectures" (University of Waikato, Computer Science Tech Report (07/2004))
    In that study:

    Nevertheless, the students were favourable to this. So it is feasible, provided you don't mind only some of the audience managing to vote successfully.

    Mark Finn has a journal paper reviewing projects to date that have used PDAs in teaching.

    Previous cases of classroom voting technology

    I'm interested in having a history of classroom voting technology; and putting it here. I repeatedly hear rumours about a "system just like that": these (so far) turn out to be wired (as opposed to wireless) handset-equipped lecture theatres. Such systems are of course not mobile but tied to particular rooms, and support only MCQs, not open-ended responses from the audience.

    A comment suggested by John Cowan, and in different words by Willie Dunn, is that these systems represented a first effort towards engaging with learning rather than teaching. Their importance was perhaps more in that shift than in the technology: when the equipment, or more generally the feedback classrooms, were abandoned, it was as much a move to take the underlying change in attitude further into new forms as a step backwards.


    Last changed 18 May 2003 ............... Length about 900 words (6000 bytes).
    (Document started on 18 May 2003.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/techtech.html. You may copy it. How to refer to it.

    Some technical details on PRS

    By Steve Draper,   Department of Psychology,   University of Glasgow.

    This page is about details that most lecturers using the PRS handset system will never need or want to know. For the rest of you, ....

    High/low confidence buttons
    ID numbers
    Time stamps
    Log files
    Range
    Angles
    How to install it; magic password; obscure error messages


    Last changed 23 May 2005 ............... Length about 900 words (16,000 bytes).
    This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/people.html.

    UK handset users and sites

    (written by Steve Draper,   as part of the Interactive Lectures website)

    This page lists some sites and people I know of in the UK mainly in Higher Education who are interested in classroom handsets, PRS, or similar approaches to interactive teaching. If you would like to be added or removed, or if you can suggest someone else who should be listed here, please email me (s.draper@psy.gla.ac.uk) and I will act promptly on your request. People have found it very useful to discover who is already interested in EVS or PRS near them, and conversely to advertise here their interest to others in their institution or city.

    (To see why we are interested in this technique, and other information about why you might be interested, look at the parent page to this one.)

    I suggest that if you are looking for a person or place, you use your browser's "Find" command to search this page for (part of) the name you are interested in.

    This page is organised firstly by (university) site with just a few key people mentioned: it would not be practical to mention them all. The order is idiosyncratic: expect to search using the Find command, not by scanning by eye. This page just contains people I happen to know about: it is not likely to be complete. Again, if you would like to be added or removed, or if you can suggest someone else who should be listed here, please email me (s.draper@psy.gla.ac.uk). Also, any pointers to papers and web documents on this would be gratefully received.

    (PRS is used widely in some places outside the UK, including Hong Kong University of Science and Technology, UMass/Amherst, Rutgers, University of British Columbia, North Dakota, and UC Berkeley. See also http://www.educue.com/users.htm for mainly USA sites using PRS.)

    Strathclyde University

    Prof. Jim Boyle, in Mechanical Engineering, may be the longest-standing practitioner in the UK, starting with non-technologically supported "peer instruction" in 1997, then using an earlier equipment system (Classtalk), and then being the first UK purchaser of PRS in 1998. He has by now modified not just his teaching practice but the timetable and the architecture of his teaching rooms, and been involved in project NATALIE. A number of papers based on evaluations by David Nicol are becoming available. The longest-standing use is in the first year class in Mechanical Engineering, but there is some other use of the equipment at Strathclyde now, e.g. in maths (Geoff McKay), in psychology, in teaching foreign languages (Michele Dickson), and in student induction.

    University of Glasgow

    We have tried out the use of handsets since October 2001 in a variety of departments including Philosophy, Psychology, Computing Science, Biological sciences, Medicine, Vet School and the Dental School (with GPs), with audience sizes from 20 to 300, and with students in level 1 to level 4. See our interim report on this. See also the various papers published, listed on the main web page. Three contacts are Steve Draper (Psychology), Quintin Cutts (Computing Science), and Susan Stuart in Philosophy.

    University of Central England (in Birmingham)

    Bill Madill (Bill.Madill@uce.ac.uk) of the School of Property and Construction wrote an MEd thesis on Peer Instruction. He has a case study of using PRS available on the web: synopsis and full case study.

    University of Wales, college of medicine (Cardiff)

    They bought a (non-PRS) system and used it for a while from 1997, but it may have fallen into disuse: see this paper by Joe Nicholls.
    Wendy Sadler in Physics and Astronomy is buying a set for school liaison as well as for students.

    University of Portsmouth

    Michael McCabe (michael.mccabe@port.ac.uk) was awarded a 3-year HEFCE National Teaching Fellowship for Project LOLA (Live and On-Line Assessment -- the proposal is available). The live part of the assessment relates to the use of interactive classrooms in face-to-face teaching, which includes PRS handsets as one approach. Other papers are listed on the main page.

    An unconfirmed report says the psychology group also bought PRS equipment.

    University of Lancaster

    Caroline Elliott (Economics dept.) has done featured work on using handsets in 2000/1. The dept. of Accounting and Finance also uses them regularly (see here). There is a set of about 150 handsets: contact Sue Armitage.

    University of Southampton

    Su White and Hugh Davis and others in Electronics and Computer Science have acquired some equipment and begun exploring its use in teaching from 2002.

    Ray d'Inverno in Maths has begun using PRS, and has an early report and pedagogic rationale for PRS use.

    University of Nottingham

    Liz Sockett in Genetics is a big fan, and uses them extensively.

    Science Museum (London)

    Deborah Scopes (d.scopes@nmsi.ac.uk) has been exploring the use of handsets as an enhancement to public debates and lectures on science.

    University of Bath

    Nick Vaughan (Mechanical Engineering) may have used handsets (perhaps the CPS system).

    He supervised an undergraduate project that did a study on potential use, comparing PRS, Classtalk, and CPS:
    J.Smith (2001) Dialogue and interaction in a classroom environment (Final year research project, School of Mechanical Engineering, University of Bath).

    University of Ulster

    Edwin Curran says PRS was installed ready for Sept 2003 in a 170 seat lecture theatre in Engineering, plus a portable system.

    University of Liverpool

    Both CPS and PRS used there. Doug Moffat (Mechanical Engineering).

    Liverpool John Moores University

    Laura Bishop (Palaeoanthropologist) and Clare Milsom (Geologist) are considering introducing EVS use there.

    University of Salford

    PRS used there. Elizabeth Laws, Engineering.

    Kingston University

    George Masikunas, Andreas Panayiotidis, and others at the Kingston University (Kingston Upon Thames) business school introduced PRS in 2003-4 and use it for first year classes of about 250 students, where small groups are required to discuss and agree answers to the questions posed.

    University of Edinburgh

    A set of PRS kit for loan to lecturers has been purchased by the Media and Learning Technology Service. Contact Nora Mogey. Alistair Bruce is considering significant teaching improvements in Physics that might include PRS, and has written a short article on this.

    University of Central Lancashire at Preston

    Mick Wood is leading the introduction of EVS (using IML not PRS kit) there, with a first application in Sports Psychology.

    University of Wolverhampton

    Apparently use PRS to teach computing. Alison Halstead.

    University College London

    Martin Oliver (Education and Professional Development; martin.oliver@ucl.ac.uk; 020 7679 1905) has produced a report on whether using handsets might be worthwhile at UCL.

    University of Surrey

    Vicki Simpson (Centre for Learning Development; v.simpson@surrey.ac.uk) has been advising on possible adoption of the handsets. They now have an Interactive Presenter system with 250 handsets that they have recently started to pilot for the delivery of lectures in the School of Biomedical and Molecular Sciences.

    University of Keele

    Stephen Bostock in the Staff Development Centre became interested in using PRS, in the meantime introduced coloured cubes as a substitute, but now has radio-connected voting handsets working with interactive whiteboards around campus.

    University of Northumbria

    Chris Turnock (or here) is currently co-ordinating staff development in the use of the system before undertaking further evaluation of its use within the university. Paul Barlow (paul.barlow@unn.ac.uk) in the School of Humanities is considering applying EVS in the Arts area.

    University of Leeds

    Tony Lowe at Leeds is looking into using them in the school of computing. Leeds now has a simple introductory website for EVS.

    University of Aberdeen

    Phil Marston of the learning technology unit has bought a small set, and is evaluating them. http://www.abdn.ac.uk/diss/ltu/pmarston/prs/. (or if you have already registered, then here).

    Robert Gordon University (in Aberdeen)

    Roger McDermott in the school of computing started to use them in various classes from October 2004. The faculty of health and social care has also taken up their use.

    Kings College London

    Ann Wilkinson is looking into EVS use. Professor Simon Howell in Biomedical Sciences is believed to have used an EVS.

    Bangor University

    Paul Wood (r.p.wood@bangor.ac.uk) is installing a 120 set PRS system in a lecture theatre in November 2004 and plans to start trials with enthusiastic academics.

    Coventry University

    Anne Dickinson has been investigating possible use, particularly of the Discourse equipment, and has written a report. She also has a project on it: "Investigation of the potential use of a classroom communication system in the Higher Education context"

    Lewisham College, London

    Raja Habib and Raul Saraiva are looking into using PRS on behalf of their college.

    Brooklands College, Weybridge, Surrey

    Theresa Willis

    National University of Ireland, Galway

    Ann Torres has started (October 2004) using PRS for teaching Marketing Principles to a class of over 300.

    Bournemouth University

    Kimberley Norman in the Learning Design Studio is interested.

    Brighton University

    Gareth Reast is looking into purchasing a set of CPS (eInstruction); perhaps for use in the business school and applied social sciences.

    University of Hertfordshire

    Andy Oliver is leading a purchase likely to go ahead, buying EVS for audiences of 250; at the moment they are likely to go for eInstruction.

    Roehampton University

    Andy Curtis is going to purchase some kit, and get it used.

    Have made enquiries

    Brian Whalley (b.whalley@queens-belfast.ac.uk), Geography, Queens, Belfast.
    Mike Watkinson (m.watkinson@qmul.ac.uk), Chemistry, Queen Mary, University of London
    Stuart Jones, Geology dept., University of Durham