Last changed 19 Sept 2008 ............... Length about 3,000 words (26,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/qpurpose.html.

Web site logical path: [www.psy.gla.ac.uk] [~steve] [EVSmain] [this page]
Question design: [one question] [purposes] [contingent sessions] [feedback] [whole sessions]

Pedagogical formats for using questions and voting

(written by Steve Draper,   as part of the Interactive Lectures website)

EVS questions may be used for many pedagogic purposes. These can be classified abstractly; they are discussed at length elsewhere and summarised here:

  1. Diagnostic SAQs i.e. "self-assessment questions". These give individual formative feedback to students, but also both teacher and learners can see what areas need more attention. The design of sets of these is discussed further on a separate page, including working through an extended example (e.g. of how to solve a problem) with a question at each step. SAQs are a good first step in introducing voting systems to otherwise unmodified lectures.
  2. Initiate a discussion. Discussed further below.
  3. Formative feedback to the teacher i.e. "course feedback".
    1. In fact you will get this anyway without planning to. For instance, SAQs will also tell you how well the class understands things.
    2. To organise a session explicitly around this, look at contingent teaching;
    3. To think more directly about how questioning students can help teachers and promote learning directly, look at this book on "active assessment": Naylor,S., Keogh,B., & Goldsworthy,A. (2004) Active assessment: Thinking, learning, and assessment in science (London: David Fulton Publishers)
    4. The above are about feedback to the teacher of learners' grasp of content. You can also ask about other issues concerning the students' views of the course as in course feedback questionnaires (which could be administered by EVS).
    5. Combining that with the one minute paper technique would give you some simple open-ended feedback to combine with the "numbers" from the EVS voting.
    6. A more sophisticated (but time-consuming) version of this would combine collecting issues from the students with then asking EVS survey questions about each such issue. This is a form of having students design questions, which is described further below.
  4. Summative assessment (even if only as practice) e.g. practice exam questions.
  5. Peer assessment could be done on the spot, saving the teacher administrative time and giving the learner much more rapid, though public, feedback.
  6. Community mutual awareness building. At the start of any group e.g. a research symposium or the first meeting of a new class, the equipment gives a convenient way to create some mutual awareness of the group as a whole by displaying personal questions and having the distribution of responses displayed.
  7. Experiments using human responses: for topics that concern human responses, a very considerable range of experiments can be directly demonstrated using the audience as participants. The great advantage of this is that every audience member both experiences what it is to be a "subject" in the experiment, and sees how variable (or not) the range of responses is (and how their own compares to the average). In a textbook or conventional lecture, neither can be done experientially and personally, only described. Subjects where this can apply include psychology, physiology, and parts of medical teaching.
  8. Having students design questions: this is relatively little used, but has all the promise of a powerfully mathemagenic tactic. Just as peer discussion moves learners from just picking an answer (perhaps by guessing) to arguing about reasons for answers, so designing MCQs gets them thinking much more deeply about the subject matter.

However, pedagogic uses are probably labelled rather differently by practising lecturers, under phrases like "adding a quiz", "revision lectures", "tutorial sessions", "establishing pre-requisites at the start", or "launching a class discussion". This kind of category is more apparent in the following sections and groupings of ways to use EVS.

SAQs and creating feedback for both learner and teacher

Asking test questions, or "self-assessment questions" (SAQs, so called because only the student knows what answer they individually gave), is useful in more than one way.

A first cautious use of EVS

The simplest way to introduce some EVS use into otherwise conventional lectures is to add some SAQs at the end so students can check whether they have understood the material. This is simplest for the presenter: just add two or three simple questions near the end without otherwise changing the lecture plan. Students who get them wrong now know what they need to work on. If the average performance is worse than the lecturer would like, she or he can address this at the start of the next lecture. Even doing this in a simple, uninspired way has, in our experience so far, consistently been viewed positively by students, as they welcome being able to check their understanding.

Extending this use: Emotional connotations of questions

If you put up an exam question, its importance and relevance is clear to everyone and leads to serious treatment. However, it may reduce discussion even while increasing attention, since to get it wrong is to "fail" in the terms of the course. Asking brain teasers is a way of exercising the same knowledge, but without the threatening overtones, and so may be more effective for purposes such as encouraging discussion.

Putting up arguments or descriptions for criticism may be motivating as well as useful (e.g. describe a proposed experiment and ask what is faulty about it). It allows students to practise criticism, which is useful in itself; and criticism is easier than the constructive proposals that most "problem solving" questions exclusively ask for, so questions asking for critiques may be a better starting point.

Thus in extending beyond a few SAQs, presenters may like to vary their question types with a view to encouraging a better atmosphere and more light-hearted interaction.

Contingent teaching: Extending the role of questions in a session

Test questions can soon lead to trying a more contingent approach, where a session plan is no longer for a fixed lecture sequence of material, but is prepared to vary depending upon audience response. This may mean preparing a large set of questions, those actually used depending upon the audience: this is discussed in "designing a set of questions for a contingent session".

This approach could be used, for instance, in:


Designing for discussion

Another important purpose for questions is to promote discussion, especially peer discussion. A general format might be: pose a question and take an initial vote (this gets each person to commit privately to a definite initial position, and shows everyone what the spread of opinion on it is). Then, without expressing an opinion or revealing the right answer (if any), tell the audience to discuss it. Finally, you might take a new vote and see if opinions have shifted.

The general benefit is that peer discussion requires not just deciding on an answer or position (which voting requires) but also generating reasons for and against the alternatives, and perhaps dealing with reasons, objections, and opinions voiced by others. That is, although the MCQ posed only directly asks for an answer, discussion implicitly requires reasons and reasoning, and this is the real pedagogical aim. Furthermore, if the discussion is done in small groups of, say, four, then at any moment one person in four, not just one person in the whole room, is engaged in such generation activity.

There are two classes of question for this: those that really do have a right answer, and those that really don't. (Or, to use Willie Dunn's phrase, those that concern objects of mastery and those that are a focus for speculation.) In the former case, the question may be a "brain teaser" i.e. optimised to provoke uncertainty and dispute (see below). In the latter case, the issue to be discussed simply has to be posed as if it had a fixed answer, even though it is generally agreed it does not: for instance as in the classic debate format ("This house believes that women are dangerous."). Do not assume that a given discipline necessarily only uses one or the other kind of question. GPs (doctors), for instance, according to Willie Dunn in a personal note, "came to distinguish between topics which were a focus for speculation and those which were an object of mastery. In the latter the GPs were interested in what the expert had to say because he was the master, but with the other topics there was no scientifically-determined correct answer and GPs were interested in what their peers had to say as much as the opinion of the expert, and such systems [i.e. like PRS] allowed us to do this."

Slight differences in format for discussion sessions have been studied: Nicol, D. J. & Boyle, J. T. (2003) "Peer Instruction versus Class-wide Discussion in large classes: a comparison of two interaction methods in the wired classroom" Studies in Higher Education. In practice, most presenters might use a mixture and other variations. The main variables are in the number of (re)votes, and the choice or mixture of individual thought, small group peer discussion, and plenary or whole-class discussion. While small group discussion may maximise student cognitive activity and so learning, plenary discussion gives better (perhaps vital) feedback to the teacher by revealing reasons entertained by various learners, and so may maximise teacher adaptation to the audience. The two leading alternatives are summarised in this table (adapted from Nicol & Boyle, 2003).

Discussion recipes

"Peer Instruction": Mazur Sequence
  1. Concept question posed.
  2. Individual thinking: students given time to think individually (1-2 minutes).
  3. [voting] Students provide individual responses.
  4. Students receive feedback -- poll of responses presented as histogram display.
  5. Small-group discussion: students instructed to convince their neighbours that they have the right answer.
  6. Retesting of same concept: [voting] students provide individual responses (revised answer).
  7. Students receive feedback -- poll of responses presented as histogram display.
  8. Lecturer summarises and explains "correct" response.

"Class-wide Discussion": Dufresne (PERG) Sequence
  1. Concept question posed.
  2. Small-group discussion: small groups discuss the concept question (3-5 mins).
  3. [voting] Students provide individual or group responses.
  4. Students receive feedback -- poll of responses presented as histogram display.
  5. Class-wide discussion: students explain their answers and listen to the explanations of others (facilitated by tutor).
  6. Lecturer summarises and explains "correct" response.
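The "poll of responses presented as histogram display" step in both recipes can be sketched in a few lines of code. This is a minimal tally-and-display sketch, not the software of any particular EVS product; the vote letters and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical handset votes for a concept question with options A-D.
votes = ["A", "C", "C", "B", "C", "A", "D", "C", "B", "C"]

tally = Counter(votes)
total = len(votes)
for option in "ABCD":
    n = tally.get(option, 0)
    bar = "#" * n  # one character per vote
    print(f"{option}: {bar:<10} {n:>2} ({100 * n / total:.0f}%)")
```

A split like this one (half the class on C, the rest scattered) is exactly the pattern that, on the recipes above, signals that discussion and a re-vote are worthwhile.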

Questions to discuss, not resolve

Examples of questions to launch discussion in topics that don't have clear right and wrong answers are familiar from debates and exam questions. The point, remember, is to use a question as an occasion first to remind the group there really are differences of view on it, but mainly to exercise giving and evaluating reasons for and against. The MCQ, like a debate, is simply a conventional provocation for this.

"Brain teasers"

Using questions with right and wrong answers to launch discussion is, in practice, less a matter of showing a different kind of question to the audience and more a different emphasis in the presenter's purpose. Both look like (and are) tests of knowledge. In both cases, if (but only if) the audience is fairly split in its responses, it is a good idea to ask them to discuss the question with their neighbours and then re-vote, rather than telling them the right answer. In both cases the session becomes more contingent: what happens depends partly on how the discussion goes, not just on the presenter's prepared plan. And in both cases the presenter may need to bring a larger set of questions than can be used, proceeding until one turns out to produce the right level of divisiveness in initial responses.

The difference is only that in the SAQ case the presenter may be focussing on finding weak spots and achieving remediation up to a basic standard whether the discussion is done by the presenter or class as a whole, while in the discussion case, the focus may be on the way that peer discussion is engaging and brings benefits in better understanding and more solid retention regardless of whether understanding was already adequate.

Nevertheless, optimising a question for diagnosing what the learners know (self-assessment questions), and optimising it for fooling a large proportion of the class and so initiating discussion, are not quite the same thing. There are benefits from initiating discussion independently of whether this is the most urgent topic for the class: it promotes the practice of peer interaction; generating arguments for an answer probably improves the learner's grasp even if they had selected the right answer; it is more related to deep learning; and it promotes learning of reasons as well as of answers.

Some questions seem interesting but hard to get right if you haven't seen that particular question before. Designing a really good brain teaser is not just about a good question, but about creating distractors, i.e. wrong but very tempting answers. In fact, the best are really paradoxes: there seem to be excellent reasons for each contradictory alternative. Such questions are ideal for starting discussions, but perhaps less than optimal for simply being a fair diagnosis of knowledge. Ideally, the alternative answers should be created to match common learner misconceptions for the topic. One approach would be to use the method of phenomenography to collect these misconceptions, and then express the findings as alternative responses to an MCQ.

Great brain teasers are very hard to design, but may be collected or borrowed, or generated by research.

Here's an example that enraged me in primary school, but which you can probably "see through".

"If a bottle of beer and a glass cost one pound fifty, and the beer costs a pound more than the glass, how much does the glass cost?"
The trap seems to lie in matching the beer to one pound, the glass to fifty pence, and being satisfied that a "more" relation holds.
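The intended answer can be checked with a line of algebra: if the glass costs g and the beer costs g + 1.00, then 2g + 1.00 = 1.50, so g = 0.25. A minimal sketch of that check (amounts in pounds):

```python
total = 1.50       # bottle of beer plus glass
difference = 1.00  # the beer costs a pound more than the glass

# From glass + (glass + difference) = total:
glass = (total - difference) / 2
beer = glass + difference

assert abs((glass + beer) - total) < 1e-9
assert abs((beer - glass) - difference) < 1e-9

# The tempting wrong answer (glass = 0.50) fails the "more" condition:
# the beer would then be 1.00, only 0.50 more than the glass.
print(f"glass = {glass:.2f}, beer = {beer:.2f}")  # glass = 0.25, beer = 1.25
```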

Here is one from Papert's Mindstorms p.131 ch.5.

"A monkey and a rock are attached to opposite ends of a rope that is hung over a pulley. The monkey and the rock are of equal weight and balance one another. The monkey begins to climb the rope. What happens to the rock?"
His analysis of why this is hard (but not complex) is: students don't have the category of "laws-of-motion problem" in the way they have the category of "conservation of energy problem". That is, we have mostly learned Newton without having really learned the pre-requisite concept of what a law of motion IS. Another view is that it requires you to think of Newton's third law (reaction), and most people can repeat that law without having exercised it much.

Another example on the topic of Newtonian mechanics can be paraphrased as follows.

Remember the old logo or advert for Levi's jeans that showed a pair of jeans being pulled apart by two teams of mules pulling in opposite directions. If one of the mule teams was sent away, and their leg of the jeans tied to a big tree instead, would the force (tension) in the jeans be: half, the same, or twice what it was with two mule teams?
The trouble here is: how can two mule teams produce no more force than one team, when one team clearly produces more than no teams? On the other hand, one team pulling one leg (while the other is tied to the tree) clearly produces force, so a second mule team isn't necessary.

Another one (taken from the book "The Tipping Point") can be expressed:

Take a large piece of paper, fold it over, then do that again and again a total of 50 times. How tall do you think the final stack is going to be?
Somehow even those who have been taught better tend to think it will be about 50 times the thickness of a piece of paper, whereas really the thickness is doubled 50 times, i.e. the stack is 2 to the 50th power thicknesses: a huge number, which comes out at roughly the distance from here to the sun.
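The arithmetic is easy to check. A minimal sketch, assuming a sheet thickness of about 0.1 mm (both that figure and the Earth-Sun distance are stock approximations, not from the original text):

```python
THICKNESS_M = 1e-4       # assume a sheet is about 0.1 mm thick
EARTH_SUN_M = 1.496e11   # mean Earth-Sun distance in metres (approx.)

folds = 50
stack_m = THICKNESS_M * 2 ** folds  # doubling 50 times, not multiplying by 50
naive_m = THICKNESS_M * folds       # the tempting wrong answer: 5 mm

print(f"naive guess: {naive_m * 1000:.1f} mm")
print(f"actual stack: {stack_m:.2e} m "
      f"({stack_m / EARTH_SUN_M:.2f} of the way to the sun)")
```

With these figures the stack comes out at about 1.1e11 metres, i.e. roughly three quarters of the way to the sun, which is why the puzzle is so counter-intuitive.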

Brain teasers seem to relate the teaching to students' prior conceptions, since tempting answers are most often those suggested by earlier but incorrect or incomplete ways of thinking.

Whereas with most questions it is enough to give (eventually) the right answer and explain why it is right, with a good brain teaser it may be important in addition to explain why exactly each tempting wrong answer is wrong. This extra requirement on the feedback a presenter should produce is discussed further here.

Finally, here is an example of a failed brain teaser. "Isn't it amazing that our legs are exactly the right length to reach the ground?" (This is analogous to some specious arguments that have appeared in cosmology and evolution.) At the meta-level, the brain teaser or puzzle here is to analyse why that is tempting to anyone; it seems to be something to do with starting the analysis from your seat of consciousness in your head (several feet above the ground), and then noticing what a good fit your legs make between this egocentric viewpoint and the ground.

May need a link here on to the page seq.html about designing sequences with/of questions. And on from there to lecture.html.

Extending discussion beyond the lecture theatre

An idea which Quintin is committed to trying out (again, better) from Sept. 2004 is extending discussion, using the web, beyond the classroom. The pedagogical and technical idea is to create software to make it easy for a presenter to ship a question (for instance the last one used in a lecture, but it could be all of them), perhaps complete with initial voting pattern, to the web where the class may continue the discussion with both text discussion and voting. Just before the next lecture, the presenter may equally freeze the discussion there and export it (the question, new voting pattern, perhaps discussion text) back into powerpoint for presentation in the first part of their next lecture.

If this can be made to work pedagogically, socially, and technically, then it would be a unique exploitation of e-learning with the advantages of face to face campus teaching; and it would be expected to enhance learning, because so much of learning is simply proportional to the time the learner spends thinking: any minutes spent on real discussion outside class are a step in the right direction.

Direct tests of reasons

One of the main reasons that discussion leads to learning is that it gets learners to produce reasons for a belief or prediction (or answer to a question), and requires judgements about which reasons to accept and which to reject. This can also be done directly by questions about reasons.

Simply give the prediction in the question, and ask which of the offered reasons are the right or best one(s); or which of the offered bits of evidence actually support or disconfirm the prediction.

Collecting experimental data

A voting system can obviously be used to collect survey data from an audience. Besides being useful in evaluating the equipment itself, or the course in which it is used (course feedback), this is particularly useful when that data is itself the subject of the course as it may be in psychology, physiology, parts of medical teaching, etc.

For instance, in teaching the part of perception dealing with visual illusions, the presenter could put up the illusion together with a question about how it is seen, and the audience will then see the proportion of the audience that "saw" the illusory percept, and compare what they are told, their own personal perceptual experience, and the spread of responses in the audience.

In a practical module in psychology supported by lectures, Paddy O'Donnell and I have had the class design and pilot questionnaire items (questions) in small groups on a topic such as the introduction and use of mobile phones, for which the class is itself a suitable population. Each group then submitted their items to us, and we picked a set drawing on many people's contributions to form a larger questionnaire. We then used a session to administer that questionnaire to the class, with them responding using the voting equipment. By the end of that session we had responses from a class of about 100 to a sizeable questionnaire. We could then make that data set available almost immediately to the class, and have them analyse the data and write a report.
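The resulting data set is essentially a response matrix: one row per student, one column per questionnaire item. A minimal sketch of the kind of analysis the class might then do, with hypothetical Likert-scale votes invented for illustration:

```python
# Hypothetical questionnaire data: each row is one student's responses
# (votes on a 1-5 scale) to three items, as collected by the handsets.
responses = [
    [4, 2, 5],
    [3, 3, 4],
    [5, 1, 4],
    [4, 2, 3],
]

# Per-item mean: transpose the matrix and average each column.
items = list(zip(*responses))
means = [sum(col) / len(col) for col in items]
for i, m in enumerate(means, start=1):
    print(f"item {i}: mean response {m:.2f}")
```

In the real class the matrix would of course be ~100 rows deep and many columns wide, but the immediate availability of the data in this simple form is what makes the analyse-and-report exercise possible.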

A final year research project has also been run, using this as the data collection mechanism: it allowed a large number of subjects to be "run" simultaneously, which is the advantage for the researcher.

In a class on the public communication of science, Steve Brindley has surveyed the class on some aspects of the demonstrations and materials he used, since the students are themselves a relevant target for such communication, and their preferences for different modes (e.g. active vs. passive presentations) are indicative of the subject of the course: what methods of presentation of science are effective, and how people vary in their preferences. He would then begin the next lecture by re-presenting and commenting on the data collected last time.
