Last changed 6 Aug 2003 ............... Length about 1,600 words (10,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/contingent.html.


Degrees of contingency

(written by Steve Draper, as part of the Interactive Lectures website)

Besides the different purposes for questions (practising exam questions, collecting data for a psychological study, launching discussion on topics without a right or wrong answer), an independent issue is whether the session as a whole has a fixed plan, or is designed to vary contingently (i.e. depending) on audience responses. The obvious example of this is to use questions to discover any points where understanding is lacking, and then to address those points. (While direct self-assessment questions are the obvious choice for this diagnostic function, other question types can probably be used as well.) This is to act contingently. By contingency I mean that the presenter does NOT have a fixed sequence of material to present, but a flexible branching plan, where which branches actually get presented depends on how the audience answers questions or otherwise shows its needs. There are degrees of this.

Contents (click to jump to a section)
  Implicit contingency
  Whole/part training
  Contingent path through a case study
  Diagnosing audience need
  Designing a bank of diagnostic questions
  Responding to the answer distribution
  Selecting the next question
  Decomposing a topic the audience was lost with

Implicit contingency

First are simple self-assessment questions, where little in the session itself changes depending on how the audience answers, but the implicit hope is that learners will later (contingently, i.e. depending on whether they got a question right) address the gaps in their knowledge which the questions exposed, or that the teacher will address them later.

Whole/part training

Secondly, we might present a case or problem with many questions in it; but the sequence is fixed. A complete example of a problem being solved might be prepared, with questions at each intermediate step, giving the audience practice and self-assessment at each, and also showing the teacher where to speed up and where to slow down in going over the method.

An example of this can be found in the box on p.74 of Meltzer, D.E. & Manivannan, K. (1996) "Promoting interactivity in physics lecture classes" The Physics Teacher vol.34 no.2 pp.72-76. It's a sample problem for a basic physics class at university, where a simple problem is broken down into 10 MCQ steps.

Another way of looking at this is as training on the parts of a skill or piece of knowledge separately, and then again on fitting them together into a whole. Diagnostically, if a learner passes the test for the whole thing, we can usually take it that they know it all. But if not, then learning may be much more effective if the pieces are learned separately before being put together. Not only is there less to learn at a time but, more importantly, feedback is much clearer and less ambiguous when it is feedback on a single thing at a time. When a question is answered wrongly by everyone, it may be a sign that too much has been put together at once.

In terms of the lesson/lecture plan, though, there is a single fixed course of events, although learners contribute answers at many steps, with the questions being used to help all the learners converge on the right action at each step.

Contingent path through a case study

Thirdly, we could have a prepared case study (e.g. a case presented to physicians), with a fixed start and end point, but where the audience votes on what actions and tests to do next, and the presenter provides the information the audience decided to ask for. Thus the sequence of items depends on (is contingent on) the audience's responses to the questions; and the presenter has to have created slides, perhaps with overlays, that allow them to jump and branch in the way required, rather than trudging through a fixed sequence regardless of the audience's responses.
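To make the branching concrete, here is a minimal sketch (in Python, with an invented, purely illustrative medical case and made-up node names, not material from any real session) of how such a contingent case study could be represented: each node holds the material to present plus a vote question, and each vote option names the node to jump to next.

    # A sketch of a branching case-study plan. All content below is
    # invented for illustration; only the branching structure matters.
    case = {
        "start": {
            "material": "Patient presents with chest pain.",
            "question": "What do you do next?",
            "options": {"order ECG": "ecg", "take history": "history"},
        },
        "ecg": {
            "material": "ECG shows ST elevation.",
            "question": "Next step?",
            "options": {"thrombolysis": "end", "more tests": "history"},
        },
        "history": {
            "material": "History reveals recent exertion.",
            "question": "Next step?",
            "options": {"order ECG": "ecg", "discharge": "end"},
        },
        "end": {"material": "Case summary and debrief.", "question": None, "options": {}},
    }

    # Walk the case: at each node present the material, pose the vote,
    # then jump to whichever branch the audience chose.
    node = "start"
    while case[node]["question"] is not None:
        print(case[node]["material"])
        print(case[node]["question"], list(case[node]["options"]))
        choice = next(iter(case[node]["options"]))   # stand-in for the audience's majority vote
        node = case[node]["options"][choice]
    print(case[node]["material"])

The point of the structure is that the presenter prepares all the branches in advance, but only the ones the audience votes for are actually shown.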

Diagnosing audience need

Fourthly, a fully contingent session might be conducted, where the audience's needs are diagnosed, and the time is spent on the topics shown to need attention. The plan for such a session is no longer a straight line, but a tree branching at each question posed. Direct self-assessment questions are the obvious kind to use for this, though other question types can serve as well; designing a bank of such questions is discussed next.

Designing a bank of diagnostic questions

If you want to take diagnosis from test questions seriously, you need to come armed with a large set of questions, selecting each one depending on the response to the last. A fuller scheme for designing such a bank might be the following (a code sketch follows the list):
  1. List the topics you want to cover.
  2. Multiply these by several levels of difficulty for each.
  3. Even within a given topic, and a given level of difficulty, you can vary the type of question: the type of link, the direction of link, the specific case.
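As a rough illustration of that scheme, the sketch below (Python; all names, fields, and the selection rule are my assumptions, not taken from the text) indexes a question bank by topic, difficulty level, and question type, and picks the next question contingent on whether the last one was answered correctly.

    from dataclasses import dataclass

    @dataclass
    class Question:
        topic: str
        difficulty: int      # e.g. 1 = easiest, 3 = hardest
        qtype: str           # e.g. "type of link", "direction of link", "specific case"
        text: str
        options: list[str]
        answer: int          # index of the correct option

    class QuestionBank:
        def __init__(self, questions: list[Question]):
            self.questions = questions

        def pick(self, topic: str, difficulty: int) -> Question | None:
            """Return a question on this topic at this difficulty, if any."""
            for q in self.questions:
                if q.topic == topic and q.difficulty == difficulty:
                    return q
            return None

    def next_question(bank: QuestionBank, last: Question, was_correct: bool) -> Question | None:
        """Select the next question contingent on the last response."""
        if was_correct:
            # Audience coped: move up a level on the same topic.
            return bank.pick(last.topic, last.difficulty + 1)
        # Audience struggled: drop down a level on the same topic.
        return bank.pick(last.topic, last.difficulty - 1)

The two multiplications in the list above (topics by difficulty levels by question types) are what make the bank large enough for genuinely contingent selection.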

Responding to the answer distribution

When the audience's answers are in, the presenter must a) state which answer (if any) was right, and b) decide what to do next: speed on, open the question up for audience discussion, or decompose the topic with further questions, as discussed in the sections below.

Selecting the next question

Decomposing a topic the audience was lost with

While handset questions are MCQs, the real aim is (when required) to bring out the reasons for and against each alternative answer. When it turns out that most of the audience gets it wrong, how best to decompose the issue? My suggestion is to generate a set of associated part questions.

One case is when a question links instances (only) to technical terms e.g. (in psychology) "which of these would be the most reliable measure?" If learners get this wrong, you won't know if that is because they don't understand the issues, or this problem, or have just forgotten the special technical meaning of "reliable". In other words, a question may require understanding of both the problem case, and the concepts, and the special technical vocabulary. If very few get it right, it could be unpacked by asking about the vocabulary separately from the other issues e.g. "which of these measures would give the greatest test-retest consistency?". This is one aspect of the problem of technical vocabulary.

Another case of this was about the top-level problem decomposition in introductory programming. The presenter had a set of problems {P1, P2, P3}, each requiring a program to be designed. He had a set of standard top-level structures {S1, S2, ... e.g. sequential, conditional, iteration}, and the problem the students "should" be able to do is to select the right structure for each given problem. To justify or argue about this means generating a set of reasons for {F1, F2, ...} and against {A1, A2, ...} each structure for each problem. I suggest having a bank of questions to select from here. If there are 3 problems and 5 top-level structures, then (counting reasons for and reasons against separately) 2*3*5 = 30 questions. An example of one of these 30 would be a set of alternative reasons FOR using structure 3 (iteration) on problem 2, and the question asks the audience which (subset) of these are good reasons.
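A short sketch of generating that bank (Python; the structure names "S4" and "S5" and the prompt wording are invented placeholders) makes the 2 x 3 x 5 = 30 count explicit:

    from itertools import product

    problems = ["P1", "P2", "P3"]
    structures = ["sequential", "conditional", "iteration", "S4", "S5"]
    directions = ["for", "against"]

    # One question per (problem, structure, direction) combination.
    bank = []
    for problem, structure, direction in product(problems, structures, directions):
        bank.append({
            "problem": problem,
            "structure": structure,
            "direction": direction,
            "prompt": (f"Which of the following are good reasons {direction.upper()} "
                       f"using a {structure} structure on problem {problem}?"),
        })

    assert len(bank) == 2 * 3 * 5   # the 30 questions mentioned above

Each entry would then be filled out with its candidate reasons as the MCQ options.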

The general notion is that if a question turns out to go too far over the audience's head, we could use these "lower" questions to structure the discussion that is needed about the reasons for each answer. (If everyone gets it right, you speed on without explanation. If half get it right, you go for (audience) discussion, because the reasons are there among the audience. But if all get it wrong, support is needed; and these further questions could keep the interaction going instead of crashing out into didactic monologue.)
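A minimal sketch of that decision rule follows (Python; the exact thresholds are illustrative assumptions, since the text only distinguishes "everyone right", "about half right", and "all wrong"):

    def next_action(n_correct: int, n_responses: int) -> str:
        """Map the answer distribution to the presenter's next move."""
        if n_responses == 0:
            return "re-pose the question"
        fraction = n_correct / n_responses
        if fraction > 0.8:
            return "speed on without explanation"
        if fraction > 0.4:
            return "audience (peer) discussion: the reasons are in the room"
        return "decompose: pose the associated part questions"

    print(next_action(12, 60))   # mostly wrong -> decompose into part questions

The value of having the part questions already prepared is precisely that the third branch can stay interactive rather than falling back to monologue.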
