Last changed 24 Feb 2005 ............... Length about 4,000 words (29,000 bytes).
(Document started on 15 Feb 2005.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/ilig/td.html. You may copy it.


Transforming lectures to improve learning

By Steve Draper,   Department of Psychology,   University of Glasgow.

Contents (click to jump to a section)

  Introduction
  Replacing exposition
  The three basic reasons for any learning improvement
  The Laurillard model
  The management layer
  Other functions of lectures
  Conclusion
  References

Introduction

Some of the most successful uses of EVS (Electronic Voting Systems) have been associated with a major transformation of how "lectures" are used within an HE (Higher Education) course. Here we adopt the approach of asking how, in general, we might make teaching in HE more effective, while keeping an open mind about whether and how ICT (Information and Communication Technology) could play a role in this. The aim then is to improve learning outcomes (in quantity and quality) while investing only about the same, or even fewer, teaching resources. More specifically, can we do this by transforming how lectures are used?

Replacing exposition

The explicit function of lectures is exposition: communicating new concepts and facts to learners. In fact lectures usually perform some additional functions, as their defenders are quick to point out and as we shall discuss below, but nevertheless in general most of the time is spent on exposition and conversely most exposition (in courses based on lectures) is performed by lectures. Clearly this could be done in other ways, such as requiring learners to read a textbook. On the face of it, this must be not only possible, but better. Remember, the best a speaker, whether face to face or on video, can possibly do in the light of individual differences between learners is to speak too fast for half the audience and too slowly for the other half. Reading is self-paced, and is therefore the right speed for the whole audience. Furthermore reading is in an important sense more interactive than listening: the reader can pause when they like, re-read whatever and whenever they like; pause to think and take notes at their own pace, before going on to try to understand what is said next -- which is likely to assume the audience has already understood what went before. So using another medium for the function of exposition should be better. Can this be made to work in actual undergraduate courses?

Yes. Here are several methods of replacing exposition and using the face to face large group "lecture" periods for something else: MacManaway's use of printed lecture scripts which students read in advance, freeing class time for discussion; the Open University's teaching at a distance through specially written texts; Just-in-Time Teaching (JITT), in which students answer web quizzes shortly before class and the class session then addresses the problems those answers reveal; and the Interactive Engagement (IE) methods such as Mazur's Peer Instruction, in which class time is spent on brain teaser questions, voting (often with EVS) and peer discussion.

It seems clear that lectures are not needed for exposition: the Open University (OU) has made this work for decades on a very big scale. Another recurring theme is the use of questions designed not for accurate scores (summative assessment), but to allow students to self-diagnose their understanding and, even more, to get them thinking. A further theme is to channel that thinking into discussion (whether with peers or teachers). This requires "interactivity" from staff: that is, being ready to lead discussion not according to a fixed plan, but at short notice in response to students' previous responses.

Should we believe the reports of success with these methods, and should we expect them to generalise to many subjects and contexts? Again the answer is yes, and I shall arrive at it by considering various types of theoretical analysis in turn.

The three basic reasons for any learning improvement

Many claims of novel learning success can be understood in terms of three very simple factors.

  1. The time spent by the learner actually learning: often called "time on task" by Americans. The effect of MacManaway's approach was roughly to double the amount of time each learner spent (he studied how long they took to read his lecture scripts): first they read the scripts, then they attended the classes anyway. In fact they spent a little more than twice as long in total. Similarly JITT takes the same teacher time, but twice the student time.

  2. Processing the material in different ways. It is probably not only total time that matters, but (re)processing the concepts in more than one way, e.g. not only listening and understanding, but then re-expressing them in an essay. That is why so many courses require students not just to listen or read, but to write essays, solve written problems, etc. However these methods are usually strongly constrained by the amount of staff time available for marking. Here MacManaway got students to discuss the issues with each other, as do the IE and JITT schemes. Discussion requires producing reasons and parrying the conflicting opinions and reasons produced by others. Thinking about reasons, and about what evidence supports what conclusions, is a different kind of mental processing from simply selecting or calculating the right answer or conclusion.

  3. Metacognition in the basic sense of monitoring one's degree of knowledge and recognising when you don't know or understand something. We are prone to feeling we understand something when we don't, and it isn't always easy to tell. The best established results on "metacognition" (Hunt, 1982; Resnick, 1989) show that monitoring one's own understanding effectively and substantially improves learning. Discussion with peers tests one's understanding and often leads to changing one's mind. The quizzes in the OU, JITT and the IE methods also perform this function, because eventually the teacher announces the right answer, and each student then knows whether they had got it right.
    Brain teaser questions also do this, partly because they frequently draw wrong answers and so force the learner to reassess their grasp of a concept. For good learners, even without the correct solution being announced, the degree of uncertainty such questions create is enough by itself to show them that their grasp isn't as good as it should be.

The Laurillard model

The Laurillard (1993) model asserts that for satisfactory teaching and learning, 12 distinct activities must be covered somehow. Exposition is the first; and in considering its wider place, we are concerned with the first 4 activities: not only exposition by the teacher, but re-expression by the learner, and sufficient iteration between the two to achieve convergence of the learner's understanding with the teacher's conception.

Re-expression by learners (Laurillard activity 2) is achieved in peer discussion in the MacManaway and Interactive Engagement schemes, and by the quizzes in the OU and JITT schemes. Feedback on correctness (Laurillard activity 3) is provided by peer responses in the IE schemes and by the quiz in the JITT and IE schemes. Remediation more specifically targeted at student problems by the teacher (a fuller instantiation of Laurillard activity 3) is provided in the JITT scheme (because class time is given to questions sent in in advance), and often in the IE schemes in response to the voting results.

Thus in terms of the Laurillard model, instead of covering only activity 1 as a strictly expository lecture does, these schemes offer substantial provision of activities 2, 3 and 4, in quantities and at a frequency approaching those allocated to activity 1, while using only large group occasions and no extra staff time.

The management layer

I argue elsewhere (Draper, 1997) that the Laurillard model needs to be augmented by a layer parallel to the one of strictly learning activities: one that describes how the decisions are made about which activities are performed. At least in HE, learning is not automatic but, on the contrary, highly intentional: it is managed by a whole series of decisions and agreements about what will be done. Students are continually deciding how much work to do and which work to do, and learning outcomes depend on this more than on anything else. In many cases lectures are important in this role: a major reason for students attending lectures is often to find out what the curriculum really is, what they are required to do, and what they estimate they really need to do. One reason that simply telling students to read the textbook and come back for the exam often doesn't work well is that, while it covers the function of exposition, it neglects this learning management aspect. Lectures are very widely used to cover it, with many class announcements being made in lectures, and the majority of student questions often being about administrative issues such as deadlines.

The schemes discussed here (apart from the OU) do not neglect this aspect, so again we can expect them to succeed on these grounds. They do not abolish classes, so management and administrative functions can be covered there as before. In fact the quizzes, and to some extent the peer discussion, offer better information than standard lectures, a textbook, or a lecture script about how a student is doing, both in relation to the teacher's expectations and in relation to the rest of the class. They also do this not just absolutely (do you understand X, which you need to know before the exam?) but in terms of the timeline (you should have understood this by today).

In addition to this, these schemes also give much superior feedback to the teacher about how the whole course is going for this particular class of students. This equally is part of the management layer. Standard lectures are never very good for this: while a new, nervous, or uncaring lecturer may pick up nothing about a class's understanding, even a highly skilled one has difficulty, since at best the only information is a few facial expressions and how the single self-selected student who volunteers answers each question from the lecturer. In contrast, most of the above methods get feedback from every student, and formative feedback for the teacher is crucial to good teaching and learning. What I have found in interviewing adopters of EVS is that while many introduced it in order to increase student engagement, the heaviest users now most value the way it keeps them in much better touch with each particular class than they ever managed without it.

This formative feedback to teachers is important for debugging an exposition they have authored, but it is also important for adapting the course to each class, dwelling on the points that this particular class finds difficult.
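To make concrete what getting "feedback from every student" can look like, here is a minimal sketch in Python of tallying a single voting question into the kind of at-a-glance class summary described above. It is only an illustration: the question options, votes, and decision thresholds are invented, and no particular EVS product works exactly this way.

    # Minimal illustrative sketch: tally one voting question so the teacher sees
    # how the whole class is doing, not just one volunteer.  All data invented.
    from collections import Counter

    options = ["A", "B", "C", "D"]            # answer options shown on screen
    correct = "C"                             # the option the teacher counts as right
    votes = ["A", "C", "C", "B", "C", "D",    # one vote per student handset
             "C", "A", "C", "B", "C", "C"]

    tally = Counter(votes)
    total = len(votes)

    for opt in options:
        n = tally.get(opt, 0)
        print(f"{opt}: {'#' * n:<12} {n:2d} ({100 * n / total:3.0f}%)")

    share_correct = tally.get(correct, 0) / total
    # Invented rule of thumb for what the teacher might do next:
    if share_correct > 0.8:
        print("Most of the class has it: move on.")
    elif share_correct > 0.4:
        print("Split vote: worth peer discussion and a re-vote.")
    else:
        print("Widespread difficulty: re-teach this point now.")

The point is only that per-student responses aggregate into immediate, class-wide formative feedback of a kind that a show of hands or one volunteered answer cannot give.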

Other functions of lectures

Arguments attacking the use of lectures have been made before (Laurillard, 1993). Those seeking to defend them generally stress the functions, other than simple exposition, that lectures may perform. One of these is learning management, as discussed in the previous section. Some others are:

Conclusion

We began by considering some schemes for replacing the main function of lectures -- exposition -- and then used various pieces of theory to discuss whether the proposed schemes would be likely to succeed in replacing all the functions of a lecture. Overall, while providing exposition in another medium by itself might be worse than lectures because it would neglect their other functions, the proposed schemes should be better because they address all the identified functions, and address some important ones better than standard lectures do.

Thus we can replace some or all exposition in lectures. Furthermore, we can re-purpose these large group meetings to cover other learning activities significantly better than usual. We can feel some confidence in this from a careful analysis of the functions covered by traditional lectures, and of those thought important in general, showing how each is covered in the proposed new teaching schemes. This in turn leads to two further issues to address.

Firstly: which functions can in fact be effectively covered in large group teaching, with the economies of scale that this allows, and which others must be covered in other ways? Besides exposition, and the way the schemes above address Laurillard's activities 1 to 4, other functions that can be addressed in large groups in lecture theatres include:

Secondly, some aspects of a course can use large group teaching (see above), but all the rest must be done in smaller groups. How small, and how should they be organised? One of the most interesting things to notice is that many of the schemes above use peer discussion, coordinated by the teacher but otherwise not supervised or facilitated by staff. For this the effective size is no more than 5 learners, and 2 or 4 may often be best. Both our experience and published research on group dynamics and conversation structures support this. Instead of clinging to group sizes dictated either by current resources or by what staff are used to (which often leads to "tutorial" group sizes of 6, 10, or 20), we should consider what is effective. When the learning benefit lies in the student generating an utterance, then 2 is the best size, since at any given moment half the students are then generating utterances. Where spontaneous and flowing group interaction is required, 5 is the maximum number. For creating and coordinating a community, the group can be as large as you like, provided an appropriate method is used, e.g. using EVS to show everyone the degree of agreement and diversity on a question, or having the lecturer summarise written responses submitted earlier.
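As a back-of-the-envelope check on the group size argument: if we assume (an assumption for illustration, not a research finding) that roughly one student per group is generating an utterance at any moment, then the share of the class doing so is simply one over the group size. The short Python sketch below just tabulates that arithmetic.

    # Illustrative arithmetic only: assume one speaker per group at any moment,
    # so the share of students generating utterances is 1/n for group size n.
    for n in (2, 4, 5, 10, 20):
        print(f"group size {n:2d}: about {1 / n:.0%} of students speaking at once")

So pairs keep about half the class actively producing utterances at any moment, while "tutorial" groups of 10 or 20 drop that to around 10% or 5%.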

However, forming groups simply by dividing the number of students by the number of staff is a foolish administrative response, not a pedagogic one. What is the point of groups of 10 or 20? Not much. If the model is a series of short one to one interactions (which may be relevant for pastoral and counselling functions), then we should consider how best to organise those: putting a group of students in the same room is obviously a poor way to do it, and ICT makes such co-presence less and less necessary. If the model is more personalised topics, e.g. all the students with trouble over subtopic X going to one group, then we should not assign permanent groups, but organise ad hoc ones based on that subtopic. In general, what the schemes above suggest for the future is to consider a course as involving groups of all sizes, not necessarily permanent, not necessarily supervised, and organised in a variety of ways, including possibly pyramids and unsupervised groups. This is, after all, only an extension of the eternal expectation that learners will do some work alone: the ultimate small unsupervised group.

In the end, we should consider:

References

Draper,S.W. (1997) "Adding (negotiated) learning management to models of teaching and learning" http://www.psy.gla.ac.uk/~steve/TLP.management.html (visited 24 Feb 2005)

Dufresne,R.J., Gerace,W.J., Leonard,W.J., Mestre,J.P. & Wenk,L. (1996) "Classtalk: A classroom communication system for active learning" Journal of Computing in Higher Education vol.7 pp.3-47 http://umperg.physics.umass.edu/projects/ASKIT/classtalkPaper

Hake,R.R. (1991) "My conversion to the Arons-advocated method of science education" Teaching Education vol.3 no.2 pp.109-111

Hake,R.R. (1998) "Interactive-engagement versus traditional methods: A six-thousand student survey of mechanics data for introductory physics courses" American Journal of Physics vol.66 pp.64-74

Hunt,D. (1982) "Effects of human self-assessment responding on learning" Journal of Applied Psychology vol.67 pp.75-82

Laurillard,D. (1993) Rethinking university teaching (London: Routledge)

MacManaway,M.A. (1968) "Using lecture scripts" Universities Quarterly vol.22 (June) pp.327-336

MacManaway,M.A. (1970) "Teaching methods in HE -- innovation and research" Universities Quarterly vol.24 no.3 pp.321-329

Mazur,E. (1997) Peer Instruction: A User's Manual (Upper Saddle River, NJ: Prentice-Hall)

Meltzer,D.E. & Manivannan,K. (1996) "Promoting interactivity in physics lecture classes" The Physics Teacher vol.34 no.2 pp.72-76 (especially p.74)

Novak,G.M., Gavrin,A.D., Christian,W. & Patterson,E.T. (1999) Just-in-Time Teaching: Blending active learning and web technology (Upper Saddle River, NJ: Prentice-Hall)

Novak,G.M., Gavrin,A.D., Christian,W. & Patterson,E.T. (1999) Just in Time Teaching http://www.jitt.org/ (visited 20 Feb 2005)

Resnick,L.B. (1989) "Introduction" ch.1 pp.1-24 in L.B. Resnick (ed.) Knowing, learning and instruction: Essays in honor of Robert Glaser (Hillsdale, NJ: Lawrence Erlbaum Associates)
