20 Jan 1997 ............... Length about 2100 words (14000 bytes).
This is a WWW version of a document. You may copy it.

Types of tutorial provision / material

A Technical Memo
by
Stephen W. Draper
Department of Psychology
University of Glasgow
Glasgow G12 8QQ U.K.
email: steve@psy.gla.ac.uk
WWW URL: http://www.psy.gla.ac.uk/~steve

Preface

Notes for the MANTCHI project on classifications for things that go on under "tutorial provision" in a course.

(Tutorial) Activity types

We want to define "tutorial" very broadly: excluding primary exposition (such as lectures) and probably lab classes, but including most other things that involve feedback or other input from the teaching staff to students.

Theoretical activity types

In principle, we could regard the space of things to do educationally in terms of the Laurillard model of T&L activities. In that case, "tutorial" covers activities 2, 3, and 4, i.e. not primary exposition, but everything else at the conceptual level. But actually, we seem to need more layers in the L model, and in particular a place to put dialogue about organising learning, as well as the subject material itself.

Relationships to the L-activities

The Laurillard model has a set of activities. Here are some:
1. Primary exposition (T --> L) (e.g. a lecture)
2. Learner re-expression (e.g. an essay)
3. Teacher correction / feedback on this
5. Student does lab exercise.

We can start to classify tutorial events by their relationship to these L-activities: either "tutorials" are being used actually to perform these activities, or as backup to ensure that those activities are satisfactorily completed.

1a. Extra lectures (really not tutorial but L-activity 1): material missing from the lecture slots.
1b. Re-explanations of material that the primary exposition failed to convey to that student: supplementing lectures.
1c. Answers to conceptual questions.
1d. Further reading: further issues and topics.

2a. Exercises, done in a tutorial slot instead of in students' own time.
2b. Scaffolding for key exercises: providing various kinds of instant support to get them launched.
2c. Teachback practice: activity 2 re-expressions, usually exercises.

3. Debugging student re-expression, i.e. feedback from T to L; handing back marked exercises (and explaining the comments).

Learning management

However, not mentioned above nor in Laurillard's model, but very important, is the interactive management aspect of all of the above: students don't just get feedback in the sense of information about content, but also information on how well they have learned the material and how well they now understand it (after the extra feedback). And so do teachers: they learn how well material has got across, which material has not, and which activities have failed. Any technology that failed to support this management information flow would endanger the course it was used on.

This is a missing layer that I think should be added to the Laurillard model, and it needs to be considered here because tutorials may often be one of the primary occasions for such activities. The essence of it is the management of the learning: what activities will be performed, and the logistics of those activities; but also how students can judge when they have learned something, which actions are worth doing, and why something is worth learning at all. Some might think that teachers control this and students do what they are told, but at least in HE this is clearly an interactive, negotiated, dialogic business, just as the conceptual activities are. Hence (as noted above) the feedback that teachers get about how the course is going and which activities are failing and succeeding is as much part of this layer as administrative announcements to students about assignment deadlines.

Terry's "process centered dialogue": dialogue (whether T&L, or with peers i.e. L&L) about the experience of learning refers to the major part of learning management, and particularly emphasises the dialogic aspect of it. However in my view the real issue is how learning gets managed, and this can be assisted or conveyed effectively enough that little peer discussion is needed, and learners do not need to spend much time worrying about it. This would be true, for instance, if they are already able to judge what competence is.

Classification dimensions of tutorial items

Types of content

A. Subject content
A1. Low-level factual, e.g. what error 99 means. [FAQ; maximum utility for answerweb?]
A2. (High-level) conceptual issues in the subject matter being taught.
A3. Demonstration by example of seeking out an answer to the student's problem, i.e. meta-method, not facts.
A4. Meta-level issues about the status of the knowledge, e.g. how certain it is, what the alternative theories are. [Perry]

B. Learning management
B1. Low-level, e.g. when an exercise must be handed in. [Roughly speaking, this stuff will be local to a course and not re-usable on other courses, although it will show what kinds of things must be communicated to students.]
B2. High-level management [should be re-usable by other courses covering the same subject]:
Why do this course at all.
How it feels to do this topic, or to tackle this exercise or activity.
How to judge and mark work.
How well other students are doing (to calibrate one's own self-estimates).

Who provides the content: T or L (in peer feedback, teachback, etc.)

Medium: Electronic, face to face, ...

Pace: Synchronous, near-synchronous (e.g. one-day turnround), archived.

Type of tutor: Course organiser, other member of the main teaching staff, temporary teaching assistants, postgraduate "demonstrators", or whatever.
That is a classification by status, but it might be more useful to classify tutors by a) their amount of knowledge of the subject area; b) their knowledge of this specific course (some assistants have never done a course in the area they are "tutoring"); and c) their skill (training? experience?) at tutoring itself.
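
As an illustration only (no such scheme exists in the project), these dimensions could be recorded as a simple data structure; every name below is invented for the sketch, and the numeric scales are arbitrary:

  # Hypothetical sketch: recording the classification dimensions above.
  # All names are invented for illustration; the 0-3 scales are arbitrary.
  from dataclasses import dataclass
  from enum import Enum

  class ContentType(Enum):
      A1_FACTUAL = "low-level factual"           # e.g. what error 99 means
      A2_CONCEPTUAL = "conceptual issues"
      A3_META_METHOD = "how to seek out answers"
      A4_META_STATUS = "status of the knowledge"
      B1_LOW_MGMT = "local course logistics"
      B2_HIGH_MGMT = "high-level learning management"

  class Pace(Enum):
      SYNCHRONOUS = "synchronous"
      NEAR_SYNCHRONOUS = "near-synchronous"      # e.g. one-day turnround
      ARCHIVED = "archived"

  @dataclass
  class TutorialItem:
      content: ContentType
      provider: str                 # "T" or "L" (peer feedback, teachback)
      medium: str                   # electronic, face to face, ...
      pace: Pace
      tutor_subject_knowledge: int  # a) knowledge of the subject area
      tutor_course_knowledge: int   # b) knowledge of this specific course
      tutor_tutoring_skill: int     # c) skill at tutoring itself

  # Example: a next-day email answer to a factual question from course staff.
  item = TutorialItem(ContentType.A1_FACTUAL, provider="T", medium="email",
                      pace=Pace.NEAR_SYNCHRONOUS, tutor_subject_knowledge=3,
                      tutor_course_knowledge=3, tutor_tutoring_skill=2)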

Kinds of occasion or activity

We might use the above to classify what a given tutorial type of activity is really doing, but these activities may appear in a number of guises. Any meeting between a teacher and one or more learners, particularly scheduled meetings, should be examined. Each such event has a name in a particular institution, and will more often than not turn out to be carrying exchanges of several kinds of content. Here is an ad hoc list of the occasions that should be included for study.

Non-LT activity types

Tutorials
Seminars
Office hours meetings
Email and phone dialogues
Questions in/after lectures
Lab supervisions?

Specific LT educ activity types/resources

Frozen T&L dialogues
Frozen peer dialogues
Live audio/video dialogues
Live email Q&A

More

For my class: some email backup?....
Cancel some PG tutorials; replace by email/answerweb.
Telephone office hours.
Ditto with CU-SeeMe video.
NetSem: i.e. organised seminars, with student-led papers.
Show a lecture class one of our videos; follow with a live video Q&A session.
Try a remote lecture: OHPs on the web; students at machines or in a lecture theatre; audio link.

Summary

The main distinction is between conceptual content and learning management content. In the context of tutorial provision (in the broad sense), there may be three main divisions within the former:
a) Correcting bugs in the primary exposition.
b) Going beyond it (and the curriculum), for eager students or those with individual interests.
c) Feedback (on student re-expressions).

Possible learning technology techniques

1. For factual content, this really means correcting bugs in the primary exposition. Answer Garden seems to be mainly about this: collecting and organising such corrections, pending in principle a re-write of the primary material (textbook, minimal manual for software, etc.). Since such debugging of material is a permanently iterative process, having a whole technology to support it is probably worthwhile.
2. Extra content etc.: really, just more primary exposition, though perhaps in the form of a reading list indexed by topic.
3. Almost all kinds of feedback can be automated, at least if students are not allowed to write free-form essays in natural language. The only kind that is probably beyond systematic automation is the last type: explaining why the student's answer is wrong in the rare but important cases where they continue to believe it is correct. That is because this requires what human teachers often can't supply: an understanding not just of the truth, but of the student's alternative (mis)conception and why it remains plausible to them in this case.
4. Stored past student work, plus marks and critique by teachers. This is not hard to do, but has many benefits. In most cases most of the benefit comes from making a single case available (not the complete set): I would pick a piece that just makes an A grade, but is not outstanding, and show also the mark, together with a short critique of why it did not get 100%. This shows students an example of a standard within their reach, and indicates what more they could aim for. At one stroke it covers most of their need for "learning management" information for judging what is needed.
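
A minimal sketch of that selection rule, with invented names (keep marked pieces together with their critiques, and publish the weakest piece that still makes an A):

  # Hypothetical sketch of the stored-past-work idea; all names invented.
  from dataclasses import dataclass

  @dataclass
  class MarkedWork:
      text: str
      mark: int       # percentage mark awarded
      grade: str      # "A", "B", ...
      critique: str   # teacher's short note on why it did not get 100%

  def pick_exemplar(archive: list[MarkedWork]) -> MarkedWork | None:
      # The piece that just makes an A grade: lowest mark among the As.
      a_pieces = [w for w in archive if w.grade == "A"]
      return min(a_pieces, key=lambda w: w.mark, default=None)

Taking the minimum over the A pieces is exactly the point made above: an exemplar within the students' reach, not an unattainable best.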

Feedback

[Check in earlier TMs on this; re-read L. book on feedback.]
L's first distinction is between intrinsic and extrinsic feedback.

Here is a sequence of types of feedback that a learner might get. (This is only one of the reasons for interaction, not independent roles for T&L.) Only the last cannot usually be done by computer. Thus most feedback probably can be given without real interaction between T&L; but there is this key residual that does usually require a human, expert teacher.

Short

1. Internal judgement of success by the learner.
2. Information on success or failure.
3a. Information on the learner's output and its effects.
3b. Information on what the correct output should be.
3c. And hence on the difference.
4. Diagnosis of which part of student output was wrong.
5. Explanation of why the correct answer is correct.
6. Explanation of why the student's answer is incorrect.

Long

1. Internal judgement of success by the learner. E.g. just writing a paper teaches me a lot with no external feedback, and similarly for students writing an essay: our own internal criteria for understanding, adequate argument, and explanation do the work.
2. Information on success or failure. E.g. just getting a mark back, or a bare yes/no; throwing a ball over a wall and hearing whether it hit the hoop or target, but not in which direction the failure was.
3. Information on the learner's output, the correct output, and the difference. E.g. the answer in the back of the book; seeing where the ball went compared to the target. Note that there are two bits of information here which could possibly be given or withheld independently: what the right answer was, and what the student's output and its consequences were. E.g. if throwing a ball over a wall, or shooting, you don't automatically see your result; nor, if you are given a prediction task, do you necessarily see the output of the simulation.
4. Diagnosis of which part of student output was wrong. Programming language compilers do most of their work just by reporting the line number, and in fact human programming tutors do this a lot too. Critiquing techniques can do this automatically.
5. Explanation of why the correct answer is correct. Can easily be canned in a CAL test.
6. Explanation of why the student's answer is incorrect. Rarely needed, but crucial when it is.

*Feedback may come from the learner ([a] internal), the task itself ([b] intrinsic feedback), or the teacher ([c] extrinsic feedback).

*Feedback may be at the level of concepts, or of personal experience.

The above levels apply best to doing, say, maths examples, not to marking essays. One objection was that positive feedback should be given too. But mere positive reinforcement is neither enough nor needed: where there is a definite answer, success itself is enough; where there is not, you shouldn't just encourage but say which bits are good and why. With essays you have to make a comparison with a model and convey the differences: e.g. publish the marking scheme and give a marks breakdown; circulate the best student answer and a critique of it, so that others can compare their answers with it.

It is quite easy to get a computer to provide types 2-5. It is important to train learners to do type 1 as quickly as possible. Only type 6 seems to require having a human teacher in the overall ensemble.
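
As a sketch of what that automation amounts to (an invented example, not a claim about any particular CAL system): for a simple numeric exercise, types 2-5 reduce to a few comparisons against the stored answer plus a canned explanation, while type 6 would need a model of the learner's misconception that no such comparison supplies.

  # Hypothetical sketch: automated feedback of types 2-5 for a numeric
  # exercise. Type 6 (why the student's answer seemed right to them) is
  # the residual that usually needs a human teacher.
  def feedback(student: float, correct: float, explanation: str) -> list[str]:
      msgs = []
      ok = abs(student - correct) < 1e-9
      # Type 2: bare information on success or failure.
      msgs.append("correct" if ok else "incorrect")
      if not ok:
          # Type 3: the learner's output, the correct output, the difference.
          msgs.append(f"you answered {student}; the answer is {correct}; "
                      f"difference {student - correct:+g}")
          # Type 4: a crude diagnosis of which part was wrong (here, the sign).
          if student == -correct:
              msgs.append("diagnosis: sign error")
      # Type 5: a canned explanation of why the correct answer is correct.
      msgs.append(explanation)
      return msgs

  # Example: print("\n".join(feedback(-4.0, 4.0,
  #     "sqrt(16) is 4: the principal root is the non-negative one")))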

Feedback is a very important part of CAL and of T&L design. It is not clear that it is given quite enough room in the L-model, but it is the rationale behind one of the three generating principles: doing iteration, not just the two T&L parts at each level.

References


Laurillard, D. (1993) Rethinking university teaching: A framework for the effective use of educational technology (Routledge: London).