
Evaluating new technologies for teaching and learning in distance education: current and future developments

Dr. Jennifer Lesley Hall
Lecturer in Psychology

Department of Psychology
The Open University
Walton Hall
Milton Keynes
MK7 6AA
U.K.

Tel: (+44) (0)1908 654499
email: J.L.Hall@open.ac.uk
Fax: (+44) (0)1908 654488

Paper presented at ICDE World Conference, 1997, Pennsylvania State University.
Published in The New Learning Environments: A Global Perspective, proceedings of the 18th ICDE World Conference, June 2-6, 1997, The Pennsylvania State University, University Park, Pennsylvania.

Abstract

The very nature of distance learning has long imposed certain restrictions on the methods that may be employed in evaluation studies. In conventional campus-based higher education, recent trends in evaluation have seen a movement towards greater use of qualitative ethnographic methods. The defining features of distance education mean that these sorts of methods cannot simply be transferred to the distance education arena. However, the recent growth in the use of new technology in distance education (as well as in conventional education) provides new possibilities for conducting evaluations, enabling teaching and learning (using new technology) to be better understood. The new opportunities can be examined in terms of the type of data one may wish to collect: (i) attitudes, opinions, perceptions and experiences; (ii) behavioural logging data; (iii) learning content. The pros and cons of using these methods for evaluation purposes need to be considered in evaluation planning, and it is suggested that as new technology is further developed for teaching and learning in distance education, evaluators should exercise a cautious awareness of the opportunities it offers.

Introduction

In distance higher education students are now using computers and the wide variety of programs available for learning. Resource-based learning (databases, hypertext systems and the world wide web, for example) can provide students with a variety of means of accessing information resources; tutorial programs guide students through new material in a more structured fashion; simulations enable students to put their decision-making abilities to the test; and computer-mediated communication allows for discursive activity among students, and between students and tutors.

It is always necessary to evaluate developments and innovations in educational technology through the conventions of formative testing procedures. This has often meant employing a panel of testers; evaluations are, however, often of greater use if a sample of users representative of the intended population can participate. At the Open University (UK), formative testing of new courseware often takes place with students whilst they are attending a residential component of their course (see for example, McCracken & Laurillard, 1993). In cases where greater restrictions have been imposed, however, the worth of the results of such testing has been questioned. Draper (1996) highlights the time-consuming nature of formative testing, and the difficulties of employing a representative sample of users. Distributed networks and the world wide web now offer the opportunity for formative testing to take place at a distance, and to be carried out by people who may be more representative of potential users (or at least a wider diversity of people) than has been the case with conventional formative testing. They also enable rapid feedback from users. Technology offers a new potential for developmental trials, as a supplement or alternative to traditional formative evaluation processes.

When it comes to evaluations of learning, where students are using the resources provided in the intended context, conducting an effective and useful evaluation is no mean feat in the case of distance education. Saxon (1988) highlights this through acknowledging the people who helped her in "...overcoming the difficulties of `at-a-distance' research" (p. 2). Evaluations in conventional education are arguably reaping most benefit from ethnographic approaches, as favoured within illuminative (Parlett & Hamilton, 1972), integrative (Draper, Brown, Henderson & McAteer, 1996), and grounded (Hall, 1997a) evaluation approaches. At present, such closeness of evaluators to the context of learning is generally precluded in distance education by the very nature of the learning: students undertake their studies asynchronously, and often many miles from one another.

This is not to say, of course, that evaluations of distance learning that require proximity are an impossibility. There are many valuable studies in the distance education arena that have employed face-to-face interviewing; however, the samples are often small, the costs in time and money high, and the time elapsed since use of the resource in question may vary greatly between participants. Interviews conducted via telephone have often proved more feasible. Another popular conventional method, one that does not require proximity to the participants, is the postal survey. The postal survey is a method that has been developed to a high degree of sophistication at the Open University (UK). This institution houses its own Survey Office and Student Research Centre, and conducts annual evaluations of courses (see for example, Courses Survey Project Team, 1996). Of particular interest here, an Access to New Technologies Survey (ANTS) is also administered through a Programme on Learner Use of Media (PLUM), surveying attitudes to, and access to, new technology among Open University students (see for example, Taylor & Jelfs, 1995).

The postal survey is a useful method for collecting views from representative samples of students who are working at a distance. It is, however, both a costly and a time-consuming method of evaluation. Possibilities are now arising for carrying out evaluations of the use of new technology for learning at a distance by taking advantage of the technology at hand. This paper explores some of the developments that are currently taking place. It is interesting to see that, as the use of technology becomes commonplace for students working in conventional education as well as for distance learners, it is not simply the case that distance learning comes to resemble more closely the practices of conventional education. Although, for example, the idea of the lecture or tutorial has been adopted more fully into distance education through the use of new technology than was considered possible before, we are also seeing conventional education, where full-time and part-time students are present in a single institution of study, adopting many of the characteristics of distance learning. New technology has made teaching and learning more flexible within conventional education: support for asynchronous study, for example, is now more widely provided through the use of educational technology. Reflecting this merging of practices, some of the examples of new evaluation procedures described within this paper come from the arena of full-time conventional education. Many of the examples, however, come from innovations in evaluation approaches conducted by the Open University (UK).

In the past, purposes of evaluation have often been confused with method (see Hall, 1997b, for further details regarding such confusions in the evaluation literature). In order to explore what new technology can offer to evaluators of distance learning, a classification will be used based upon the type of data one wishes to collect:

- attitudes, opinions, perceptions and experiences

- behavioural logging data

- learning content

Although there is a relationship between the type of data one wishes to collect and the methods employed for obtaining those data, it is also the case that certain data require specific methods, whereas for others there is a wide choice and range of tools. Examples of the ways in which it is possible to obtain certain types of data using new technology within the context of distance education are explored within this paper using the three categories outlined above.

Attitudes, Opinions, Perceptions and Experiences

As explained above, survey data have been collected for many years at the Open University (UK). However, it is now possible to design forms for fully electronic handling of questionnaire data for those students with access to the appropriate technology. Various studies are presently taking place at the Open University (UK), investigating the opportunities for handling large-scale collection of questionnaire data on-line (Kirkup, personal communication). There are several possible options available for the total electronic handling of survey forms. Students can access forms downloaded from a server (as on the web), they can reply to forms sent through e-mail or put up on a conferencing system, or, if the computer is stand-alone, they can access a form sent by disc. In the latter case, the forms could, for example, be sent to the students along with computer-based course material.
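
As an illustrative sketch only, the following (in Python) shows how returned electronic forms might be collated into a single table ready for analysis. The simple "question: answer" file layout and the file names are invented for the purpose, and belong to none of the systems described here.

    # Hypothetical sketch: collating electronically returned survey forms.
    # Each completed form is assumed to be a plain-text file of
    # "question_id: answer" lines (an invented format).
    import csv
    import glob

    def parse_form(path):
        answers = {}
        with open(path) as f:
            for line in f:
                if ":" in line:
                    key, value = line.split(":", 1)
                    answers[key.strip()] = value.strip()
        return answers

    def collate(form_dir, out_csv):
        rows = [parse_form(p) for p in glob.glob(form_dir + "/*.txt")]
        fields = sorted({k for r in rows for k in r})
        with open(out_csv, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            writer.writerows(rows)   # one row per returned form

    collate("returned_forms", "survey_results.csv")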

An example of a successful large-scale disc by mail (DBM) survey was carried out by Van Hattum and De Leeuw (1996). The survey concerned bullying and was administered to 228 teachers and 6428 pupils across schools in the Netherlands. A number of advantages of the computer-based version of the survey were reported. One such advantage is that computer-based surveys are more successful than paper-based versions in presenting questionnaires that have a complex structure, since non-response is a problem with complex paper-based questionnaires. In addition, in comparisons made between the computer-based survey results from the 98 schools and the results of the same survey administered in a paper-based format to eight schools, the computer-based survey results were found to be of higher quality. Specifically, they contained fewer socially desirable answers, and greater openness and self-disclosure (Van Hattum & De Leeuw, 1996).

Other advantages of electronically administered, and in particular on-line, surveys include costs being much reduced in comparison to paper-based surveys, and the survey material being in a format which can be directly converted for data analysis; in other words, it does not need to be keyed in. The data collected may be suitable for statistical analysis, or for qualitative analysis of textual responses. In addition, if using e-mail or a conferencing system for this purpose, it is possible to automate the sending of reminders and of thanks for submitted forms. All this generally means that surveying attitudes and experiences can be done much faster and much more cost-effectively.
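
A minimal sketch of such automation, again with invented file names, might compare the list of invited respondents against the returns received, and so decide who is to be thanked and who reminded; the actual sending of messages is left as a stub.

    # Hypothetical sketch: deciding who to thank and who to remind.
    # "invited.txt" and "returned.txt" hold one address per line.
    def read_addresses(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    invited = read_addresses("invited.txt")
    returned = read_addresses("returned.txt")

    for address in sorted(returned):
        print("Thank:", address)       # acknowledge a submitted form
    for address in sorted(invited - returned):
        print("Remind:", address)      # chase an outstanding form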

There are, however, also cautions to be considered regarding a total electronic handling of survey material. Where the use of technology is to any extent optional, there is always the possibility of obtaining a biased sample of replies to an on-line survey: those who particularly like using new technology may be more willing to reply. Indeed it is important not to exclude those students who do not use the software because of feelings of intimidation, of being monitored, or of damaging the hardware (Jones, Scanlon, Tosunoglu, Ross, Butcher, Murphy & Greenburg, 1996). When all students are required to use new technology, and all do so, this is less of a problem. Another concern, however, is anonymity: although it is possible to set up returns that are anonymous, it may be more difficult to assure respondents of their anonymity. In addition, companies that permit their employees to use workplace computers for study may express concerns regarding the security and confidentiality of the material sent through networks. It may also in some cases be necessary to guarantee security and confidentiality to employees, such that they can be assured that their employer will not see their personal responses if they are using work-based computers.

It is not just closed questionnaires that can be administered electronically: just as in conventional surveys, open-ended questions can be used. This was a procedure used by Wegerif (1995) as part of a multi-method approach to examining the experiences of, and collaboration between, educationalists and trainers who were involved in a small on-line course about teaching and learning on-line. His on-line open-ended questionnaire consisted of questions such as, "what did you like most about the course?" (p. 5), and "which parts of the course did you feel benefited you most from working with others" (p. 14).

It is interesting to note that this qualitative approach has the characteristic of being highly structured. At present we are not really in a position to adapt the characteristics of conventional synchronous, semi-structured or unstructured, dynamic interviews, whether face-to-face or carried out via telephone, to a computer-based format; the methods proposed here are asynchronous. However, the interactive nature of semi-structured or unstructured interviews, or focus groups, can be approached through computer-mediated conferencing (CMC). Jenison (1996), a senior counsellor of the Open University (UK) working in the London region, reports on a computer conference using CoSy(TM) conferencing software, "...in which students discussed general access to CoSy 4" (Jenison, 1996, p. 2). Jenison conducted an analysis of the content of the conference, for which there had been no formal or externally posted questioning: the issues that arose were entirely at the discretion of the students. Jenison found that the phrase, "CoSy takes the distance out of distance learning", used by one conference member, most appropriately summarised the support that students experienced through the use of the conferencing system. This idea of analysing the messaging itself will be returned to later, when we look at using electronically represented learning content for evaluation purposes. Jenison (1996) acknowledges that the conference she examined may not have contained views that were representative of the total student group; she wrote the paper solely for the purpose of demonstrating the degree of support that a number of students had gained through using conferencing. Indeed, this sort of data collection method is perhaps most usefully employed when supplementary personal `subjective' experiences are sought. As with on-line surveys, this method can suffer much bias in the sample of students who volunteer their views. In addition, when considering the use of such reflective conferences for evaluation purposes, issues such as presentation of self and anonymity need to be considered. Such considerations may well render more formal versions of these very open evaluation methods too problematic, and suitable only for use with specifically selected samples.

Finally in this section, we can consider an evaluation tool that conventionally has partly aimed to elicit data similar to those sought through the use of questionnaires and interviews. The journal diary has also been designed with the purpose of eliciting perceptions, attitudes and experiences. Diaries used in their conventional paper-based format have proved very useful for evaluators of distance learning, because they have provided a rich description of distance learners' practices and experiences. The conversion of such diaries into electronic versions seems quite feasible (although possibly not suitable for new users of educational technology); however, this author has, at present, not come across any such examples. It would be possible for a journal to be placed on a disc and sent back to the evaluator, or parts of the journal could be submitted over a network. Similarly, it would also be possible to send particular questions out to students through e-mail week by week, so that perceptions and experiences can be examined over particular time periods or particular learning activities, for instance. The other major conventional use for which journal diaries have been designed is the monitoring of behavioural work patterns: what students do, and how long they spend on each activity. We turn to new possibilities for collecting such data next.
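
Were such a week-by-week scheme implemented, the scheduling itself would be trivial; the sketch below, with an invented question list and start date, merely illustrates how the diary prompt due in a given week might be selected.

    # Hypothetical sketch: choosing the diary prompt for the current week.
    from datetime import date

    prompts = [
        "How many hours did you study this week, and on which activities?",
        "Which course materials did you use, and how helpful were they?",
        "Describe any difficulties you met and how you dealt with them.",
    ]
    course_start = date(1997, 2, 3)   # invented start date

    def prompt_for(today):
        """Return the diary question due in the week containing `today`."""
        week = (today - course_start).days // 7
        if 0 <= week < len(prompts):
            return prompts[week]
        return None

    print(prompt_for(date(1997, 2, 12)))   # falls in week 1: second prompt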

Behavioural Logging Data

A second possibility for exploiting electronic capabilities for evaluation purposes is the logging of students' interactions with programs. Evaluators of distance learning have conventionally had to rely on self-report regarding the use that students make of their study time. Studying with the aid of a computer enables data to be collected both about which programs students have accessed and for how long, and about where they went inside a program and what they did with it. It is indeed possible for every key press to be recorded. Monitoring programs can store this information as a separate file. It may be possible, for example, if students are using a network, for all of this material eventually to be sent back to the evaluator for subsequent analysis.
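
A minimal sketch of such a monitoring program, with invented event names and log format, might simply append timestamped events to a separate file for later return to the evaluator:

    # Hypothetical sketch: recording a student's interactions as
    # timestamped events in a separate log file.
    import time

    LOG_PATH = "session.log"

    def log_event(event, detail=""):
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        with open(LOG_PATH, "a") as f:
            f.write("%s\t%s\t%s\n" % (stamp, event, detail))

    log_event("program_start", "tutorial_unit_3")
    log_event("page_view", "glossary")
    log_event("program_exit")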

The ability to log interactions with a computer has been around for some time; for example, the analysis of the navigation strategies that users adopt for traversing hypertext systems, examined through logging, has been an active research area (see for example, Misanchuk & Schwier, 1992; Hutchings, Hall, & Colbourn, 1993). Goodfellow (1996) provides an interesting example of logging in which the data collected were used specifically for examining the effect of students' growing IT literacy on the kinds of programs they found beneficial to use. Rather than focusing upon students' uses of a single package, the study involved a logging survey of the types of computer programs students accessed and how long they used them for. This study took place in a conventional higher education setting, but the survey method could be used for distance learners if students were downloading programs from a network or running them off the web. In the study, the use that language students made of computer-assisted language learning (CALL) programs was monitored. Goodfellow found a disappointing level of use of CALL programs. In addition, a questionnaire administered through e-mail revealed that students had found e-mail and Word Perfect to be the two most useful programs that they had used in the term. On the basis of this study, Goodfellow was able to suggest that CAL is "vulnerable to the growth of IT sophistication in learners" (p. 30), since students could not, through using the CALL programs, make use of the IT skills that they had been developing. In comparison, other IT programs, such as e-mail, can be considered more social or, like Word Perfect, more obviously productive for the students using them.

Caution should be exercised, however, since logging has often proved particularly problematic to analyse and interpret. Firstly, an overwhelming amount of data can be collected this way; secondly, since these data are purely behavioural, there is difficulty in assigning judgements of worth to findings. Several studies at the Open University (UK), where students have been using computer conferencing in their courses, have involved the collection of behavioural statistics (e.g. Mason, 1990, 1992a, 1995; Pearson & Mason, 1992). Typically the data collected include the number of accesses made, for how long, in which conferences, the number and length of messages submitted, and the times at which the messages are submitted. These data have been collected in order to examine the activity of both students and tutors of the Open University. In conjunction with other data, such as those received through survey methods, it is possible to use the analysis of usage patterns to inform decision-making concerning future provision of conferencing support: for example, in which sorts of conferences it is, or is not, appropriate to provide a moderator. Data regarding gender differences and the effects of guidelines and assignments in CMC use (Pearson & Mason, 1992) have also been collected at the Open University (UK), and such data could also be used to assist decision-making regarding support and provision.
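
As an illustration of how such behavioural statistics might be derived, the sketch below summarises the number and length of messages per student per conference from a tab-separated log; the log format and file name are invented, not those of any conferencing system cited here.

    # Hypothetical sketch: summarising conferencing activity from a
    # log of "student<TAB>conference<TAB>word_count" lines.
    from collections import defaultdict

    def summarise(log_path):
        count = defaultdict(int)
        words = defaultdict(int)
        with open(log_path) as f:
            for line in f:
                student, conference, n_words = line.rstrip("\n").split("\t")
                key = (student, conference)
                count[key] += 1
                words[key] += int(n_words)
        for key in sorted(count):
            student, conference = key
            print(student, conference, count[key], "messages,",
                  words[key], "words")

    summarise("conference.log")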

Learning Content

As computer-based assessments gain popularity in distance learning, it becomes possible to use the techniques employed in them, and in some cases the data themselves, for evaluation purposes. Tests can be sent out through a network and returned by the student. It is therefore possible to carry out the pre- and post-tests of conventional evaluation procedures; however, as for all evaluations, the methodological problems of pre- and post-testing (see Anderson & Draper, 1991) should be considered. Dr. Mike Fitzpatrick and Dr. George Weidmann (personal communication) of the Open University (UK) plan to use pre- and post-testing as part of a multi-method approach towards considering whether an externally available software package is suitable for inclusion in an Open University (UK) course. Students in one particular region will complete pre- and post-tests which can be downloaded from the FirstClass(TM) conferencing system. Changes in performance will be monitored, and in addition an on-line questionnaire will be provided so that students can report their experiences and perceptions of using the software. The decision was made to use these methods because a process of rapid delivery to and from the students was sought by the course team, particularly as students from other European countries are to be included in the evaluation. This is just one way of examining students' learning using new technology in the data collection process. It should be noted, however, that this method provides little insight into the process of learning.
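
The monitoring of changes in performance can be illustrated with a minimal sketch: given matched pre- and post-test scores (the figures here are invented), individual and mean gains are computed. A serious analysis would of course go further, with paired significance tests, for instance.

    # Hypothetical sketch: individual and mean gains between matched
    # pre- and post-test scores (invented data).
    pre  = {"s01": 42, "s02": 55, "s03": 61}
    post = {"s01": 58, "s02": 57, "s03": 75}

    gains = [post[s] - pre[s] for s in pre]
    for s in sorted(pre):
        print(s, "gain:", post[s] - pre[s])
    print("mean gain:", sum(gains) / len(gains))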

How then is it possible to examine process when we are carrying out evaluations with distance learners? If we wish to evaluate learning, we have to be clear about what we consider learning to be. Recent models of learning have suggested that it is the interactive nature of coming to understand that is crucial in the process of learning. Laurillard (1993) suggests that the nature of the activities that arise in conversation, such as those that take place between students and tutor, forms the framework for learning. Computer conferencing may be considered, to a certain extent, a suitable environment for supporting students in their learning. Conferences may therefore contain content suitable for use in evaluations of distance learning. Such evaluations may indicate the opportunities and support (or lack of these) that conferencing, and different models of using conferencing, provide. Mason (1992b) comments that the content of computer conferencing contains the most obvious, but the least utilised, material for evaluation.

The number of studies that use the content of conferences is, however, growing. Such material can be used in evaluation and research for different purposes, and indeed different sorts of analysis may be carried out on the same material. The questions asked determine to some extent the type of analysis employed and the amount of material used. Yates (1996) used a large corpus of educational conferencing messaging to examine its oral and written aspects. The analysis of this corpus was compared to an analysis of spoken and written corpora with regard, for example, to linguistic composition. This numerical statistical approach to examining large data sets can be used for the evaluation of theories of language use (Yates, 1996), and could also be employed to assist in the design of specific tools for supporting conferencing.
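
By way of illustration, one crude index of linguistic composition is the type-token ratio; the sketch below computes it for two corpora. The file names are invented, and this single measure merely stands in for the much richer analyses reported by Yates (1996).

    # Hypothetical sketch: comparing two corpora by type-token ratio
    # (distinct word forms divided by total word count).
    import re

    def type_token_ratio(path):
        with open(path) as f:
            tokens = re.findall(r"[a-z']+", f.read().lower())
        return len(set(tokens)) / float(len(tokens))

    for corpus in ("conference_messages.txt", "written_corpus.txt"):
        print(corpus, round(type_token_ratio(corpus), 3))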

Other evaluations have been more concerned with examining the messaging for evidence of learning. Mason (1992a) reports on an evaluation of an on-line assignment, designed as part of a course run by the Open University (UK). Among other methods of assessment, three aspects of the content of the messages were assessed and marked; these were (and I quote in full from Mason, 1992a, pp. 3-4):

- the extent to which students use the issues raised in the course material to develop their arguments

- the way in which the students' messages build on and critique the ideas and inputs of other contributors to carry the discussion forward

- the succinctness with which the students' arguments are conveyed

These aspects of the content were evaluated alongside other material, including data logs, comments from a sample of tutors as to the success of the conference, examination of assignments and tutors' comments, and students' perceptions and experiences obtained through a conference set up to elicit these. In a previous study of the course, Mason (1990) undertook a detailed examination of one conference that took place in a single region, and she identified six prominent themes of the conference, of which the three main themes were key topic areas of the course. Further interesting analyses indicated that one theme appeared, on the basis of the number of messages, to have had little input; but in fact the contributions had been far more substantial than for other topics. The analysis also involved categorising the various means by which students took "considerable responsibility for the quality of their interactions" (Mason, 1990, p. 12).
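
Mason's observation, that a theme with few messages may nevertheless carry the most substantial contributions, can be illustrated with a minimal sketch contrasting message counts with total volume; the figures are invented.

    # Hypothetical sketch: message count versus total volume per theme.
    themes = {
        "theme_A": [120, 95, 110, 87],   # words per message
        "theme_B": [850, 920],           # fewer, but far longer, messages
    }
    for theme, lengths in sorted(themes.items()):
        print(theme, len(lengths), "messages,", sum(lengths), "words in total")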

Moving away from conferencing, innovations in new technology have the potential to provide more opportunities for the examination of students' understanding of their course material when they are working at a distance. In the Knowledge Media Institute (KMI) at the Open University (UK), a system is being designed that enables both tutors and students to develop animations of representations, view each other's animations, and have access to animations that others have made. These representations will be used to assist in the processes of describing and explaining (Domingue & Mulholland, 1997a, 1997b). The system is presently being developed to support students and tutors on a masters-level course which involves learning the programming language Prolog. The development is an Internet Software Visualisation Laboratory (ISVL), designed to run on any Java(TM)-enabled browser, such as Netscape Navigator(TM). The environment is designed to augment textual or discursive explanation (typically, for the distance learner, this is restricted to course materials, e-mail contact and telephone tutorials), such that animations (or 'movies') of the representations that are made can be used in productive synchronous work between tutor and students, where the representations and animated indicators are mapped from one screen to another, at a distance. Additionally, resources in the form of animations and 'movies' can be kept on a server to be downloaded for use in asynchronous study (Domingue & Mulholland, 1997a). In the light of findings from previous studies in which students have interpreted programming procedures in terms of their own misconceived models, ISVL is being developed to "...both allow students to be able to interact with the SV and also be usable in collaborative ways with the tutor to counter circumstances when personal exploration can run into difficulties." (Domingue & Mulholland, 1997a).

As all interactions with ISVL are logged, it will be possible to evaluate students' learning with the system in depth and at a distance. It will be possible to view a series of animations developed and used interactively between students and tutor, and to examine the dynamics that lead to the development of understanding. Using multiple representations of conceptual understanding, we can assess the worth of tools such as ISVL in terms of how well they are able to assist in the process of teaching and learning at a distance.
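
As a minimal sketch of what working with such logs might involve, the following steps through a logged session in time order; the log format and file name are invented, and are not ISVL's own.

    # Hypothetical sketch: replaying a logged tutor-student session in
    # time order, as one might when examining how an explanation developed.
    def replay(log_path):
        events = []
        with open(log_path) as f:
            for line in f:
                stamp, actor, action = line.rstrip("\n").split("\t")
                events.append((stamp, actor, action))
        for stamp, actor, action in sorted(events):
            print(stamp, actor, action)

    replay("isvl_session.log")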

At present ISVL is being integrated with an existing course. Testing materials in this way provides a new route for developmental evaluation, for course modifications or the addition of new components.

Conclusion

This paper has described how innovative methods of evaluation are evolving alongside new uses of technology in distance education. Some of the various possibilities for collecting three broadly different sorts of data for use in evaluation studies have been explored. As technology evolves further and becomes accessible to a wider diversity of students, so opportunities for evaluations will expand. However, concerns that have always arisen in evaluation studies, such as the representativeness of samples, the ethics of method, anonymity, confidentiality, non-response, the social desirability of responses, and cost considerations, will not disappear. As we have seen, some of these concerns may be alleviated in certain cases through the use of particular media; but new obstacles may also arise. As evaluators we need to be sensitive to the advantages and disadvantages of evaluation methods, and to whom these pros and cons will apply. Our awareness will also need to grow as more opportunities for conducting new sorts of evaluation studies using new methods become available, enabling a closer examination of teaching and learning at a distance.

Acknowledgements

I would like to thank the colleagues who have provided examples for inclusion in this paper, particularly Paul Mulholland and Mike Fitzpatrick, and Stuart Watt for his comments regarding the changing nature of distance education.

References


Anderson, A. & Draper, S. W. (1991). An Introduction to Measuring and Understanding the Learning Process. Computers and Education, 17(1), 1-11.

Courses Survey Project Team (1996). Courses Survey 1995: Full Results. Students Research Centre, Institute of Educational Technology, The Open University, UK.

Domingue, J. & Mulholland, P. (1997a). Teaching Programming at a Distance: The Internet Software Visualization Laboratory. Paper submitted to: Journal of Interactive Media in Education.

Domingue, J., & Mulholland, P. (1997b). The Internet Software Visualization Laboratory. Paper submitted to: Psychology of Programming Interest Group.

Draper, S. (1996). Observing, Measuring, or Evaluating Courseware. Available at URL: http://www.psy.gla.ac.uk/~steve

Draper, S. W., Brown, M. I., Henderson, F. P., & McAteer, E. (1996). Integrative Evaluation: an emerging role for classroom studies of CAL. Computers and Education, 26(1-3), 17-32.

Goodfellow, R. (1996). Learners' I.T. Strategies - will they be the death of CAL? In: L. Alpay & H. Solanki, Proceedings of the CALRG Annual Conference. Institute of Educational Technology, The Open University, UK.

Hall, J. L. (1997a). Towards a Thematic Evaluation Process. In: S. Draper (Ed.), Evaluation and integration of CAL in HE: collected experiences. (in preparation).

Hall, J. L. (1997b). Using Computers to Support Learning in Higher Education: studies of students' uses and perceptions of CBL. PhD Thesis, University of Southampton, UK.

Hutchings, G. A., Hall, W., & Colbourn, C. J. (1993). Patterns of Students' Interactions with a Hypermedia System. Interacting with Computers, 5(3), 295-313.

Jenison, K. (1996). "CoSy takes the distance out of distance learning": the computer-mediated student campus. London Papers, The Open University, UK.

Jones, A., Scanlon, E., Tosunoglu, C., Ross, S., Butcher, P., Murphy, P., & Greenburg, J. (1996). Evaluating CAL at the Open University: 15 years on. Computers and Education, 26(1-3), 5-15.

Laurillard, D. (1993). Rethinking University Teaching. London: Routledge.

McCracken, J., & Laurillard, D. (1993). T102 CAL Numeracy: Formative Evaluation Report. CITE Report No. 189, Institute of Educational Technology, The Open University, UK.

Mason, R. (1990). Computer Conferencing: An Example of Good Practice from DT200 in 1990. CITE Report No. 129. Institute of Educational Technology, The Open University, UK.

Mason, R. (1992a). Evaluation of the DT200 On-line Assignment. CITE Report No. 170, Institute of Educational Technology, The Open University, UK.

Mason, R. (1992b). Methodologies for Evaluating Applications of Computer Conferencing. PLUM Paper No. 31. The Institute of Educational Technology, The Open University, UK.

Mason, R. (1995). Computer Conferencing on A423: Philosophical Problems of Equality. CITE Report No. 210, Institute of Educational Technology, The Open University, UK.

Misanchuk, E. R., & Schwier, R. A. (1992). Representing Interactive Multimedia and Hypermedia Audit Trails. Journal of Educational Hypermedia and Multimedia, 1, 355-372.

Parlett, M. R., & Hamilton, D. (1972). Evaluation as Illumination: a new approach to the study of innovatory programmes. University of Edinburgh, Centre for Research in Educational Sciences, Occasional Paper no 9.

Pearson, J., & Mason, R. (1992). An Evaluation of the Use of CoSy on B885 in 1992. CITE Report No. 171, Institute of Educational Technology, The Open University, UK.

Saxon, C. (1988). To Compute or Not to Compute. CITE Report No. 84, Institute of Educational Technology, The Open University, UK.

Taylor, J., & Jelfs, A. (1995). Access to New Technologies Survey (ANTS) 1995. PLUM Paper No. 62. Institute of Educational Technology, The Open University, UK.

Van Hattum, M., & De Leeuw, E. (1996). A Disk By Mail Survey of Teachers and Pupils in Dutch Primary Schools: Logistics and Data Quality. Methods & Statistics Series, MLS Publication No. 57, Department of Education, University of Amsterdam.

Wegerif, R. (1995). Collaborative Learning on TLO'94: creating an online community. CITE Report No. 212, Institute of Educational Technology, The Open University, UK.

Yates, S. (1996). Oral and Written Linguistic Aspects of Computer Conferencing: A Corpus Based Study. In: S. C. Herring (Ed.), Computer-Mediated Communication: Linguistic, Social and Cross-Cultural Perspectives. Amsterdam/Philadelphia: John Benjamins Publishing Company.