Last changed 20 Aug 2001 ............... Length about 2,000 words (14,000 bytes).
This is a WWW document maintained by Steve Draper. You may copy it.

Web site logical path: [] [~steve] [grumps] [this page]

Ways to writeup the Computing Lab study


Stephen W. Draper


Here are some sketches of how we could view the computing lab study as a success, in a number of distinct ways.

1. As a project-gelling exercise

Current plan

Who leads?: Julie
Target?: some management journal

It's about how the first study acted as a project-gelling exercise. We could send it to a management journal. Julie knows something about this area, and should be able to write the abstract lessons and punchlines for such a paper.

I did a very crude first draft and handed it over to Julie, since when nothing hopeful has been heard. My idea was to spend 2-4 hours each on it in turn, passing it swiftly round the whole team.

Early Ideas on content

The most fundamental reason for this study, which was a limited first project for Grumps, was to get us started, to get the project team working as a whole, and to exercise as many of the component aspects as possible to get information on what we need to address next. It could be said that it was already successful at this before the first day of data collection, as all of the team members had collaborated to agree and advance the plans that far, and to create a plan involving two separate pieces of software being introduced into one client setting. However, much additional value for the project was gained as the study progressed, in terms of practical lessons in constructing the software and in dealing with clients and participants.

In one way or another it actively involved everyone: the 3 PIs, our associate in Oz, the 3 RAs, and an RA on a related project. In fact, it could probably be said that it has reinvigorated the Revelation/TLC project, and by cheering up Julie Cargill saved a valuable RA for the university. It has also established real links with RT in Perth, since he will use our software there in classes, with the connection embodied by sending students over this June-Sept.

It has addressed one of the two application areas named in the grant proposal (education); but equally it hasn't exercised the other (bioinformatics).

It has exercised: collection of real user data; in big volumes; by instrumenting their pre-existing systems; and collecting it over a network. It has exercised some mobile computers for both collection and delivery. It has (only) slightly exercised retrievals from that data, and re-presentation of retrieved results to the researchers (us) and the end-users. It hasn't exercised the notion of dynamically configurable filters at the collection end; and particularly not, changing the filters in the light of the iteratively refined questions researchers ask of the data. It has, though, already confronted us with the gap between what is easy/natural to collect vs. what is meaningful and hence more directly useful (e.g. window events vs. task changes by the user).

We might say, therefore, that this initial study is only half done so far: it needs a phase 2 focussed on data mining/retrieval/analysis, and ideally that should draw on RT's expertise in such analysis, which also hasn't yet been exercised in Grumps. And furthermore, that its biggest omissions from the viewpoint of exercising all aspects in the proposal were: a) the bioinformatics application area; b) a proper closed loop from refining retrievals and questions of the data, to modifying the data collection to address them better.

2. The LSS as an educational application

Current plan

Who leads?: Steve
Target?: some educ. journal: "Computers and education"?

Clearly I should lead on this; hope to offer something coherent soon.

I have quite a few ideas, and quite a lot of lumps of prose; but am still not clear about the overall message. A summary message should probably be in the newsletter articles. On the other hand, Margaret is only now finishing writing up the evaluation data (never mind interpretation).

Early Ideas on content

The LSS springs from the question: how (and whether) could DIM enhance educational benefit in the context of a computing science lab class? A simple question that, in retrospect, it seems surprising hasn't been asked routinely and repeatedly by the department; but in any case it is being answered in the affirmative by this software. This puts the dept. ahead in teaching innovation, and removes the institutional embarrassment of a department, whose income and main political and social purpose is teaching, failing to apply its own discipline to doing that job as well as it could.

As a pilot study it does suggest that the LSS can deliver benefits to its users. It is beginning to help students: it saves arm-raising and spending a lot of attention on getting a help request seen. Some students feel more able to ask for help. Some tutors already feel it gives them a better grip on distributing their attention over their group. This would, as has been repeatedly remarked, be a much greater benefit at the start of the academic year. All of this is modulated by: getting the tutor and students introduced to the software (it doesn't sell itself instantly with no introduction); the great differences in interaction feel and style between different groups; and the fact that some of the gains will be proportional to the length of use, and so to the amount of past data available to re-present.

Suggested gains, which should be measured in any future study of the LSS, include:

The LSS, as a design project, looks at one of the main teaching activities (level 1: a big class; computing labs/tutorial groups: a major investment in teaching resources). It asks how computing in general, and information management in particular, could help. The students are distributed on a network; the tutors are mobile. (A future redesign here would look at having tutors remote, saving them walking around physically. But this might not be able to retain the central human value of interaction that feels personal to the students.)

The tutors (and perhaps the students) need JIT ("just in time") information delivery. The role of computers in supporting telephone retail services is to support the provider (here the tutor) with everything the organisation (here the dept.) knows about interacting with that client (here the student): the history of past interaction, and everything digitally recorded about that client. But an enhancement over the retail model (and so an original feature) would be to provide the student with everything too, including information on interactions and on the tutor.

What is interesting, and draws on the CSCW research area, is that even with very small groups or 1:1 interactions, ICT can nevertheless sometimes and in some ways augment the interaction, even when distance is not a factor (as it is in phone retail services, and in some classic CSCW cases). The notion of recording user actions, and re-presenting this record as a resource for further interaction, is natural here, and is close to the heart of Grumps' research too.

Thus the LSS is essentially the result of looking at computing lab teaching, and asking how it could be enhanced by:

There are a number of types of data here:

  1. Signalling for help is one feature.
  2. Presenting relatively static records of the student (and tutor) is another, and could be further augmented if more such data is integrated into the software (set worksheets and answers, records tutors are required to keep/create).
  3. Histories of tutor-student interactions within a class are something we have done; but keeping and using such histories over a year will require another such study.
  4. "Milestones": records of student achievement at each exercise are in the near future (version 2).
  5. Records of student activity with software (e.g. using retrievals on UAR data and re-presenting these in LSS) is another natural addition.
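To make the list above concrete, here is a minimal sketch of how those data types might hang together in code. This is purely illustrative, not the real LSS implementation: every name (Student, HelpQueue, etc.) is invented for the example.

```python
# Hypothetical sketch only (not the actual LSS code): the data types
# listed above as one minimal structure -- a help queue (item 1) plus
# per-student records (items 2-4) that a tutor display could draw on.

from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    milestones: list[str] = field(default_factory=list)  # item 4: achievements
    history: list[str] = field(default_factory=list)     # item 3: past interactions

class HelpQueue:
    """Item 1, signalling for help: students join; tutors take the longest waiter."""
    def __init__(self) -> None:
        self.waiting: list[Student] = []

    def request_help(self, s: Student) -> None:
        if s not in self.waiting:
            self.waiting.append(s)

    def next_student(self) -> Student:
        s = self.waiting.pop(0)            # first come, first served
        s.history.append("helped in lab")  # the interaction becomes history
        return s

alice, bob = Student("Alice"), Student("Bob")
q = HelpQueue()
q.request_help(alice)
q.request_help(bob)
first = q.next_student()   # Alice is served first; her history now records it
```

The point of the sketch is item 3 in miniature: each interaction handled through the queue leaves a record behind that later interactions (and retrievals) can re-present.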

From this perspective, our first study was a rather small precursor of what is easily possible if we continue on this path. It also means we could view the LSS as (in part) the design of one delivery system in which end users receive and use retrievals of Grumps data: insinuating these into actual user work patterns.

4. Mobile workshop paper on the handhelds

Who leads?: Phil
Target?: Workshop paper

Current plan

One version of this might be a paper by Phil and Murray for the ? Florida workshop, focussing on the handhelds. It would be an HCI paper, based on the practical experience of using mobile/wireless technology; it would describe the map and orientation issues in the user interface, ...

6. As a DIM study

Current plan

Who leads?: Huw-3
Target?: CACM journal first; then perhaps the Computer Journal

This may be the major paper from the first study; the focus will be the study as DIM, making clear that the study covers a large part of the space. The original notion of centering on the contrast of UAR and LSS will probably not be used much by Huw. Key messages are listed below; but this paper would stress DIM first, with education and the specific design touched on only in passing.

Early Ideas on content

As a DIM (distributed information management) study, it had just about everything. Many of these features are brought out by contrasting the two pieces of software: UAR and LSS.

UAR:
  - bulk data (several million items)
  - little prior thought about how to get utility from it
  - instrumented the OS, and so put up with raw events in OS terms (instrumenting the OS gets data from all applications)
  - slow turnround from collection to retrieval (20 mins. top speed)
  - no payback to endusers (yet)
  - data collection and redistribution over the existing fixed network

LSS:
  - small data, some volatile and not retained?
  - only collected for pre-designed reasons
  - a new application designed (instrument the application)
  - fast turnround (a help request must be "instantly" visible, e.g. within 30 seconds; actually much faster)
  - designed for direct benefit to endusers (maybe to researchers later)
  - data collection and redistribution over wireless and fixed networks to fixed and mobile units
  - data redesigned during the study to benefit its use (e.g. changing buttons, and changing the properties displayed in the queue)

Both: can mine the data now collected; all data stored subject to a filter at source; educational benefit to the end user aimed for in the long term; privacy issues voiced by users, though with huge variations in attitude.

Major issues to emerge/discover:

  1. Giving users privacy controls, and visibility on the collection activity.
  2. The huge range of time scales for response times; and how this should be explicitly attended to.
  3. The contrast/ complementarity of UAR and LSS (see above).
  4. From an educational application viewpoint, this is what you get when you decide to design DIM/database technology specifically to assist education (as opposed to doing data mining later on what happens to have been collected).

Thus, as is implicit in the contrast between UAR and LSS, we have exercised a large range of the space of DIM activity. Furthermore, this has raised some meaty, and probably novel, issues to report on and to develop further. This is the heading under which I would then proceed to write up these "big" lessons.

8. As a GRUMPS study: using data

Current plan

We probably aren't able to write this yet. Perhaps this should be the aimed-for deliverable from the mining developments we are centering on Quintin (and a student or two): so we hope to write this later?

Early Ideas on content

What distinguishes Grumps from DIM and persistence in general is a closed-loop study: we begin with data collected on spec; start to ask questions; then (dynamically) modify the data collection in the light of what we discover we want to know from the data.

We haven't started on this yet.

In a sense, though, we could perhaps view the development of the LSS during the study as an example. Both collection and display (retrieval) changed under the impact of what users said they wanted, and what was required by this application. This is a very slender argument, perhaps specious. Yet it is true that Murray was modifying the software during the study, in its collection, retrieval, and display aspects; and this was based not just on the design being late, but also on observations of and interactions with the end users, and on our (i.e. the researchers') evolving understanding.
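The closed loop described above can be sketched in a few lines of code. This is an illustrative toy only, under stated assumptions: the names (Collector, Event, and so on) are invented for the example and are not the Grumps architecture; it shows just the shape of the loop, in which collection runs through a filter that can be swapped at run time in the light of what querying the data reveals.

```python
# Toy sketch of the Grumps-style closed loop (all names hypothetical):
# collect events through a filter, query the store, then reconfigure
# the filter in the light of what the query shows we actually want.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    source: str    # e.g. "os", "lss"
    kind: str      # e.g. "window", "keystroke", "help_request"
    payload: str

class Collector:
    """Stores events that pass the current filter; the filter can be
    swapped at run time -- the 'dynamically configurable' part."""
    def __init__(self, keep: Callable[[Event], bool]) -> None:
        self.keep = keep
        self.store: list[Event] = []

    def record(self, ev: Event) -> None:
        if self.keep(ev):
            self.store.append(ev)

    def reconfigure(self, keep: Callable[[Event], bool]) -> None:
        self.keep = keep   # researchers refine what is collected

# Phase 1: collect "on spec" -- keep everything.
c = Collector(keep=lambda ev: True)
for ev in [Event("os", "window", "focus change"),
           Event("os", "keystroke", "x"),
           Event("lss", "help_request", "stuck on ex. 3")]:
    c.record(ev)

# Phase 2: ask a question of the data -- how much is raw OS noise?
noise = sum(1 for ev in c.store if ev.kind == "keystroke")

# Phase 3: close the loop -- in the light of the answer, change
# collection to keep only the more meaningful, task-level events.
c.reconfigure(lambda ev: ev.kind != "keystroke")
c.record(Event("os", "keystroke", "y"))          # now filtered out
c.record(Event("lss", "help_request", "ex. 4"))  # still kept
```

Phase 3 is the step the first study did not exercise properly: modifying collection as a direct consequence of what the retrievals showed, rather than as a side-effect of ongoing development.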

5. A DIM errors note

Current plan

Who leads?: Huw-2
Title: "Distributed information mis-management".
Target?: A SIG magazine? A short note in a journal?

Early Ideas on content

We learned a few lessons, though perhaps ones only new to us, not to the literature:

3. Architecture design

Current plan

Who leads?: Huw-1
Target?: OOPSLA workshop paper

Early Ideas on content

See Huw's documents.

7. Newsletter article(s)

Current plan

Who leads?: Steve-2
Target?: Dept and/or university newsletter

I promised; I haven't delivered.

Early Ideas on content

The key issue here is how to pitch it.
