Last changed 10 May 1998. Length about 1,000 words (10,000 bytes).
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/grenoble.html.
You may copy it.
Report on my group's work on the exercise at the Grenoble MIRA workshop
by
Stephen W. Draper
GIST
University of Glasgow
Glasgow G12 8QQ U.K.
email: steve@psy.gla.ac.uk
WWW URL:
http://www.psy.gla.ac.uk/~steve
This is my write-up of the group I was in at the Grenoble MIRA workshop
(30 March to 1 April 1998), doing the exercise set by Annelise and Raya.
We had a long presentation in the morning; in the afternoon we divided into four
groups to work on the exercise. Our task was to apply a subset of the onion
models for (a) analysis and (b) evaluation to a case study of information retrieval
on the WWW. The materials included: a statement of our task, a video of a
user and investigator (Raya), a transcript and some screen dumps of this
recorded session, the set of OHP slides from the morning's presentation, and a
case study by Annelise and Raya applying the framework to the same retrieval
task but done by school pupils. Our group comprised Tore Bratvold,
Steve Draper, Mark Dunlop, Joemon Jose, and Eero Sormunen.
Our case was an expert using the WWW to perform a sample expert search for
MIRA (as a demo / test of the web), for information on "a plant of your choosing
from the Pacific North West".
The framework has 7 layers. Our exercise task was to focus on 4 of them.
In trying to do the task, we had several different specifications of each
layer:
- The labels on the summary diagrams of the framework
[E.g. layer 3 counting from the outside is labelled "Activity analysis; task
situation; in work domain terms" and "Does system support task repertoire of a
work situation?" respectively in the two diagrams.]
- The wording on our task sheet referring to 4 of the 7 layers. [e.g.
"Activity analysis, task situation"]
- The OHPs (reproduced in a handout we had) describing each layer [pp.12-16 in
the set of OHPs]
- The section of their case study document which included questions to ask
corresponding to each layer. [e.g. p.16 gives 6 questions about this layer]
In our group, the key breakthrough came when, after we had puzzled a lot about
how to answer the questions posed by the framework, Tore said we should ask
ourselves how the subject (the expert searcher) would himself judge his own
success or failure: it is that judgement which exposes the task that is
actually directing his actions. This has a number of aspects, but the most
important is to realise that this is not just about doing a web search to
satisfy an information need, but at least as much about giving a demo. (So it
had to make him look good, it could not use methods that would look confusing,
it had to succeed, and it didn't have to get any particular information: or
rather, he could choose what the goal (Camellias) would be, ....)
In some of the items below we give alternative answers, thus reporting on the
ambiguity we experienced in trying to use or understand the framework.
Nevertheless, this report is of course a cleaned up version of our actual
discussions. Ideally, it would consist of a short answer to each of the
framework items, along with evidence for that answer such as a reference to
part of the transcript.
- Activity analysis, task situation, in work domain terms.
We had more than one take on this.
- What we saw him do:
Raya explains the task
Raya quizzes him on his expertise (at searching, at teaching, ...)
Expert does search task on WWW
Expert leaves the record on videotape and says hallo to MIRA (p.10 of
transcript).
- Summary, following Tore's key insight.
To perform a sample expert search for MIRA
- Actually, expanding on that, he was trying to accomplish an "activity" (in
the Activity Theory sense) that satisfied multiple constraints:
The task of giving a demo
The search task specified on the worksheet he was given
His personal preference (p.9)
His search expertise (p.5)
Note that there is scant direct evidence for any of these ways of answering.
Methodologically, this may be poor practice: going beyond the data. But it is
typical of research on activities, particularly work activities, where it is
usual for people not to be explicit about the intentions that in fact shape
their actions.
- Activity analysis, task situation, in decision making terms
(cognitive decision task)
We had several interpretations of this:
- Our intuitive response
Choose which plant: Camellia
Choose which search engine: Alta Vista
Choose which documents (to open)
- Imitating the example on p.18 of the reproduced OHPs we were given.
Analyse the assignment
Analyse the information needs
Plan the search
Evaluate the search results
Choose information to use in "result"
- Our view on reflection
(He gets the assignment)
Classifies the query: (i) into his private categories? (ii) to satisfy
multiple constraints (see the constraints listed above)?
Executes a standard search plan of his.
- Activity analysis, task situation,
in terms of mental strategies that can be used.
- He knew in advance (p.5; and from the fact that he had been told the task the
previous day) a plant that satisfied the multiple constraints given above. So
he picked as his "task", i.e. his goal for the search, "Camellia", which would
do that: the "empirical strategy".
- He used a fixed procedure (a minimal sketch of it follows below):
AltaVista is his favourite engine (the "empirical" strategy).
He did "+keyword1" plus some other keywords (p.5): that was his procedure for
designing a query. He then selected from the returned hit list the first
document with keyword1 in its title.
The transcript shows he repeated this procedure several times, and didn't use
any other. Again, then, the "empirical strategy".
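To make that procedure concrete, here is a minimal sketch of it in Python, as
we reconstructed it from the transcript. Everything in it (the Hit record, the
search_engine parameter, the function names) is our own illustration, not the
API of AltaVista or any real engine.

    # Minimal sketch of the expert's fixed search procedure, as we
    # reconstructed it from the transcript. All names here (Hit,
    # search_engine, ...) are illustrative assumptions, not a real API.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Hit:
        title: str
        url: str

    def fixed_procedure(keyword1: str,
                        other_keywords: List[str],
                        search_engine: Callable[[str], List[Hit]]) -> Optional[Hit]:
        # "+keyword1" forces the main term to be present; the other
        # keywords merely influence the ranking (p.5 of the transcript).
        query = "+" + keyword1 + " " + " ".join(other_keywords)
        hits = search_engine(query)
        # Take the first returned document with keyword1 in its title;
        # he used no other selection rule.
        for hit in hits:
            if keyword1.lower() in hit.title.lower():
                return hit
        return None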
- Analysis of user characteristics
See pp.1-4 of transcript.
(Following the numbering of questions on the second onion diagram).
- Presentation.
He scrolled a lot because of the tiny window (a small penalty).
He made an error with AltaVista's "refine" command: this puzzled him, but at
no real penalty (p.7).
- Are all relevant strategies supported?
Yes.
- Does the system "support" relevant decision tasks?
Yes: in the sense of not obstructing an expert, rather than of comprehensive
explicit support.
- Does it support the task repertoire of the work situation?
The window is small. This may hamper the search task, although at no big
penalty; but it helped the task of making a demonstration, because the window
then fitted on to the video.
Demo task: the video never showed the searcher himself; so perhaps the setup
did not support the demo task well in this respect.
- Does it support CSCW?
Yes: Raya was right there, and so able to cooperate fully.
BUT: transcription is expensive; and our copy of the videotape was defective.
- Evaluation in the work context. [Does this mean "Did it really achieve the
work?"?]
Yes: we got the demo to analyse.
Is this for experts or for new users?
Degree of penalty / optimisation. E.g. the search engine could be redesigned
to have a mode where a search term must be in the title (not just in the body)
of the document (see the sketch after this list).
Thus the repeated use of the word "support" in the evaluation onion framework
conceals this recurring issue of the amount of support, the size of the
penalties, and how the latter differ with the expertise of the user.
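As an illustration of that redesign idea, here is a sketch (ours, and
hypothetical: no engine of the time offered exactly this, and title_restricted
is a name we made up) of the proposed mode in which given terms must appear in
the title, expressed as a post-filter over an ordinary hit list. It reuses the
Hit record and imports from the earlier sketch.

    # Hypothetical "term must be in the title" mode, sketched as a
    # post-filter over an ordinary hit list (Hit as defined above).
    def title_restricted(hits: List[Hit], title_terms: List[str]) -> List[Hit]:
        # Keep only hits whose title contains every required term.
        return [h for h in hits
                if all(t.lower() in h.title.lower() for t in title_terms)]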
- Abstract terms and descriptions were too ambiguous to use
- (But examples cannot be generalised without understanding the abstraction.)
- Means-ends analysis is an infinite regression: how to decide which levels of
it to use?
- Perhaps it is most useful to focus on questions such as "What will make a
difference to the user's actions?"
I felt it was a key insight to ask ourselves how the subject (the information
searcher) would judge their own success or failure. It made much better sense
of the subject's actions and choices than did what he happened to say on the
tape. I would probably try asking this directly of subjects: less of "why did
you do that?" and more of "would it be OK if you did this, or if that
happened?", to expose what they do and don't care about.
Looking over this, it seems a lot of analysis for a pretty small yield of bugs
or recommendations for how to improve the design: basically the only concrete
findings are the idea of giving search engines a special facility so that some
of the given keywords must be in the title (not just the body) of the document,
and the observation that AltaVista has a small problem with its "refine"
command.
Of course this may be because we are only just learning the technique and only
worked on the case for a couple of hours.