01 Mar 1997 ............... Length about 4900 words (31000 bytes).

Outline answers for HCI exam questions


What follows are outline answers (and the questions) from the last 2 years' HCI exams. I hope they will be of some use to students as indications of the level of knowledge required, and of the style of questions and answers in this exam. However, be warned that they could be misleading in several ways.

Thus what follows are outline answers, NOT perfect model answers.


Exam for the HCI module. IT course. May 1996


Human Computer Interaction

Answer 3 out of the 4 questions. You have 2 hours.

1a. List the types of interaction style, with an example of each. Can one be seen as more general than the others? [8 marks]
1b. Illustrate the main aspects of any interaction object using the Print dialogue box of a Macintosh application as an example. [6 marks]
1c. What are the 3 kinds of dependency or inheritance between interaction objects that are important in user interfaces? Illustrate these with reference to the Print dialogue box example. [6 marks]

Answer to Qu1

1a. [5 marks] There are 5 types of interaction style: command languages (e.g. csh Unix shell), function keys (e.g. OK and cancel buttons), direct manipulation (e.g. scroll bar sliders), form filling (e.g. Print dialogue box), and menus (e.g. Mac pull down menus). However all can be seen as specialisations of menus in the sense that user input is really only interpreted as a selection between fixed alternatives [3 marks].
1b. [6 marks] The display aspect consists of the visible display: the bounding box with text labels, the component boxes and buttons. The state aspect consists of variables describing the current selections e.g. number of pages, whether output is to be to printer or file, etc. The control aspect has to determine which subpart user input is directed towards (or give an error indication), and finally send a compound message to the underlying code when the "Do it" i.e. "OK" button is pressed.
1c. [6 marks] One type of dependency is between the 3 aspects e.g. when user input changes a selection button from Printer to File, then the display aspect must be made to change the display. A second type of dependency is the IS-A link e.g. the Print dialogue box is a standard type, much of whose appearance e.g. its border is inherited from the standard dialogue box style and would be changed if that standard was changed. The third type is the PART-OF dependency between the component buttons and the Print dialogue box e.g. when the position of the dialogue box is changed then the component buttons should move with it.
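These three aspects and three kinds of dependency can be made concrete in code. The following Python sketch is purely illustrative: the class and method names are invented for this example, not taken from any real toolkit, and a real Print dialogue box would of course have many more components.

# A purely illustrative sketch (invented names, not any real toolkit's API)
# of the three aspects of an interaction object and the three kinds of
# dependency, using the Print dialogue box as the running example.

class Dialog:
    """Standard dialogue style: the IS-A parent. Changing its border style
    here would change every dialogue box that inherits from it."""
    border = "single"

    def __init__(self, x, y):
        self.x, self.y = x, y      # position belongs to the state aspect
        self.parts = []            # PART-OF dependents (buttons, fields)

    def add_part(self, part):
        self.parts.append(part)

    def move(self, dx, dy):
        # PART-OF dependency: moving the dialogue moves its components too.
        self.x += dx
        self.y += dy
        for part in self.parts:
            part.move(dx, dy)

    def render(self):
        # Display aspect: draw the bounding box, then each component.
        print(f"[{self.border} box at ({self.x},{self.y})]")
        for part in self.parts:
            part.render()

class RadioButton:
    def __init__(self, label, x, y, selected=False):
        self.label, self.x, self.y = label, x, y
        self.selected = selected   # state aspect of the component

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

    def render(self):              # display aspect of the component
        mark = "(*)" if self.selected else "( )"
        print(f"  {mark} {self.label} at ({self.x},{self.y})")

class PrintDialog(Dialog):         # IS-A dependency: inherits the border etc.
    def __init__(self, x, y):
        super().__init__(x, y)
        self.to_printer = RadioButton("Printer", x + 1, y + 1, selected=True)
        self.to_file = RadioButton("File", x + 1, y + 2)
        for b in (self.to_printer, self.to_file):
            self.add_part(b)

    def handle_click(self, target):
        # Control aspect: dispatch user input to the right component, then
        # keep the display in step with the state (dependency between aspects).
        self.to_printer.selected = target is self.to_printer
        self.to_file.selected = target is self.to_file
        self.render()

dlg = PrintDialog(10, 10)
dlg.handle_click(dlg.to_file)      # state change forces a display update
dlg.move(5, 0)                     # the component buttons move with the box
dlg.render()

Here the IS-A link is ordinary class inheritance, the PART-OF link is the parts list walked by move(), and the dependency between aspects is the redraw triggered when handle_click() changes the state.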

2a. In Macwrite, rulers control the margins (among other things), and there are two separate markers controlling the left margin and the paragraph indent. Many users, it has been observed, find it easy to discover by trial and error (in perhaps 2 seconds) which is which, but never learn: they simply experiment again each time. How would you describe this case in terms of a user's learning curve and its associated costs? [6 marks]
2b. What kind of task analysis is the theory of action in comparison to other possible types of task analysis? [6 marks]
2c. What are the issues dealt with in the planning step of the theory of action? Give a non-trivial example plan from the Macintosh Finder. What is the difference between the planning and translation steps? Give an example from the Finder of two translations for one plan. What are the corresponding steps on the perception side? Give brief descriptions of what they do. Give an example of each of these drawn from the Macintosh Finder. [8 marks]

Answer to Qu2

2a. [6 marks] It has good guessability i.e. first time use (only 2 seconds); quite good Experienced User Performance (2 seconds is not bad, but it would be only a fraction of a second if they learned which was which); but infinite (i.e. bad) learnability since they never learn it.
2b. [6 marks] Unlike many types of task analysis, it covers mental actions as well as physical ones; it deals with perceptual actions as well as motor actions; it covers getting information as well as the main material goals, i.e. how things are learned and done the first time, not just skilled performance; and it may cover error recovery as well as skilled action.
2c. [8 marks]
[1 mark] What are the issues dealt with in the planning step of the theory of action? Assembling the set of actions needed to achieve a goal; plus the order they must be done in.
[1 mark] Give a non-trivial example plan from the Macintosh Finder. E.g. to print, first select a printer in the chooser, then use Page Setup, then use the Print command.
[1 mark] What is the difference between the planning and translation steps? Planning assembles the set of actions and their order; translation retrieves their "names" (menu item names, icon appearance, etc.).
[1 mark] Give an example from the Finder of two translations for one plan. The menu item Edit: Copy or the keystroke <cmd>C for the plan of copying an item.
[2 marks] What are the corresponding steps on the perception side? Give brief descriptions of what they do. Recognition (translating appearance to the machine object) and Interpretation (linking multiple pieces of the display to a single compound meaning).
[2 marks] Give an example of each of these drawn from the Macintosh Finder. Recognition e.g. recognising icons as representing documents or applications, recognising words like "File" as menu headers. Interpretation: understanding that a window and a disk icon are different representations of the same machine object.

3. There is one basic fact or lesson acting as a starting point for HCI. What is it? [2 marks]
What is the basic approach adopted in HCI as a response to this, and how do the various techniques in HCI relate to it? [16 marks]
How are users directly involved in this? [2 marks]

Answer to Qu3

3 [2 marks] HCI has as a central lesson that we cannot design user interfaces right first time, so we must design iteratively.
[4 marks] This question is an invitation to describe the prototyping cycle, which has 4 main stages, each relating to a block of lectures on the associated techniques.
[12 marks] In brief: observing the interaction relates to the methods for observing user interaction, e.g. thinkaloud protocols. Interpreting the symptoms relates to theories and task analysis methods such as the theory of action. Implementing the design, and the decisions on how to modify it given the interpreted symptoms, depend on User Interface Implementation Support (UIIS) software. Marks will be given for mentioning the stages of the cycle and their outputs, for mentioning the areas of techniques, for linking the two correctly, and for showing why each type of technique helps the cycle, e.g. the point of UIIS software is to make modifications faster to implement, so that more cycles can be performed and hence a better design achieved before resources are exhausted. The easy marks for mentioning key terms reward candidates who have the confidence that these are what the bald question is asking for, since the question itself does not contain clear key phrases.

[2 marks] How are users directly involved in this? Not only as subjects in the observation stage, but also in the fifth stage of developing work practices, which depends wholly on users and their work place, and not on designers.

4. Compare and contrast questionnaires and thinkaloud protocols. Why isn't one used for all purposes? What is each best for? For each of these, what would be your choice as the next-best substitute instrument, and why are they only next-best?

Answer to Qu4

4. [14 marks] Key dimensions of comparison are: retrospective vs. on the spot; cost to the investigator; cost to the subject (user); open-ended vs. comparative data gathering. 1 mark for mentioning each of these, but more marks for explaining them (preferably with an example). They are not all-purpose because of differences on these dimensions. 2 extra marks for valid but less important dimensions: e.g. whose judgement decides the data.
[2 marks] Questionnaires are best for measuring user attitudes; thinkalouds for finding bugs in user interfaces.
[4 marks] Semi-structured interviews are a close substitute for questionnaires (but more expensive to the investigator); incident diaries for thinkalouds (but hard to get subjects to remember to fill them in on the spot).


Exam for the HCI module, IT course. May 1995


Answer 3 out of the 4 questions. You have 2 hours.


1. At first glance, a feature checklist is a questionnaire similar to others. How is it similar to, and how is it different from, questionnaires about user attitudes? Begin by giving a miniature example of each, based on one or two items [5 marks].
Describe similarities and differences, covering typical purpose and kind of measure sought, relationship to human memory, possible alternative instruments for obtaining the information, costs [10 marks]; and any other issues [5 marks].
Finally, consider this exam question as a questionnaire: what is its relationship to other types of questionnaire? [5 marks].

Answer to Qu1

1a. [5 marks] Questionnaire: a question might be "How confident do you feel of producing a neat and business-like letter using a word processor?" with responses as (No confidence) 0-1-2-3-4-5 (Completely confident).
Feature checklist: a list of items that are the names of commands arranged by menu e.g. File: New, Open, Close ... Against each item is a set of columns, asking: Have you ever used this command? Is/would it be useful to you if you used it?

1b. [10 marks] Taken in pairs of feature checklist vs. questionnaire:
Purpose: usage of commands (behaviour) vs. feelings i.e. both use self-report but to give measures of behaviour vs. of attitude or affect.
Alternative instruments: computer logging vs. interviews.
Recognition vs. recall: both technically use recognition rather than recall; however, checklists stimulate recognition much more directly by using the names and layout of the commands they ask about, whereas questionnaires describe feelings and attitudes, hoping that subjects will connect these descriptions to their experience.
Costs: Both are cheap for the subjects, and relatively cheap for the investigator.

1c. [5 marks] Both use comparable as opposed to open-ended data, so you can add the data and produce a survey. Both are retrospective rather than on-the-spot instruments (they rely on subjects' memories). With checklists, design effort should be concentrated on maximising recognition by maximising visual similarity to the interface; whereas with questionnaires, the focus is on testing and refining wordings to reduce ambiguity.

1d. [5 marks] The exam question is basically a questionnaire with no fixed response categories (while a multiple choice question would be like the fixed response categories used in the type of questionnaire asked about). However the second part of the question almost approaches this by hinting at the points to be covered. The reliance on recall makes the exam question more difficult to answer, and the time cost to subjects (candidates) is very much greater.

2.
2a. In what ways is a form filling dialogue box (e.g. the Macintosh Print... dialogue) similar and dissimilar to issuing a command through a pulldown menu system? [5 marks]
2b. What are the main parts or aspects of an interaction object? Illustrate by describing what these are for a field in Hypercard. [5 marks]
2c. Hypercard has (very) limited inheritance mechanisms where a change to a single item is immediately reflected in multiple dependent descendents. Give an example of this for each of the 3 main aspects of interaction objects. [5 marks]
2d. Discuss Hypercard with respect to the framework given in the lectures for critiquing and classifying user interface implementation support. [10 marks]

Answer to Qu2

2a. [5 marks] Once opened, the user can fill in the form in any order, whereas they must follow the menu hierarchy strictly. The form shows all the options at once and leaves them in view, whereas pulldown menus are only visible once opened, and once an option is set it is hidden again. Because of the free order, there must be a special "do it" key in the form, whereas selecting a terminal item on a menu automatically signals completion of the sequence. The error potential is the same: probably no invalid selections will be possible. Feedback is probably the same.
2b. [5 marks] An interaction object has 3 main aspects: state, control, and display / presentation. (Control could be subdivided into dispatch, interactors, and coordination with other objects.) Hypercard fields: the field's contents (text in lines) are the main state. Its visible presentation is its display (controlled by various parameters). The obvious control aspect would be any script attached to the field, and this can make fields sensitive to mouse clicks for instance. More fully: event handlers e.g. for mouse clicks are interactors; coordination can be done by sending events to other objects; dispatch is handled by the general Hypercard event / message system.
2c. [5 marks] Inheritance of display is done by the background feature: graphics and other visible objects on a background are visible in all descendent cards; a change to a background object immediately shows on all child cards. Inheritance of state can be done by global variables or field contents. Background fields show more structured inheritance, as their contents show on all cards that are children of that background. Inheritance of control is done by sharing routines: handlers of user-defined events, placed for instance on the stack, can be called by any object, and changes to them will be reflected everywhere they are used. If you write a special interactor, placing it centrally and calling it from the event handler of specific object instances would achieve this effect.
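The three kinds of inheritance can be sketched schematically. The following Python fragment is purely illustrative: it is not HyperTalk, and it simplifies Hypercard's real message-passing path, but it shows how one change to a background is immediately reflected on all of its dependent cards for each of the display, state and control aspects.

# Illustrative only: invented classes, not Hypercard's actual object model.

class Background:
    def __init__(self):
        self.graphics = []      # display: shown on every child card
        self.field_text = {}    # state: background field contents
        self.handlers = {}      # control: shared event handlers

class Card:
    def __init__(self, background):
        self.background = background
        self.local_graphics = []

    def render(self):
        # Display inheritance: background graphics appear on every card.
        return self.background.graphics + self.local_graphics

    def field(self, name):
        # State inheritance: a background field shows the same contents
        # on every card sharing that background.
        return self.background.field_text.get(name, "")

    def send(self, event):
        # Control inheritance: events not handled locally are passed up to
        # a handler defined once on the background (or stack).
        handler = self.background.handlers.get(event)
        if handler:
            handler(self)

bg = Background()
cards = [Card(bg), Card(bg)]

bg.graphics.append("company logo")                    # one display change ...
bg.field_text["title"] = "Shared background field"    # ... one state change ...
bg.handlers["mouseUp"] = lambda card: print("shared handler ran")

for c in cards:                                       # ... seen on every card
    print(c.render(), c.field("title"))
    c.send("mouseUp")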
2d. [10 marks] The framework for critiquing support basically consists of 3 properties: level of abstraction, scope or generality, and the usability of the interface to the designer. Each of these applies to the 3 aspects of an interaction object, and again to the height or level of arenas within the user interface being designed.
Presentation: Hypercard has rather outdated graphics support by modern standards (lower abstraction and lower generality than you would wish), e.g. you cannot group graphic objects, and there is no colour.
State: either untyped global variables or fields, which have the structure of lines, words, characters. Hence some, but pretty limited, abstraction for state. On the other hand it is fully general: you could with effort implement any storage you needed.
Control: The event passing system means that the dispatch aspect of control is well handled (high abstraction, good generality, although keyboard events are not represented). Interactors (procedures for processing user input e.g. mouse gestures) can be written and used easily. However coordination within and between interaction objects is defective. You cannot associate an arbitrary graphic with input sensitivity (i.e. with a button) except manually, so that when you move the graphic you must move the button separately. This also means you cannot easily build composite interaction objects e.g. group a set of buttons into a unit.
Usability: Hypercard presents a very usable interface to its features. The interpreted language means no compile delays; direct manipulation approach to screen layout; readable script language, with online help to find new keywords. Difficulties are usually due to lack of facilities rather than poor presentation of them.
Arenas: Hypercard is best at the level of simple interaction objects. It is possible but difficult to build the effect of composite objects. It is possible to implement effects in lower arenas e.g. mouse gesture recognition; however the low speed due to the interpreted language greatly affects the usefulness of this in practice. It is not possible to experiment with window handling effectively. On the other hand, the facility for "XCMDs" i.e. imported compiled code does allow arbitrary extensions at all levels, although without the usability of directly supported features.
Separability: (apart from XCMDs) there is no support for separating the user interface from core functionality.

3a. What are the general kinds of sources of information for users? [5 marks]
3b. For each stage in the ToA that requires knowledge to perform it, give an example of a user interface design problem due to a failure in information delivery, and an example of a design technique to supply the missing information on the display. [20 marks]

Answer to Qu3

3a [5 marks] The user's memory; immediate perception of the display; information requests e.g. online help; experimenting with the interface.
3b. [20 marks]
Decision: Forgetting (some of) one's goals. Remedy: Electronic diaries and reminders; displays of text in a word processor remind you of spelling and formatting errors. In general, displays of a program's state.
Planning: Not knowing what set of actions you need to achieve your goal e.g. moving text by combining cut and paste. Remedy: command names like "print" that refer to the overall goal, and lead you via dialogue boxes to the sequence of actions.
Translation: Not knowing the name of a command you need. Solution: menus and recognisable command names.
Execution, segmentation: do not require information.
Recognition: Need to know the meaning of items like icons and command names, or at least the type of thing they represent. For instance, buttons need stylised borders to be recognised as something to operate on. Remedy: attempt to choose recognisable icons and names, although this is not in general possible.
Interpretation: Failure to pick up the relationship between objects e.g. between window and icon. Solution: animations of window opening and closing; "hollow" appearance of an icon with an open window.
Evaluation: Failure to realise that an action has succeeded or failed. Solution: better feedback. In general, the constant display of state is adequate, but details also matter e.g. automatic scrolling to the point of a change (e.g. after a find and change command) to make sure it is in view.

4. In HCI, a crucial approach is to modify designs in response to detecting bugs in the user interface. This process of going from tests to design changes can be divided into stages. How might these stages be referred to in a) an analogy with medicine and the detection and cure of illness, b) in the prototyping design cycle? [5 marks]. Describe each stage, giving examples of methods for each stage, and illustrated by an example of a problem [15 marks].

What problems do you think there are in giving a specific number in answer to the question "How many bugs have you found in this user interface?". [5 marks].

Answer to Qu4

4a [5 marks] In the medical analogy: symptom detection, diagnosis, proposing a remedy (treatment). In the prototyping cycle: observation of user performance, interpretation, (re)design decisions.
4b [15 marks] Observation: many methods can be used for observing users and their problems, but thinkaloud protocols are perhaps the single most important. The choice of a command name e.g. "Chart" that does not spark recognition in a given user would show up as that user searching in a puzzled manner, saying that they are trying to find a command to draw graphs, and eventually giving up. (They might even consider "Chart" but say that it seems to be about maps.)
Interpretation: This could be done using the theory of action. In this example, failure is at the translation stage: the user correctly believes there to be a suitable command but cannot find its name. Another useful classification is in terms of user experience: this is a problem of guessability (first time use), not of experienced use.
Redesign: Try changing the command name. Perhaps the user revealed how they were thinking of the command which would be a start. However names are chosen partly in contrast to others, so changing the menu title or the other names on the same menu could have an effect. If there is online help, then this must have failed too, so modifications to it might be useful: did the user not find the help facility? or did it not have an entry for the topic of "graphs" e.g. it repeated the problem of calling them "charts"?
4c [5 marks] Although detecting symptoms and interpreting problems can in principle be done objectively, counting the number of distinct bugs depends on what remedies (redesigns) are proposed, and that depends in part on costs. For instance, changing a command line interface to a menu one is "one" change, yet may solve very many bugs. Another problem is that typically information is made available to users through several routes, but only one need work. Consequently if a user does not understand a command name, but successfully uses the online help or learning by exploration, then no serious problem will be observed. Is this a bug? If seat belts do not save anyone's life in a given car because it never crashes, is this a design bug or a wasted feature?


HCI module exam, May 1994

1 (a) [1/2 the marks] Describe the 5 main steps in the prototyping cycle, and the output of each step.
(b) [1/4 of the marks] How might you begin the cycle for a new design?
(c) [1/4 of the marks] What might you use to help you go from understanding a problem to proposing a design change?

Answer to 1

This outline answer is abbreviated to the terms used in the course; in the student answers, I expect some demonstration of understanding of the points, but not necessarily reproduction of the terms used here.

1a] Equal marks for each of the 5
1) Observe the interaction --> observed symptoms
2) Interpret symptoms --> diagnosed bugs
3) Decide design modifications --> specifications
4) Implement the design --> new prototype
5) Develop work practices --> user skills

1b] Equal marks for each of the 2 ways:
Either: Pick the nearest existing design, and join the cycle at observing the interaction, this time with new ideas about what to optimise for, and how to achieve it;
OR: If there is no comparable existing design, then do a pre-design survey asking potential users to imagine the design; do a first design; join the cycle at the implementation step.

1c] Equal marks for each of the 3 points:
Published guidelines are supposed to help go from problems to solutions, though they are often too general to help much in practice. The second major consideration is the programming environment of toolkits, libraries etc., which has a big effect on how easy it will be to implement various solutions, and so should be taken into account in picking one. The cost to users of not fixing a bug is a third consideration if, as is often the case, only some of the problems can be addressed within the available resources.

2. Of the 6 measurement instruments covered in the course, which is the only one that may not involve asking the user any questions at all? [3% of the marks]
What are the ways that the other 5 instruments vary in what the questions typically ask about, how they are asked, and how answers are recorded?

Answer to 2

A controlled experiment may not use questions at all. [1 mark]

30 marks for the question. A standard answer is sketched below, in terms of 5 points for each of 5 instruments. The remaining 5 marks are bonus points: 1 for the opening question, others for discussing additional relevant features, or showing how notionally separate instruments shade into each other in practice. The main test for students will be deciding which features to use in a comparison, as the question does not list them all, and recalling the nearest relevant handout gives some points not asked for and others that need rephrasing for the question.

Questionnaire: Usually about attitudes. Fixed (printed) wording. Fixed response categories; categorised by the subject; no help or prompting.

Semi-structured interview: Usually about attitudes; wording may be varied by interviewer. Fixed response categories; categorised by the interviewer; interviewer helps to establish understanding of both question and answer categories.

Feature checklist: Usually about past behaviour (which commands used). Fixed (printed) wording. Fixed response categories; categorised by the subject; no help or prompting.

Incident diary: Usually about incidents, i.e. machine and user behaviour. Fixed (printed) wording. Mostly fixed response categories; categorised by the subject; the event itself (interaction with the machine) is the prompt, so it doesn't rely on memory.

Think aloud protocols: About behaviour, and reasons for it. No fixed wording; no fixed response categories; categorised by the observer; interaction with the machine is the main prompt, plus observer prompting to speak, and perhaps more directed questions, especially of clarification.

3. A company wants to hire you as an HCI consultant expert in connection with a public information system about transport in Glasgow. They have a version nearly working, and hope you can look at it quickly, so they can release it. What is wrong with this suggestion? Provided you can get them to agree to a sensible plan of work, what would you suggest to them? Outline what tests you would do, and how your work should interact with their team's work.

Answer to 3

The other 3 questions can be passed by someone memorising the handouts, and probably done excellently by someone with enough understanding to realise which bits to pick out and reproduce. This question is the opposite: someone who has forgotten most of the details but imbibed the main spirit of the course will do well, while someone who only knows the handouts separately will probably be lost, as there was no lecture pulling together the points required here. Because of this, the following marking scheme should be applied leniently: the major test is whether they realise which basic points are being asked for here.

Half marks for: saying that the consultant should push for a prototyping approach, rather than looking at the system just once. So after testing it on a few users, they should get the team to implement modifications, then retest, and iterate several times. The consultant's role will be to report observed problems to the design team, and to participate in discussions about what modifications to attempt.

Half marks for outlining the tests: thinkaloud protocols on users using the system, ideally in field conditions (a transport information system might be located at underground stations). Recording which bits are most used would also be valuable (logging, feature checklists). It might be worth running a lab experiment, giving subjects specific things to discover using the system, and measuring time and errors. This forces testing of all parts of the system and is a good test of whether people can get that information at all, but it is unrealistic in that subjects may try harder than they would in normal life, and it doesn't tell you what information people actually require in practice.

4. In general, what are the alternative sources of information available to a user of a program? [20%]
In the theory of action, what are the four stages on the side that provides information to the user, and what are the products that emerge from each? Give examples. [40%]
One kind of information is about state: e.g. which files exist, whether a program is running. What is the other important kind of information? Give an example of this kind of information in each of the three arenas for the case of an electronic mail program. [40%]

Answer to 4

Memory or knowledge; immediate perception of the display; information requests to other people or to help facilities; experimentation with the program. 5% for each of the 4 points.

For each of 4 subparts, 3% for each stage name, 3% for result, 4% for convincing examples.
a) Segmentation --> display items e.g. a symbol or word
b) Recognition --> computer items e.g. icon, label, error message
c) Interpretation --> complex states e.g. my file, the copy command has completed, ..
d) Evaluation --> which of the user's goals have failed or are now achieved e.g. it copied 3 of the 4 files, the machine is now shut down OK.

10% for the information type, 10% for each of 3 arena examples.
Affordance or how-to information: not about what state the machine is in, but about what actions will have what effects. Central arena e.g. what the "send", "read" commands do; articulatory arena e.g. how to issue those commands (menus, keyboard, whether to press <return>); semantic arena e.g. you use sending a message as part of a larger plan that includes waiting till tomorrow for a reply, keeping a copy of the message in case it has to be resent, etc.