Stephen W. Draper
GIST (Glasgow Interactive Systems cenTre)
University of Glasgow
Glasgow G12 8QQ U.K.
WWW URL: http://www.psy.gla.ac.uk/~steve
My earlier essay on this is here.
A report on other views brought out in the debate is here.
Essentially the same argument supports the idea that, at university level, teachers need to be researchers. They are in fact performing the same role as interfaces: students could in principle learn without them, but in practice a good teacher acts as an interface between the students and the subject and its literature. Students, like other clients, need to be told what, of the sea of material, is important, and how it matters to them. That is a large part of what being a teacher in HE is.
It implies that for researchers to benefit others -- for technology transfer to occur -- basic researchers need to spend some of their time on research and some on thinking about applications and communicating the results of that thinking to companies or other bodies concerned with developing applications. In fact, learning about what might be useful as an application may require as much learning as mastering the research literature: prolonged immersion in both is necessary, and this argument suggests that the same mind should do both, because neither side by itself can predict what would be useful to the other, and so cannot ask the right questions or offer the right unsolicited information. It implies that at least some people should spend part of their time on basic research and another part on applications (although it does not mean that everyone must do both); but it does not mean that these are the same activity. On the contrary, it suggests that they are distinct, and need to be done and funded separately.
The time frame for product development is very short, and getting shorter. This is not because society is organised badly, but because commercial success rightly depends in part on satisfying customers, and their needs change rapidly. Thus demand for ideas and for trained workers varies on a short time scale. In contrast, the time scale for basic theories to be translated into important applications can be 100 years, and the time scale for a theoretically based application to become widely used is nearly as long, certainly decades (e.g. relational database technology, television, ...). The time scale for training workers such as researchers and developers is also long (e.g. 3-6 years), and the time scale for setting up educational courses or organisations to train them is longer still. This contrasts with how demand varies: month to month, or a year in many small companies, and certainly seldom more than two years. In large companies sacking and hiring is rarer, but projects typically have the same short time horizon, revealing the real underlying timescale of the pressures. The most productive background for such commercial life seems to be a single area, perhaps the size of a city, with many similar companies and a university all pooling workers: demand can then successfully be short term, because when one job ends, each worker has a good chance of finding another without moving house, changing their children's schools, or leaving their partners.
Providing trained workers is often cited as the main contribution of academics; and to the extent that this depends upon their being active researchers, it is a reason for funding their research independently of results.
It is possible that the normal arrangement for this could, and should, change. If the university is good, its students will leave with up-to-date training in knowledge and research, but this will decay from then on (although they will learn other things they need for their job in their workplace). As we take the notion of lifelong learning more seriously, this may lead to a more fluid interchange and retraining of workers. Again, having it all located in one area is likely to support this best; and supporting academic research in order to maintain academics as up-to-date teachers will be important.
This is a fundamental argument for having academic specialists for each area. The point is to have specialists in that area itself, not specialists who may eventually benefit that area. HCI is typical of many areas in that other areas contribute to its practical development: modern interfaces depend upon software architectures, graphics, and hardware, among other things. But although one valid justification for funding graphics work is that it may well lead to better human-computer interaction in future, that is not the same as funding HCI research itself. We should not pretend that contributing technologies ARE HCI: speech recognition, for example, is no different in this respect from graphics or hardware design. They do contribute, and they are worthwhile in their own right, but they are not HCI; any more than funding maths is the same as funding chemistry, even though there is a sense in which chemistry is specialised physics, which in turn is almost wholly dependent on applied mathematics.
On this story, government will provide some funds for technology transfer, and for applications that seem (to researchers) likely to be valuable but that seem too uncertain to companies. It might also support secondments of academic researchers to industry to work periodically on applications. But these are separate activities, requiring separate support. Calling them all "research" does not make them interchangeable instances of the same activity.
For researchers in HCI, some of the justifications for research do not require eventual benefits (keeping up to date and in practice to support teaching; keeping up to date so that, when doing other work on applications, they are able to contribute). But especially in this applied area, they can ask themselves whether what they are doing is likely to contribute to society.
Thus work on yet another interface technique is not very likely to affect the world. It will only do so if it is taken up by companies, yet it is the kind of work they are most likely to do themselves anyway. It may be more useful to consider what kind of work is most neglected by commercial interface builders.
*Codified design methods do seem to (slowly) get transferred to software companies. These may be worth working on: in effect, codifying aspects of design and production that give rise to severe problems when neglected.
*Chris Johnson (HCI'97): look at emerging technologies not yet mass-marketed, predict the usability problems and issues in designing the user interface, and do and publish research on this in time to influence the manufacturers. He claimed this in his conference talk, but the printed paper seems only to deal with the example of radio links from mobile devices to central computers (Johnson, 1997).
*Identifying problems of a new kind. There is a sense in which the whole HCI field is just that: identifying and articulating a problem that had always been there in principle, but was not focussed on. For instance, taking usability (costs to humans in time and errors) seriously used to be the key issue; now the focus is more often on how devices do or do not fit into wider workplace interactions, typically involving several people and several kinds of machines: how a new device fits into the whole pattern is often more important than its usability in isolation. (This is in effect taking the argument above for methods a step further. That is, analysing and codifying problem-driven experience may be the valuable function here, and one not so well done by commerce alone, as companies do not have the motivation to put resources into sharing experience with each other.) Where the chokepoints are is still shifting, and hence so are the kinds of concepts, methods, and solutions that matter.
*Finally: taking Thimbleby's points seriously (his HCI'97 panel contribution, discussed in my first essay). He advocated training the public to complain about usability. After all, although average usability has improved enormously, users are still needlessly troubled by all sorts of things, and the market still does not seem effective in pushing producers into doing better. Working on articulating this better might indeed be a key issue: for instance, developing easy-to-use measurement methods, so that criticisms of designs can be made easily yet more objectively.
Thus perhaps the key role here is one of public service: reflecting on and codifying communal experience in HCI. This will be used by industry, but will probably not be created by industry, since doing it properly costs resources while the main beneficiaries are others profiting from your experience. Such experience might be expressed in design methods or aids; in better usability measures that capture bad experiences as measurements; or in concepts developed to describe new issues that emerge as important limits on the overall success of human-computer interaction.
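To give one concrete illustration of what an easy-to-use usability measure looks like (my example, not part of the original argument): the System Usability Scale (SUS) is a standard ten-item questionnaire whose scoring rule is simple enough to state in a few lines. Each item is answered on a 1-5 scale, with odd items positively worded and even items negatively worded; the sketch below applies the standard SUS scoring arithmetic, though the sample answers are invented for illustration.

```python
def sus_score(responses):
    """Convert ten 1-5 questionnaire answers into a 0-100 SUS score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten answers, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # contribution is r - 1. Even-numbered items are negatively
        # worded: contribution is 5 - r.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to the 0-100 range

# A fairly positive (hypothetical) set of answers:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # 82.5
```

The point is not this particular instrument but the property it exemplifies: a critic armed with such a method can turn "this design is annoying" into a number that can be compared across products, which is exactly the kind of objectivity the market pressure argument above calls for.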
Product success depends upon many things, and which of these factors is crucial in each case will depend on the niche and the market conditions.
It should be remembered, therefore, that success can often be determined by other factors; free markets are like evolution, and guarantee not quality but only relative "fitness". To the extent that HCI does represent users' interests, we should perhaps not take market success as a measure of usability, but rather consider how to make usability have more effect in the marketplace. It could be that usability for computer products is like health for food products: customers get whatever minimum standards the government makes compulsory, not what would be best for them. They are almost never offered a choice in which everything is the same except the level of health safety for food, or of usability for software.
Kealey, T. (1996) "You've all got it wrong" New Scientist vol. 150, no. 2036, 29 June, pp. 22-26.
Newman, J.H. (1853/1959) The idea of a university (Doubleday: New York).