Last changed 1 Sep 2006. Length about 3,000 words (23,000 bytes).
(This document started on 1 Sep 2006.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/qqa.html. You may copy it.


Quantity, Quality, and Accounting


by Steve Draper

Preface

A draft response to a paper by Nicol & Coen, "A model for evaluating the institutional costs and benefits of ICT initiatives in teaching and learning in higher education", which presents a model for cost-benefit analysis and is to appear in ALT-J.

The importance of CBA

Nicol & Coen argue for developing cost-benefit analysis (CBA). Any discussion should begin with at least a brief review of what the point, and the limitations, of CBA are. There are three reasons for this: firstly, because many people do not accept that it is possible at all, on the grounds that it involves comparing unlike things (apples and oranges); secondly, because understanding the purpose of the analysis may make it possible to justify the necessary simplifications for special cases and limited applications while being explicit about the limitations; and thirdly (and conversely), because this understanding can expose errors and omissions in the analysis even after its general principles have been accepted.

The need for cost-benefit analysis does not come, fundamentally, from a crass desire to ignore distinctions of quality and value in order to reduce everything to money, and so exalt money above other values. It comes from the need to make decisions. In making a decision the decider in effect places the alternatives in a single order so as to select one as best. The alternatives usually have many dimensions, which are not equivalent to each other, yet have somehow to be collapsed for the purposes of the decision. In practice, if not intellectually, everyone is familiar with this. Whether choosing a new HiFi, a place to live, a spouse, or a job, multiple incommensurable factors, along which the alternatives differ, must somehow be reduced to a single figure of merit in order to make the decision. Only people with no alternatives avoid this. For a decision, the figure need only be ordinal rather than a "ratio scale" quantity like most scores, i.e. only good enough to put the alternatives in an order. But this still means most of the information has been discarded by treating apples and oranges as comparable. The most familiar example in education is assessment: whenever we assign a mark or grade, whether for a whole degree course or a single piece of work, we are ignoring the fact that a learner has many different kinds of quality, and that there are a large number of quite different ways to get most of the grades awarded. Using money as the unit is just one convention. Reducing things to a single figure of merit ignores important qualitative knowledge; refusing to do it prevents rational decision making; being explicit about the process allows progress, along with a firm statement about the limits of the applicability of the analysis.
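
As a purely illustrative sketch (the alternatives, dimensions, and weights below are invented, not taken from Nicol & Coen), collapsing several incommensurable dimensions into one figure of merit, so that alternatives can be put in order, might look like this:

    # Hypothetical illustration only: reducing multi-dimensional alternatives
    # to a single ordinal ranking. All names, scores, and weights are invented.
    alternatives = {
        "more written feedback":  {"learning_gain": 3, "staff_time_cost": 2, "satisfaction": 3},
        "new teaching technique": {"learning_gain": 4, "staff_time_cost": 4, "satisfaction": 4},
        "reading party":          {"learning_gain": 2, "staff_time_cost": 3, "satisfaction": 5},
    }

    # The weights are where the value judgements are made explicit;
    # a negative weight marks a cost rather than a benefit.
    weights = {"learning_gain": 0.6, "staff_time_cost": -0.3, "satisfaction": 0.4}

    def figure_of_merit(scores):
        return sum(weights[d] * value for d, value in scores.items())

    # Only the resulting order matters for the decision, not the raw numbers.
    for name in sorted(alternatives, key=lambda a: figure_of_merit(alternatives[a]), reverse=True):
        print(name, round(figure_of_merit(alternatives[name]), 2))

The arithmetic is trivial; the point of writing it down is that the weights, which a covert decision would leave hidden, are exposed to challenge.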

A pure researcher probably does not want to do this: after all, it loses information, and pure research is not meant to support practical decisions. However anyone engaged in actual learning and teaching, or in applied research that wishes to provide the grounds for practical use of educational methods, does have to make decisions about what is best to do. They are thereby obliged to reduce things to a single figure. If they do it covertly rather than explicitly, they are simply more likely to make, or to recommend, bad decisions, and they cannot give a rational basis for the choice.

An example from another field is assigning a money value to each human life in reasoning about whether to pay for further safety enhancements on railways. (Current UK figures used for Values for Preventable Fatalities are £1.2 million for road traffic and £2.8 million for rail fatalities, although the TPWS (Train Protection and Warning System) installed recently had a value of £10 million, according to an article by the Rail Freight Group (2003).) Many people don't like to equate a human life to a money value. However this doesn't get around the need to decide whether to pay for each possible new safety feature, nor the fact that money is a limited quantity. If the same money can buy 5 saved lives on railways, but 10 saved lives by applying it to a road safety scheme and 50 by spending it on a cancer screening programme, then many would see the last as the place to spend it. By limiting the applicability of the analysis to comparing life-saving schemes, it is possible to do useful calculations that, by applying the same assumptions to each one, avoid trouble even though those assumptions are questionable; whereas it probably could not be extended to reason usefully about how many foreign holidays each taxpayer should sacrifice in order to fund all the safety schemes, i.e. comparing holiday pleasure for millions to saving a few dozen of other people's lives.
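
As a minimal worked sketch of the comparison being made here (the lives-saved figures are those in the paragraph above; the £10 million budget is an invented figure for illustration):

    # Illustrative only: comparing life-saving schemes on cost per life saved,
    # applying the same assumed budget to each. The budget figure is invented.
    budget = 10_000_000  # pounds, assumed to be spent on exactly one scheme

    lives_saved = {
        "rail safety feature": 5,
        "road safety scheme": 10,
        "cancer screening programme": 50,
    }

    for scheme, lives in lives_saved.items():
        print(f"{scheme}: £{budget / lives:,.0f} per life saved")

    # Because the same assumptions are applied to every scheme, the ordering is
    # useful even though each assumption is individually questionable.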

The field of education, and still more that of learning technology, may be particularly overdue for attempting this. Very few of the numerous projects applying ICT (information and communication technology) to education admit that the materials they produce entail a prioritisation of, and tradeoff between, time, money, and quality. Because in almost all cases the budget was fixed, decision making on these projects in fact gave money top priority over quality and timeliness; but it would be easier to get a Victorian to write about sex than to get most educational researchers to write about this.

But money probably isn't the most important resource requiring educational decisions. Every day teachers make decisions about what to spend their time on. In universities, where careers depend on research and teaching gets limited attention, this is if anything more pronounced. Would it be better to spend this time on better written feedback on the last piece of student work, on introducing a new teaching technique (these always require learning time by the teacher, and creating new materials and lesson plans), or on developing a "reading party" or other group interaction in order to increase student "integration" (Tinto, 1975)? This requires comparing unlike things in order to make a decision. The widespread refusal to do this can only mean that decision making is done badly, and learners are consequently usually disadvantaged. Those most eager to say they are learner-centred are often the least prepared to make the best decisions for learners.

Is CBA worthwhile in researching learning technology? Yes, because it forces attention to factors that determine educational outcomes but which many people prefer to avoid thinking about, such as the cost-time-quality tradeoff (it is a pity the authors didn't give an example of using the model on this).

Qualitative over quantitative: identifying the important factors

Nicol & Coen offer a model for evaluating costs and benefits of CAL projects: that is, of any use of ICT in HE (Higher Education) institutions. The overt aim of such a model is to calculate costs and benefits quantitatively in order to make decisions between alternatives. The actual but implicit benefit, they argue, of employing the model lies in the process of trying to identify the existence of costs and benefits and then of trying to measure each of them.

In effect, the advantage of this approach is not quantitative (being able to make close decisions more accurately) but qualitative (identifying what the key issues are). Another implication is that the value of their model, if any, lies in the value and accuracy of the categories or factors in their model: if these do not in fact correspond to sensible and useful divisions, then while their general approach and its procedure may stand, their specific model and its categories will not. Conversely, their approach will gain value to the extent that it is developed to have a library of important factors for users of their model to consider applying.

A third implication is that there should be a phase added to their "method" in which the user actively searches, using open-ended methods, to discover what the more important factors are (the more important costs and benefits). After all, on their own account, this is where its main value lies. This would further shift the value from the "model" (the list of categories and factors) towards the (extended) method.

Pessimism about making the quantitative aspect work (and so relying on the qualitative to make it worthwhile) is reinforced by Landauer (1995). His book, based on his work on a panel advising the US government, argues that they had been unable to find any evidence that investment in IT had brought any benefits to the US economy so far except in a few special technical areas, in contrast to the kind of clear economic indicators of the benefits of other inventions. If nearly 50 years of IT investment in all industries in the USA do not show clear quantitative benefits, then the prospects of measuring these in an ICT project in one university seem remote.

My general view, independently of this paper, is that above all in education we need to understand what the important factors are: we are far from identifying, let alone quantifying, them. We need this for theory, and to structure evaluation methods. The CBA attempt is another way to force ourselves to uncover and list these factors. It is thus aligned with my general beliefs about what we as educational researchers need to be doing. Is their approach in fact focussed enough on this? In general I wish to emphasise their key qualifying statements, criticising their approach only for not going far enough and not fully facing the implications of their own statements.

Accountancy and research

They say that on the whole costs are "hard" measures (count the money) while benefits are "soft" and difficult to measure. That is more nearly true than its opposite would be, but it doesn't go nearly far enough. In particular, it fails to bring out the general difficulty in accountancy of inventing and applying categories that give a useful analysis.

For example, every year many small businesses fail, and one of the leading causes is a failure to appreciate the notion of cash flow. People who naively believe that money is money, a hard measure that is just a fact, are likely to believe that as long as they get in as much or more money to their business as they pay out on costs, then all is well. However if the money coming in always arrives months after the money going out, then balance is not enough (they have to borrow money, and pay for that). Businesses like airlines and insurance, whose customers must pay in advance, do not have this problem; retail shops and services do: there are no universally appropriate methods.

Another classic issue is whether to count an expense as an investment or a recurrent cost. Tax laws mean companies profit from reporting it one way rather than another, but the problem is a fundamental one: in many cases it could legitimately be seen either way. The cost of computing facilities for students is ambiguous in this way too, since desktop machines have only about a three-year life before they are obsolete. Special initiatives that in effect regard this as a one-off long-term investment, like a new building, have often failed to have a lasting impact. Essentially, naive accountancy has alone been enough to frustrate numerous otherwise promising ICT applications.

Another example, alluded to in the paper being discussed, is the attempt to classify staff time as either teaching or research. For most staff, many activities may contribute to both, and this classification is fundamentally unsound as an accounting measure. It is true that we can measure time (like money) accurately. It is not true that it is unproblematic to assign each unit of time to either teaching or research. For example, at a conference I may hear ideas which I may use in my own research or refer to in courses I give, or both. I may have funded my conference attendance by giving a tutorial there, but it is not only possible but likely that materials I develop for that tutorial will then be used in a university course. Furthermore, I may not know until a few years later what the uses of an idea, or of the days spent at a conference, are: so a data collection exercise involving these categories could not be completed until some years later. Conversely, supervising student projects usually counts as teaching, yet many grant applications are based on preliminary work done in such projects.

Still more relevant is the case of how to classify the cost of a student learning to use email. Is this a skill they will need later for employment, a skill to make the course administration (e.g. announcements) work, or a skill they will use in their social life? The fact that it is actually all three in most cases is what makes it worthwhile in reality; but it also matters because individuals may initially perceive only one of the benefits when deciding whether to cooperate, so the more benefits there are, the more individuals will cooperate from the start, before all the benefits are generally perceived. It also makes a difference to whether the institution can push the cost off onto students.

These simple examples bring out not only the inappropriateness of apparently obvious cost categories, but also an issue not faced by Nicol & Coen: how the same action will be perceived (i.e. classified) differently at different distances in time from the action. A student signing up for email training, or an academic going to a conference, will very often see the benefit differently before going, just after going, and some time later. So too would an analyst, since more information about the true beneficial consequences is available later; yet decisions have to be made on the basis of advance information. Thus these measures are neither objective nor stable over time. Furthermore, this suggests that CBA done purely to support management decisions may be rather poor at understanding what the real benefits are, and so poor at developing understanding of education, and poor at supporting management decisions in the longer term.

The general problem with accountancy categories or conventions, such as those in their model, is that they try to assign costs under headings, where each heading is in fact the name of a benefit. In very many cases a given cost produces benefits under several headings, and this is usually fundamental to the viability of the activity. Thus the categories used for costs, far from being "hard", are wholly questionable, yet central to the whole business of CBA. Accountancy is creative, not in the sense of being deceptive, but in the sense that it is crucial but difficult to identify the categories that are illuminating and useful for making decisions. Their brief acknowledgement of the general issue does not admit that their whole model may be wrong, and that the categories they have imported from other areas could be wholly useless here. In fact, if a proposed category is familiar from other areas of accountancy and management then, far from having added authority, it should be regarded as a bad sign: very likely its validity and usefulness for this purpose have not been examined, and it has been adopted uncritically and so probably inappropriately.
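
A small sketch of the underlying difficulty may help (the cost figure and allocation rules are invented, not drawn from the paper): the same money can be split across benefit headings under several defensible rules, and the choice of rule changes what the accounts appear to say.

    # Illustrative only: one shared cost allocated across several benefit
    # headings, using two equally defensible rules. All figures are invented.
    total_cost = 9_000  # pounds spent on, say, training one cohort to use email

    headings = ["employability", "course administration", "social integration"]

    # Rule A: split the cost equally across every heading that benefits.
    equal_split = {h: total_cost / len(headings) for h in headings}

    # Rule B: charge the whole cost to the heading that justified the spend.
    charge_to_sponsor = {h: (total_cost if h == "course administration" else 0)
                         for h in headings}

    print(equal_split)
    print(charge_to_sponsor)

    # Both rules account for exactly the same money, yet they tell different
    # stories about what the institution is "spending on" each benefit.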

Overall there are three major problems with the influence of traditional accountancy underlying the Nicol & Coen paper. Firstly, much of the accountancy profession has to do with applying categories that are general and fixed by others (e.g. by tax law or by the laws governing accounts for public companies): these legal requirements in turn stem from forcing companies to apply the same categories in order to allow investors to compare (unlike) companies, and to prevent those companies from evading scrutiny. Here, however, this predilection for fixed, generally applicable categories is irrelevant. On the contrary, because of the extraordinarily primitive state of the field of CBA of education, we should build up cautiously from small cases, ready to use different categories each time, with limited validity, as in the case of trying to reason about safety improvements. Generalisation may only become possible later. Instead the first thing is to discover the useful categories in each case. The preference for standard categories, so important in other areas of accountancy (and in a mature science), is a liability here.

Secondly, though they say the paper comes from trying to combine an accountant's approach to costs with an educational evaluator's approach to benefits, in fact the accountant too is identifying benefits not costs. It is not difficult to measure how much money has been spent: the difficulty is in identifying what benefits have been bought by it. Categories such as research vs. teaching, perhaps even infrastructure vs. value-added activity, are categories of benefit bought by money. But is accountancy the kind of expertise most likely to identify educational benefits?

Thirdly, and on the other hand, money is the only cost (in the sense of negative benefit or limited resource) being analysed in their model, and this is a huge defect. If accountants can only measure money, then other cost experts must be brought in to identify and measure the other costs, such as staff stress (soon to become a legal liability for organisations) and student time (not paid for, yet a fixed and limited resource around which the organisation's business is structured).

Learning costs

My final point is to give an example of this failure to analyse costs: the issue of learning costs. Though it may seem paradoxical, the most neglected factor in managing HE is the way in which the learning (or "personal development") of the people involved is ignored. In ICT projects, it is often actually a dominating factor even though usually ignored. The failure of many expensive initiatives for promoting ICT in schools has been attributed to "forgetting" to consider and then fund the need for the training of school staff. Ditto in HE. Its absence in their model is a huge defect. Of course it isn't there because it doesn't seem to be "like" the traditional accounting headings that are in their model. But as noted above, this itself is a major symptom of the model looking in the wrong direction, and failing to examine the educational domain.

Learning costs are particularly tricky because they change fast over time. In the USA, HE is beginning to be able to assume that the majority of its arriving students already have a basic familiarity with computers. This means that training in basic computer literacy has moved from being a feature that must be offered to the whole intake, to being a remedial activity for a minority.

However the basics of learning costs have nothing to do with ICT; they are completely general in HE decisions. I have, for example, seen a project on an innovation in HE teaching that was essentially the introduction of essay writing into a degree in accountancy. It had all the benefits that anyone from a discipline that routinely requires essay writing would expect. However, in order to achieve this innovation, enormous amounts of support had to be provided precisely because it WAS an innovation: the students had no experience of what writing an essay involved, and had to get a lot of practice with a lot of instruction and formative feedback (from staff). In other disciplines this skill can be assumed on entry to HE. This example revolves around the learning cost of students acquiring a particular skill. The issue is whether the cost is borne by schools or by HE; by a degree programme as a whole, or by a single course within that programme (i.e. if your module requires it and no other does, you will have to provide all that support, and also greatly reduce your curriculum to allow time for it). Thus the "cost" of this innovation is dominated by the learning cost, and is completely different depending not on the content of the innovation but on its context and scale (spreading the cost across a single module, a degree programme, or the school-to-HE track for a discipline).

Learning costs are crucial to analyse because they have a major effect on a) when a given project may become worthwhile (perhaps not now but in three years' time); b) which disciplines it will and won't work for; c) economies of scale: it may work only for large-scale change, where a single learning cost gives benefits on repeated occasions, and not for small-scale change.
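
As a minimal sketch of point (c), with all figures invented: the same one-off learning cost looks completely different depending on how many occasions of use it is spread over.

    # Illustrative only: a fixed learning cost amortised over different scales
    # of adoption. The hours and counts are invented for illustration.
    learning_cost_hours = 40  # one-off cost for a student to acquire the skill

    occasions_of_use = {
        "single module": 2,
        "whole degree programme": 20,
        "school-to-HE track for the discipline": 60,
    }

    for scale, n in occasions_of_use.items():
        print(f"{scale}: {learning_cost_hours / n:.1f} hours of learning cost per use")

    # The content of the innovation is identical in every row; only the scale
    # over which the one-off cost is spread changes, and with it the verdict.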

Conclusion

There is no space for more detailed discussion: I have stayed with the biggest, because most general, issues.

To support decision making and so to do anything of practical usefulness to learners, we must be able to reduce all alternatives to a single order of preference. This does not necessarily require expressing their merits in terms of money or any other numerical score; but it does require comparing qualitatively unlike things — which is the most frequent reason given for rejecting CBA. Nicol & Coen, then, are tackling one of the most important issues in the education field, and one with enormous potential benefits to learners. However the benefits may be a long way off still, because the difficulties are substantial. I thus fully endorse their overall aim, but is progress possible?

The most important thing is to discover what the important factors are. Their method should be modified to stress an open-ended enquiry phase of active search for the important factors in each case.

The "model" they describe, i.e. the accountancy categories they propose, may be largely or wholly wrong, precisely because they are borrowed from other areas. Both as educational researchers and from a management accountancy viewpoint it is on the contrary vital to discover the important categories, not blindly accept ones because they are traditional (even useful) in other fields.

One example of a vital missing category is learning costs. Another simple test of whether a model is even beginning to qualify is whether it can reason about the cost-time-(learning) quality triangle that implicitly dominates most practical educational decision making, in both research projects and daily practice; or alternatively, if you follow Phillips (1996) and Reeves (1992), then you will require it to reason about the quantity-quality-cost tradeoff.

References

Landauer, T.K. (1995) The trouble with computers: Usefulness, usability, and productivity (MIT Press; Cambridge, MA)

Phillips, R. (ed.) (1996) Developer's guide to interactive multimedia: A methodology for educational applications (Computing Centre, Curtin University of Technology, Perth, Western Australia)

Rail Freight Group (2003) "Have we gone mad?" Modern Railways vol.60 no.658 p.12

Reeves, T.C. (1992) "Evaluating interactive multimedia" Educational Technology May, pp.47-52.

Tinto, V. (1975) "Dropout from Higher Education: A Theoretical Synthesis of Recent Research" Review of Educational Research vol.45, pp.89-125.
