Last changed 13 Sept 2015. Length about 900 words (7,000 bytes).
(Document started on 13 Mar 2008.) This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/jimeEntry.html. You may copy it. How to refer to it.


Including the quantitative: Experiment, surprise seeking, theory

By Steve Draper,   Department of Psychology,   University of Glasgow.

This is the entry point for a paper written in 2008 on educational evaluation methods and methodology.

The reader is warned that this paper was rejected by JIME, essentially (if I understood the feedback) for not being pompous enough.

See also this note on quantitative vs. qualitative.

Abstract 1

In this paper I briefly describe my approach to, and views on, evaluating e-learning cases. As will be seen, the key elements my approach revolves around are comparisons (e.g. systematic surveys or controlled experiments), seeking out surprises that overturn our expectations, and the importance of theory in understanding those surprises and reducing the number of future ones. I have put the word "quantitative" prominently in the title because addressing that was my assigned role in the original collection of papers. However, in my view it relates mainly to only one of the three key elements we need, and by itself is insufficient.

The journal issue was intended to embody a discussion of methods and approaches. In this paper, then, I will present three "landscapes" of e-learning research, though of course all from my own viewpoint. First, the methods as I see them: what I would recommend, why, and what is good. Secondly, what kinds of method are common in published e-learning papers -- and their characteristic weaknesses as seen from my perspective. Thirdly, how my approach might look from another position: the framework proposed for this journal issue to relate the papers, drawn from a social sciences textbook on qualitative methods. It is of course in one way absurd to use a qualitative-methods framework to position a paper whose commissioning requirement was to speak about quantitative methods. On the other hand, since my real position is that a mixture of methods is not only allowable but essential, this is not a priori a fatal move; and furthermore, understanding the full set of e-learning research methods may involve commenting on how each method looks from the other methods' viewpoints.

Abstract 2

Looking back on this in 2015, I wouldn't now bother with the originally-required questions, and the Denzin & Lincoln discussion. I would say there are three big points in my views, which are expressed, though not optimally, in the paper:
  1. People are just wrong to discuss quantitative vs. qualitative. The real issue is comparable vs. open-ended data gathering. (See also this note on quantitative vs. qualitative.)
  2. I furthermore have trouble seeing the answer to this question as either-or: I continue to feel that the right answer is always "both", generally using a 2-pass method, with the first pass trawling for open-ended data and the second using comparable data.
  3. Still more important than open-ended vs. comparable data, however, is surprise-seeking. I have never seen a justification for this, and yet all applied-science disciplines do it. In textbooks, the testing that engineers are told they need is all about checking for known problems; but the test they all intuitively and practically know is the most important one is simply using the artifact for the first time, with the tacit question: does it work as we expect, or does it behave in some completely unexpected way? You cannot design, and apparently no-one knows how to justify philosophically, a test for the unexpected: it is a paradox. But everyone does it. Petroski's argument about this amounts to saying that our theories are always incomplete. We discover we need a new theory generally when some artifact collapses: not because of impurities, noise in the data, or slight omissions in the equations, but because some fundamentally new phenomenon, which was unimportant in all previous designs, has become important in the new context.

    The best advocates of this point today are the astronomers (so don't imagine that this truth is limited to engineering and applied science). They said before the Pluto mission that what they most hoped for was something utterly unexpected, because this had happened on all the previous planetary missions. The unexpected was the most scientifically important outcome of each mission. And sure enough, it was again for Pluto.

More on surprise-seeking

Open-ended vs. comparable is itself a distinction between two types of data, and of data collection. But it is the intention and mind-set of the researcher that makes point 3 (surprise-seeking) distinct from point 1 (quantitative vs. qualitative, i.e. open-ended vs. comparable data). Some research is about checking predictions; other research, and all exploration, is about looking for something new and unpredicted. This aligns with the contrast between deduction and induction.

In statistics for psychology, there are two uses of Factor Analysis: confirmatory and exploratory (CFA vs. EFA). We may use qualitative research in either way too. For example, in Thematic Analysis we might pre-decide the themes and see to what extent the actual data can or cannot be grouped under them; or we could look for new, emergent themes.
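As a concrete (and entirely invented) illustration of those two mind-sets applied to free-text course feedback, the following Python sketch codes the same responses first in a confirmatory way, against pre-decided themes, and then in an exploratory way, surfacing candidate terms for the researcher to interpret. The responses, theme names, and keyword lists are all hypothetical; nothing here is taken from the paper.

  from collections import Counter

  responses = [
      "The quizzes helped me check my understanding before the exam",
      "I could never find the discussion board, so I gave up on it",
      "Feedback on the weekly quizzes arrived too late to be useful",
  ]

  # Confirmatory-style pass: tally how far the pre-decided themes cover the data.
  predecided_themes = {
      "assessment": ["quiz", "exam", "feedback"],
      "navigation": ["find", "menu", "link"],
  }
  theme_counts = Counter()
  for text in responses:
      lower = text.lower()
      for theme, keywords in predecided_themes.items():
          if any(word in lower for word in keywords):
              theme_counts[theme] += 1

  # Exploratory-style pass: list frequent content words as candidate new themes,
  # for the researcher to read and interpret, not to take as themes in themselves.
  stopwords = {"the", "i", "it", "so", "on", "to", "be", "me", "my", "could"}
  term_counts = Counter(
      word.strip(",.").lower()
      for text in responses
      for word in text.split()
      if word.strip(",.").lower() not in stopwords
  )

  print("Coverage of pre-decided themes:", dict(theme_counts))
  print("Candidate emergent terms:", term_counts.most_common(5))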

The trouble with EFA, of course, is that the statistics come up with factors that are mixtures of items, and these are hard (usually impossible) to interpret in terms of our everyday understanding.
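To make that tangible, here is a minimal sketch, assuming scikit-learn and wholly synthetic questionnaire data, of the exploratory use of factor analysis. The latent traits, item weightings, and sample size are invented for illustration; the point is only that the recovered loadings are blends of items which the researcher must still try (and may fail) to interpret in everyday terms.

  import numpy as np
  from sklearn.decomposition import FactorAnalysis

  rng = np.random.default_rng(0)
  n_students = 200

  # Two hypothetical underlying latent traits, e.g. "engagement" and "anxiety".
  engagement = rng.normal(size=n_students)
  anxiety = rng.normal(size=n_students)

  # Six questionnaire items, each a noisy blend of the two latent traits.
  items = np.column_stack([
      0.9 * engagement + 0.1 * anxiety + rng.normal(scale=0.4, size=n_students),
      0.8 * engagement - 0.2 * anxiety + rng.normal(scale=0.4, size=n_students),
      0.7 * engagement + 0.3 * anxiety + rng.normal(scale=0.4, size=n_students),
      0.2 * engagement + 0.9 * anxiety + rng.normal(scale=0.4, size=n_students),
      -0.1 * engagement + 0.8 * anxiety + rng.normal(scale=0.4, size=n_students),
      0.4 * engagement + 0.6 * anxiety + rng.normal(scale=0.4, size=n_students),
  ])

  # Exploratory factor analysis: no themes decided in advance, just "find factors".
  efa = FactorAnalysis(n_components=2, random_state=0)
  efa.fit(items)

  # Each row is a factor, each column an item. The researcher still has to decide
  # what, if anything, each mixture of items means in everyday terms.
  print(np.round(efa.components_, 2))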

Note again how this issue (the enormous importance of surprise-seeking) cuts across the quantitative vs. qualitative distinction, and applies to both.
