Last changed 31 Oct 2001 ............... Length about 2,500 words (16,000 bytes).
This is a WWW document maintained by Steve Draper, installed at

Web site logical path: [] [~steve] [grumps] [this page]

Grumps data mining

Here are some notes on ideas towards getting a data mining and interpretation initiative going within Grumps. Within Grumps as a whole, we need to close the circle between collecting data, discovering interesting information, answering questions, and (re)designing data collection dynamically. This is a first small "study" to get started on this side of the overall Grumps subject.


Overall Plan

Our current plan is:

Message 1. [2 Aug. 2001]

I've been chatting to Phil, and here, for the email archives, is a possible way to go for Grumps. The biggest "idea" is at (3) below.


Our current "plan" for progressing the data mining and interpretation aspect is led by Julie, who over the summer will be doing data cleaning and looking at interpreting the data we have from study 1. Then we have Quintin signed up as a client with a genuine independent interest, over the next year, in interpreting data on level 1 DCS students from multiple sources. Finally, we agreed to collaborate on putting together a small but high-value reading list, to share some baseline knowledge of mining.

The problem

But there are a lot of specialists in data mining, and more than one kind of mining, each with its own specialists. Are we sensible to compete with them, given that none of us knows much, and that our main resource in Grumps is Murray, who has not declared a decision to change his main research interest to this? Perhaps we need in the medium term (not this month) to develop a strategy about this.

An idea

Perhaps we could decide to major in a different angle on this, with three parts.

Smaller features

We can probably get Karen Renaud interested in work closely related to this, perhaps exploring ways to interpret the data we've warehoused.

We might look at a strategic alliance with other researchers who are relatively expert in mining: e.g. get them to analyse our data, while we adapt our collection mechanisms to suit them better.

Phil and I will mull over this. We can consider progressing this a bit more in a month or two.

Message 2. [3 Sept. 2001]

I don't have time to consider this further right now, but some of you should probably take note of a person I met last week at a workshop: Jem Rashbass. I introduced myself and warned him that Grumps would probably be in contact soon.

He creates CAL for medical students, and in some ways seems to have Malcolm's attitude to this: collecting huge amounts of data, mining it, and presuming this must yield gold any time now.

In his talk, I remember: MIT has put all its "materials" online; his ambition is to build what he calls a "learning management system" (I think he already has a version running) that adds in what the materials alone don't: personal records, matching curriculum demands to materials ... and generally keeping track of every little thing students do. I think he's nuts, but you probably don't.

Also interesting: he automated a substantial test for some part of the medical course (anatomy?), having already built 3D visualisation for anatomy to make it more learnable. He gave a rushed but interesting account of how students started to cheat on this by collaborating, and how he used the detailed records to show when this happened, backed up by webcams (installed after the cheating had got established). These turned out to be important in showing that the fastest time wasn't due to cheating after all, but was by the best student.

So: we probably want to see and hear more of this guy; and discuss educ-related data mining with him.

Message 3. [26 Sept. 2001]

We will aim to submit a paper by 3 Dec to the EASE conference at Keele, and use this as a self-imposed first milestone in getting this aspect going. In fact, we'll aim to submit two things: at least one paper, and a "workshop" type non-compliant proposal for a session where we bring along a small bit of data and lead/provoke a discussion on the different ways it could be interpreted. (David Budgen was taken with this suggestion when I made it to him; I've seen it done successfully a couple of times at other kinds of conference. A pretty small bit of data is more than enough; people get actively involved; and the presenters get lots of alternative views and suggestions about approaches.)

The subject will be: all the data we gathered in the first study. The aim:

We'll form a subgroup within Grumps to pursue this. The first meeting is on Friday in Julie's office, for me and Murray at least, to start getting familiar with the data and imagining what we might want to ask. At the next meeting, in about 10 days, we'll invite Peter Hay to come and advise/lead us in doing analyses and/or using the mining tool Julie has now installed.

Dimensions of data investigation

Triggered by Phil's starting notion and diagram, some important ideas emerged in the meeting of 18 Oct. These follow from trying to understand or frame the process we are muddling into.

My personal view is that Phil's first pass at this, and his diagram, are hopeless because they mix several quite different types of thing: in my view there are several interacting, but logically independent, dimensions or aspects:

  1. What is the activity and what to call it?
    Experiment / investigation / study / information need.
    It isn't an experiment (as we called it in Grumps) because this data's collection in particular was NOT designed to answer a pre-conceived question or hypothesis. We used "study" already. We might call it an investigation.

    Note too that we need one set of terms to describe software configurations in Grumps (particular designed deployments of data collection units), and another term to describe attempts to analyse the data, because these two activities can be (and, in this our first little example activity, are) separate and not united by a single goal or prior question.

    But the real insight is the quite close analogy to "information need" as used in the IR field to distinguish, in Mizzaro's terms, between real, perceived, expressed, and formal information needs.


  2. The people, the actions, the roles.
    • Main group of information-need Inquirers; perhaps defined by their question i.e. information need (e.g., for us here and now, "Why do students drop out of or fail the level 1 CompSci course?").
    • Consultants, who have related information needs now or in the past, and who will therefore have stimulating related questions and answers (for us e.g. Bill Patrick, Alison Mitchell, Richard Thomas).
    • Intermediaries (this is IR/ library terminology): technical experts who can help you formulate your search, operate the resources, tell you where to look and what to ask, run your searches for you. (For us: possibly Peter Hay?)

  3. Top down / bottom up (TD/BU).
    Whether an investigation is organised from the question to the data collection, or vice versa. In real life, that is in terms of the real historical chronology of human actions, there is probably always a mixture. But in logic, and so in the structure of the arguments published and in the explicit plans people try to organise their activities with, there is a big difference.

    Classical database work is at the TD extreme: the structure of the data is designed before any is collected; the analysis methods are all to do with getting that design right; and the technology is famously poor at dealing with the unexpected questions, needs, and cases that crop up after design time. Data mining is at the opposite BU end: you have data collected for quite other purposes; now what, if anything, can you infer or extract from it? Grumps is about 3/4 of the way towards BU, but unlike both the others it addresses being dynamic: focus first on instrumenting something rather than on having a prior question, but then organise to be able to change the collection as easily as possible, presumably under the impact of changing (understandings of) questions you would like the data to answer.

    Presumably, we should really be able to characterise any particular investigation on this TD/BU dimension, and be able to say how to use Grumps for each case and for those in between.

  4. Meaning
    I want to suggest, as a major lesson for me already from this study, that a very important issue, aspect, dimension here is that of restoring meaning to the data collected. This is a first and essential step before any other analysis can be done. There is a classic distinction between data / information / knowledge (/ wisdom). And there is a notion of doing data cleaning before analysis. But I want to add enormously more emphasis here.

    What we have is data collected mainly because it was there, and so collected in terms of the software on which it is parasitic: e.g. keystrokes, and "window events" which turn out not to be what users see as windows but artefacts of the Microsoft software architecture. The first job is to re-attach human meaning to it wherever possible.

    Examples: UAR timestamps are missing, or rather their most significant parts are; can we recover these by inference from other logs? Some recorded times are only relative to the start of a session: can we calculate absolute times by consulting the separate log of concentrator startups/sessions? On a broader scale, we may have login IDs, but these are only valid for a year: the table of users must be captured by us within this time limit. IDs for tutors and students overlap (they re-use the same ID space), so we must save both tables and also get a record of whether a record relates to a tutor or to a student (from the machine ID?: lab PC vs. handheld for tutor).
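    The session-relative-time example can be sketched as a tiny join against the concentrator's startup log. All table and field names here are hypothetical, just to show the shape of the repair; the real UAR and concentrator logs will differ.

```python
# A minimal sketch (hypothetical tables and field names) of restoring
# absolute timestamps to events recorded only relative to session start,
# by joining each event to the concentrator's session-startup log.

from datetime import datetime, timedelta

# Hypothetical concentrator startup log: session id -> absolute start time.
session_starts = {
    "sess-001": datetime(2001, 10, 18, 9, 0, 0),
    "sess-002": datetime(2001, 10, 18, 14, 30, 0),
}

# Hypothetical UAR events whose times are seconds since session start.
events = [
    {"session": "sess-001", "relative_secs": 75, "action": "keystroke"},
    {"session": "sess-002", "relative_secs": 12, "action": "window event"},
]

def restore_absolute_times(events, session_starts):
    """Attach an absolute timestamp to each event whose session start
    time is known; leave the event unchanged otherwise."""
    restored = []
    for ev in events:
        ev = dict(ev)  # don't mutate the raw record
        start = session_starts.get(ev["session"])
        if start is not None:
            ev["absolute_time"] = start + timedelta(seconds=ev["relative_secs"])
        restored.append(ev)
    return restored

for ev in restore_absolute_times(events, session_starts):
    print(ev["session"], ev.get("absolute_time"), ev["action"])
```

    The same pattern (recover meaning by consulting a second log captured for other reasons) applies to the missing most-significant timestamp parts.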

    I want to argue that most of this job is about recovering meaning independently of the particular questions and information needs. It's a separate job or stage in our process. It is often not informed by what we think we want to know, but just by what is going on in the situation that generates the data, i.e. the human understanding of that situation (context), which the software doesn't understand but the people do. And this meaning can and should be reconstructed first, early on, partly in order to allow BU and data mining spotting of new patterns.

    In general, I believe this amounts to writing an ER diagram of the (human-informed) situation in which the data is generated, relating the data actually collected to that diagram, and then as far as possible arranging for extra data to be acquired in order to relate the data to the entities that are in the diagram and are humanly meaningful. This amounts to capturing enough from the context to restore meaning; and using analysis such as ER diagrams as part of the method for doing this. Thus we do analysis after the design of the data collection, while classically it is done in advance; although we may then have to change our collection or alternatively do mass data restructuring before any real analysis guided by our information need/questions can be done.

    I want to argue for the independence of recovering meaning for Grumps data (i.e. converting data into information) from the information need / Question. But the retort to this is: well, we think perhaps the faculty the student is in might be predictive of failure (within our Question), so we redo our data capture to record or retrieve that. But conceiving that, and executing it, relies on what all the humans involved know, but the software does not, about this domain, independently of the particular Question. So perhaps what we want is to view ER elicitation as part of this process step: that is, retrieving meaning not only from the context but from the human heads. This analysis, classically done as design elicitation, will have to be part of the Grumps process: eliciting domain knowledge as part of setting up an "experiment", i.e. an investigation deploying Grumps collections.
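    The tutor/student ID overlap mentioned above gives a concrete instance of relating raw records to humanly meaningful entities: the raw login ID alone is ambiguous, and only domain knowledge (handhelds were used by tutors, lab PCs by students) resolves it. A minimal sketch, with all names and tables hypothetical:

```python
# A minimal sketch (all names hypothetical) of re-attaching human meaning
# to raw records: mapping a machine-level event onto the entities of an
# ER model of the situation (Person with a role, Machine with a kind).

# Captured within the year the login IDs are valid: separate user tables
# for students and tutors, whose ID spaces overlap.
students = {"u101": "Student A", "u102": "Student B"}
tutors = {"u101": "Tutor X"}  # the same ID is re-used for a tutor

# Machine table: the kind of machine tells us which table to consult.
machines = {"pc-17": "lab PC", "hh-03": "handheld"}

def resolve_person(login_id, machine_id):
    """Disambiguate an overlapping login ID using domain knowledge the
    software lacks: handhelds were used by tutors, lab PCs by students."""
    kind = machines.get(machine_id)
    if kind == "handheld":
        return ("tutor", tutors.get(login_id))
    if kind == "lab PC":
        return ("student", students.get(login_id))
    return ("unknown", None)

print(resolve_person("u101", "pc-17"))  # same ID, resolved as a student
print(resolve_person("u101", "hh-03"))  # same ID, resolved as a tutor
```

    The point of the sketch is that nothing in the raw event stream says which reading is right; the resolution rule itself is elicited domain knowledge, exactly the kind of thing the ER analysis should capture.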

  5. Time scale.
    The time scale over which the analysis needs to be done and re-displayed. In classical data mining this is years; but for detecting students at risk of failing in future it must be a month or two; and for some LSS applications it might be hours or minutes. Also, capturing enough additional contextual data to give meaning usually has a time limit to it.
