Last changed 8 June 2002.
This is a WWW document maintained by Steve Draper, installed at http://www.psy.gla.ac.uk/~steve/localed/mrodd.html.


Prediction of failure in the DCS Level 1 programming course

This page gives my personal highlight conclusions from a final-year Psychology undergraduate project by Matt Roddan, completed in March 2002 and titled "The determinants of student failure and attrition in first year computing science". The full report is available in pdf format.

Its main purpose was to look for factors that might predict which students were most likely to fail an introductory computer programming course, with a view to targeting staff intervention in time in future years.

The full value of this study may only emerge from the future work for which it lays the groundwork (and for which I already have a student committed), but it is already of interest for at least the following reasons. These are my own, possibly idiosyncratic, views on what is important about it, and they are a mixture of points of local and of more general interest.

  1. It makes the point that most of the available literature in the area looks at things at the level of whole organisations or countries. However, much of the action, both the possible corrective interventions and the active causal factors, lies not at the level of countries or universities but at the level of the department (and hence the subject). Consequently research should, like this study, be focussed at this level. The reasons for this include:


  2. Of the many factors tested, including many measures of attendance, few showed a statistically significant relationship with exam performance, and the few that did had correlations too low to explain much of the variance. They will therefore not be of great practical use for identifying students at risk of failing with a view to early intervention.

  3. The sole exception was the student's own self-estimate of how well they understood the material (correlation 0.7; the next biggest correlation was 0.39). A short arithmetic note after this list spells out what these figures mean in terms of variance explained.

  4. Learning computer programming really does seem to require understanding, which is the defining mark of deep learning as opposed to shallow learning. Those who did not understand the material, particularly the early material, gradually "lost it" and did poorly. Effort and hours spent may or may not be necessary, but were no substitute for actual, achieved understanding.

  5. Late revision does not help (unlike in many other subjects, where that strategy succeeds): understanding as you go seems to be crucial.

  6. Most staff believe that previous teaching in computing (e.g. at school) is of no benefit. This project showed some indications both for and against this view, suggesting that another look at the issue may be worthwhile.

  7. The (new) lab exam in the course studied does not seem to test what it was intended to test, contrary to the original expectations and intentions of the course organisers.

  8. An attempt was made to get students to reflect on their time management by filling in a personal timetable showing how their time was spent. This largely failed as a data-gathering instrument for the project because of a very low response rate. Yet interviews showed that, for at least one student, it was a powerful and beneficial prompt to reflection.
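
A note on the correlations in points 2 and 3. The usual rule of thumb is that a predictor's squared correlation gives the proportion of variance in the outcome that it accounts for. The short Python sketch below is purely an illustration of that arithmetic, using the two figures quoted above and assuming they are ordinary correlations with exam performance (the report itself should be consulted for the exact statistics).

    # Illustrative arithmetic only: the squared correlation (r^2) gives the
    # proportion of variance in exam marks that a single predictor explains.
    # The two correlations are the figures quoted in point 3 above.

    def variance_explained(r):
        """Fraction of outcome variance accounted for by a correlation r."""
        return r * r

    predictors = {
        "self-estimate of understanding": 0.70,
        "next biggest predictor": 0.39,
    }

    for name, r in predictors.items():
        print(f"{name}: r = {r:.2f}, r^2 = {variance_explained(r):.2f}")

    # Prints (approximately):
    #   self-estimate of understanding: r = 0.70, r^2 = 0.49  (about half the variance)
    #   next biggest predictor: r = 0.39, r^2 = 0.15  (about a seventh)

On this reading, even the best single predictor leaves about half the variance in exam performance unexplained, which is why the other measures in point 2 offer so little leverage for early intervention.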
