Seminar Series

Individual Differences Psychology: The end of an era. Where do we go from here?

With the deaths of Raymond Cattell and Hans Eysenck in 1997, it can be asserted that the research methodology and philosophy upon which much of their own and their associates’ work was predicated have also come to a natural end. I would like to propose that the 1940s through to the early 1990s saw some of the most dramatic increases in our knowledge and conceptualisation of individual differences and in the methodologies for exploring them. However, in the field of personality research, it was already clear by the early 1980s that the area was stagnating. Our understanding and investigation of cognitive abilities was likewise virtually stagnant by this period. Nevertheless, two new theoretically driven research domains came into existence during the 1980s: the analysis of human performance through chronometric tasks, and the investigation of possible biological bases for personality and intelligence. This work was pioneered by Arthur Jensen and Ian Deary (chronometrics), and by Eysenck, Gray, and to a small extent myself (the biological correlates of cognitive ability and personality). However, it became clear to some working in the area, including Hans Eysenck, that this work could only ever be exploratory. Apart perhaps from Eysenck’s and, more especially, Gray’s theories of personality, there was no substantive theory for many of the observed relationships between psychometric, chronometric, or biological measures. Instead, many speculations were brought forward at different points in time to explain sometimes completely polarised results from successive experiments. Some interesting causal conceptual frameworks were also brought into the discussion around this period: for example, Weiss’s theory of intelligence based upon quantum interactions at the membrane level, Robinson’s dynamic modulation arousal theory, and Lehrl and Fischer’s Basic Information Processing parameter. 
The work of Kosslyn, based upon the “working memory” concept of Baddeley and his colleagues, was also very influential. Further, as Eysenck (1997) himself noted, continuing psychometric investigations into the taxonomy of personality or intelligence were no longer of any real value to the greater understanding of either. In 1997, a paper appeared in the British Journal of Psychology, authored by a Sydney-based philosopher of science, Joel Michell. This paper defined precisely the constituent properties of a quantitative science and made crystal clear what is required of both the theory and the measurements made within such a science. With specific reference to measurement, some of these arguments had been propounded earlier by three psychometricians: Wim van der Linden (1994), Ben Wright (1992), and David Andrich (1988). Apart from a few individuals around the world, the logic and theory of axiomatic, fundamental measurement has never even entered the consciousness of many research-methods lecturers and methodologists. Suffice it to say that most areas of psychology, including the vast majority of work within individual differences research, do not accord with the constituent properties of a quantitative science. The essence of scientific measurement is the “unit of measurement”. In 1998, Paul Kline’s new book finally sealed the fate of conventional psychometrics (and a large proportion of individual differences research) as having anything to do with the practice of a “quantitative science”. Given the definition of measurement, and the necessity of a unit of measurement against which to make comparisons of “objects”, it can be concluded that without a defined or proposed standard unit there can be only ordinal measurement. This fact was recognised as far back as the 1940s/50s by Louis Guttman, who accepted this position as the only tenable one for psychological measurement. 
My talk will put some flesh on the bones of the above, briefly explain Michell’s logic and the axioms of quantitative measurement, exemplify with some empirical data the kinds of problems awaiting anyone doing conventional work on individual differences, and outline how and where the new research is now going to take place, IF researchers in this area are to aspire to scientific research rather than the mere practice of numerical or qualitative methodologies. For example, within the realm of measurement, psychologists now have Rasch or 1-parameter IRT models available (as a means of attaining a probabilistic realisation of Luce and Tukey’s additive conjoint measurement). Note, however, that my own work has shown that the unit of measurement in Rasch models is quite arbitrary, although a Rasch scale of an attribute will give the researcher a probabilistic equal-interval, ratio scale of measurement of the “latent attribute”. From Michael Maraun’s arguments (based upon Wittgenstein’s propositions), it is clear that assigning meaning to a hypothesised unit is not an empirical task, but one that demands prior deductive theorising. Within the realm of theory generation, Jeffrey Gray’s work represents the most useful causal theory of personality to date. However, we must also consider the work of the computational neuroscientists Quartz and Sejnowski, and the now burgeoning body of evidence concerning the brain as a non-stationary representational system. Finally, stepping back from conventional reductionist thinking, the recent work by Stephen Wolfram and John Holland on complexity and cellular automata, and the concept of non-computability introduced by Roger Penrose, must surely give any psychological scientist cause to question whether the entire approach taken by conventional investigations into human psychology is fatally flawed. 
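As a minimal illustration of the Rasch (1-parameter IRT) model mentioned above: the probability of a correct response is modelled as depending only on the difference between a person’s ability and an item’s difficulty, both expressed in logits. The sketch below uses invented parameter values purely for illustration; the additive structure in the exponent is what links the model to Luce and Tukey’s conjoint measurement, and the final comparison shows one sense in which the scale’s origin is a matter of convention.

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a person of ability `theta`
    answers an item of difficulty `b` correctly. Only the
    difference (theta - b), on the logit scale, enters the model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Invented values for illustration only (logit scale).
p_easy = rasch_prob(theta=1.0, b=-1.0)  # able person, easy item -> high probability
p_hard = rasch_prob(theta=1.0, b=3.0)   # same person, hard item -> low probability

# Shifting every ability and difficulty by the same constant leaves
# all probabilities unchanged: the origin of the scale is conventional.
assert abs(rasch_prob(1.0, 0.5) - rasch_prob(1.0 + 2.0, 0.5 + 2.0)) < 1e-12
```

A person whose ability exactly matches an item’s difficulty has a response probability of 0.5, which is the conventional anchoring of the logit scale rather than an empirically given unit.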
In short, it is proposed that the latest era of conventional individual differences research has now reached its in-built limits of explanatory coherence. Further, it is proposed that the only productive way forward is one based upon the principles of a quantitative science, whether from a primarily reductionist or an emergent-complexity standpoint.