
Martin Lages - Room 413

telephone: 0044 (0)141 330 6842

facsimile: 0044 (0)141 330 4606

e-mail: m.lages@psy.gla.ac.uk

 

Research Projects

 

Social Interaction: A Cognitive Neuroscience Approach

 

Abstract

Early

Staff

N.N. - Postdoctoral RA.

Collaborators

Jamie Hillis, Philippe Schyns, Rob Jenkins, Klaus Kessler and 9 staff at Glasgow University.

Results

[1] Kessler, K., Gordon, L., Kessford, K., & Lages, M. (submitted). Characteristics of motor resonance predict pattern of flash-lag effects in biological motion.

[2] Lages, M., Hillis, J., & Jenkins, R. (in preparation). Egocentric and allocentric perspective of a falling object.

 

 

Bayesian Models of Binocular 3-D Motion Perception

 

Abstract

Psychophysical studies on 3-D motion perception have shown that perceived trajectory angles of a small target travelling in depth are systematically biased. Here, predictions are investigated from Bayesian models that extend existing models of motion-first and stereo-first processing. These statistical models are based on stochastic representations of monocular velocity and binocular disparity input in a binocular viewing geometry. The assumption of noise in these inputs, together with a plausible prior for 3-D motion, leads to testable predictions of perceived trajectory angle and velocity. Results from two experiments are reported, suggesting that disparity rather than motion processing introduces the perceptual bias.

Staff

Martin Lages.

Collaborators

Suzanne Heron - postgraduate student at the University of Glasgow (EPSRC studentship)

Robbe Goris - University of Leuven, Belgium.

Results

Publications

[1] Lages, M., & Heron, S. (2008). Motion and disparity inform Bayesian 3D motion estimation. Proceedings of the National Academy of Sciences USA,

[2] Lages, M. (2006). Bayesian models of binocular 3-D motion perception. Journal of Vision, 6(4), 508-522. http://journalofvision.org/6/4/14/

[3] Lages, M. (2006). Bayesian modelling of perceptual bias. Perception, 34 Suppl., 53.

[4] Lages, M. (2005). Modelling perceptual bias in 3-D motion. Journal of Vision, 6(6), 628a.

 

Software Tools (MatLab)

This MatLab code illustrates the Bayesian models for the integration of motion signals from the left and right eye. A prior for slow motion in depth is combined with likelihoods for velocity and disparity input to derive posterior distributions. The maximum of the posterior indicates the perceived trajectory and velocity of a small target moving in the x-z plane. The simulations are based on an intersection of constraints in a binocular viewing geometry.
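
A stripped-down sketch of the computation (not the distributed code; all numerical values below are assumptions): instead of the full binocular viewing geometry, Gaussian likelihoods are placed directly on the lateral and in-depth velocity components, with the disparity-based in-depth estimate noisier than the lateral one, and they are combined with a zero-centred prior for slow motion.

% Bayesian estimate of 3-D motion from noisy velocity and disparity input
% (illustrative sketch; all numerical values are assumptions)
vx_true = 2.0;  vz_true = 1.0;     % lateral and in-depth velocity components
sigma_x = 0.5;                     % noise of the motion-based (lateral) estimate
sigma_z = 2.0;                     % noise of the disparity-based (in-depth) estimate
sigma_p = 1.5;                     % width of the slow-motion prior

[vx, vz] = meshgrid(-5:0.05:5);                   % velocity grid in the x-z plane
like  = exp(-(vx - vx_true).^2/(2*sigma_x^2)) .* ...
        exp(-(vz - vz_true).^2/(2*sigma_z^2));    % independent Gaussian likelihoods
prior = exp(-(vx.^2 + vz.^2)/(2*sigma_p^2));      % prior favouring slow 3-D motion
post  = like .* prior;                            % unnormalised posterior

[~, idx] = max(post(:));                          % MAP estimate of the velocity vector
vx_hat = vx(idx);  vz_hat = vz(idx);
fprintf('physical angle %.1f deg, perceived angle %.1f deg\n', ...
    atan2(vz_true, vx_true)*180/pi, atan2(vz_hat, vx_hat)*180/pi);

Because the in-depth component is assumed to be the noisier one, the slow-motion prior shrinks it more strongly than the lateral component, so the estimated trajectory is biased towards the frontoparallel plane, which is the kind of bias the experiments above attribute to disparity processing.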

A New Approach to Human Stereo-Motion Integration

Ref.: GR/R61215/01

 

 

Abstract

Early stereo-motion integration and its neural correlate are a matter of debate. Although evidence from single-cell recordings indicates that binocular motion perception occurs early in visual processing, it is still unknown how images from the left and right eye are combined to retrieve motion and depth information. Employing new motion stimuli in carefully designed psychophysical experiments should reveal basic properties of the human visual system and thereby answer a number of fundamental questions about stereo-motion integration. The aim is to develop a computational model of early human stereo-motion processing that is grounded in psychophysical performance and physiological characteristics, and to explore how the human visual system integrates dichoptic and stereoscopic stimuli to extract motion in depth. Simulations of the activation of single cells and cell populations should advance the understanding of human stereo-motion integration. The modelling may provide milestones for the development of efficient software tools that extract motion and depth from dynamic stereoscopic images.

Staff

Alexander Dolia - Postdoctoral RA, Marc Becirsphahic - Programmer (part-time).

Collaborators

Erich Graf - Southampton University, Pascal Mamassian - Glasgow University.

Results

Integration of motion and stereoscopic depth information is essential for the perception of dynamic events in a 3-D environment. Motion and stereo share a similar geometry for inferring the distance between an object and the observer: in both cases multiple images are used to triangulate the object's features, either over time or across the eyes. It is therefore natural to expect that the neural structures involved in motion and stereo processing overlap. Together with my colleagues I have investigated human stereo-motion integration, combining psychophysical methods and computational modelling. We studied early stereo-motion integration [4], spatial and temporal tuning properties of motion in depth [6,8,9], and tested a new model of early motion and disparity integration [4,5,A,B]. In addition, we investigated the effect of prior depth-ordering information on perceived motion direction [2] and made contributions to related projects on image and signal processing [1,9,10].

We successfully tackled the notoriously difficult problem of how the visual system encodes motion and disparity, and we were able to make significant advances. For the first time we measured spatial and temporal thresholds of human 3-D motion perception together in a specifically designed discrimination task. The results suggest band-pass temporal tuning and temporal integration of disparity information rather than mechanisms based on motion or velocity differences.
In another series of experiments we showed that dichoptic motion perception is restricted to the zero depth plane [4,5]. This limitation calls into question the biologically plausible assumption that binocular integration for motion perception occurs early. Simulations of cell activation in computational models of early binocular integration revealed that the hybrid energy model (Qian & Andersen, 1997) predicts opposite motion directions on different depth planes for counterphase flicker stimuli. We developed an alternative architecture for energy-based motion and disparity detection that requires fewer units than the hybrid energy model, and we simulated the activation of single cells and cell populations in MatLab (A). The software tools do not comply with industry standards but illustrate possible applications.

Publications (PDF files)

[1] Dolia, A.N., Lages, M., & Kaban, A. (2003). An adaptive novelty detection approach to low level analysis of images corrupted by mixed noise. Lecture Notes in Artificial Intelligence 2773, 570-577.
[2] Graf, E.W., Adams, W.J., & Lages, M. (2004). Prior monocular information can bias motion perception. Journal of Vision, 4, 427-433.
[3] Lages, M., & Dolia, A. (2004). Who opened Pandora’s box? Book review on ‘Computational Neuroscience of Vision’ by Edmund T Rolls and Gustavo Deco. Cortex, 40, 549-551.
[4] Lages, M., Dolia, A.N., & Graf, E.W. (2003). Dichoptic motion within Panum’s fusional area? Journal of Vision, 3(9), 798a.
[5] Lages, M., Dolia, A., & Graf, E.W. (2007). Dichoptic motion perception limited to depth of fixation? Vision Research, 47, 244-252.
[6] Lages, M., Graf, E.W., & Clark, A.R. (2002). Tuning characteristics of luminance-defined and contrast-defined motion in depth. Perception (Suppl.), 31, 35.
[7] Lages, M., Graf, E.W., & Dolia, A. (2003). Band-pass, low-pass and high-pass tuning to motion in depth. Proceedings of the 6th Tübinger Perception Conference, Kirchentellinsfurt, Germany: Knirsch.
[8] Lages, M., Mamassian, P., & Graf, E.W. (2003). Spatial and temporal tuning of motion in depth. Vision Research, 43, 2861-2873.
[9] Ma, Y., Paterson, H., Dolia, A., Cho, S.-B., Ude, A., & Pollick, F. (submitted). Towards a biologically-inspired representation of human affect. Journal of Neurocomputing.
[10] Niemistö, A., Shmulevich, I., Lukin, V., Dolia, A., & Yli-Harja, O. (in press). Correction of misclassifications using a proximity-based estimation method. EURASIP Journal on Applied Signal Processing.

Software Tools (MatLab)

(A) MatLab code simulates the activation of binocular complex cells for an implementation of Qian and Andersen's (1997) hybrid energy model and for an alternative model based on the combination of non-directional filters (Adelson & Bergen, 1985). The complex-cell activation with maximal output across depth planes is plotted over time. Several solutions based on the fast Fourier transform (FFT) or convolution were implemented; in this solution a filter defined in space-time is convolved across time with a space-time image and then summed over space. The input is a sine wave moving on different depth planes, as defined by a phase shift between the left and right image. Other stimuli moving at different depths, as well as different filters, are easily implemented.
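
As a rough illustration of the convolution approach (not the original code; all numerical values below are assumptions), the following sketch builds a space-time Gabor quadrature pair, convolves it across time with a drifting sine-wave image at every spatial position, sums over space, squares the quadrature outputs, and compares the resulting motion energy for the two opposite directions of motion. The binocular stage, in which the right-eye image is phase-shifted to place the grating on a given depth plane, is omitted.

function demo_motion_energy
% Space-time filtering step of an energy-based motion detector
% (illustrative sketch; all numerical values are assumptions)
dx = 0.05;  dt = 0.005;                    % sampling in space (deg) and time (s)
x  = -2:dx:2;  t = 0:dt:1;
[X, T] = meshgrid(x, t);                   % space-time image; time runs down the rows

f = 1;  tf = 4;                            % spatial (c/deg) and temporal (Hz) frequency
stimR = sin(2*pi*(f*X - tf*T));            % sine wave drifting in +x
stimL = sin(2*pi*(f*X + tf*T));            % the same sine wave drifting in -x

tau = -0.15:dt:0.15;                       % temporal extent of the filter
[XF, TF] = meshgrid(x, tau);
env   = exp(-XF.^2/(2*0.3^2) - TF.^2/(2*0.05^2));   % Gaussian space-time envelope
fEven = env .* cos(2*pi*(f*XF + tf*TF));   % oriented quadrature pair; the temporal
fOdd  = env .* sin(2*pi*(f*XF + tf*TF));   % sign is flipped because conv reverses
                                           % the kernel in time, so +x is preferred

fprintf('motion energy: preferred %.3g, opposite %.3g\n', ...
    spacetimeEnergy(stimR, fEven, fOdd), spacetimeEnergy(stimL, fEven, fOdd));
end

function e = spacetimeEnergy(S, FE, FO)
% convolve across time at each spatial position, sum over space,
% then square and add the quadrature pair outputs
rE = zeros(size(S,1), 1);  rO = rE;
for ix = 1:size(S,2)
    rE = rE + conv(S(:,ix), FE(:,ix), 'same');
    rO = rO + conv(S(:,ix), FO(:,ix), 'same');
end
e = mean(rE.^2 + rO.^2);
end

In the binocular models the left and right space-time images are filtered in this way separately, with the interocular phase shift defining the depth plane, before the monocular responses are combined; the sketch above covers only the monocular filtering stage.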

 

 

Rationality and Inconsistency in Human Choice Behaviour

 

Inconsistencies are usually seen as unsystematic random errors or as indifference between objects, but they probably reveal more about human choice behaviour than previously assumed.

[1] Lages, M., & Dolia, A. (2002). Ear decomposition for pair comparison data. Journal of Mathematical Psychology, 46, 19-39.

[2] Lages, M., Hoffrage, U., & Gigerenzer, G. (unpublished manuscript). Coherence and consistency in binary decisions.

[3] Lages, M. (1999). Algebraic decomposition of individual choice behavior. Materialien aus der Bildungsforschung Nr 63, Max Planck Institute for Human Development, Berlin.

Software Tools

[A] Prolog code (Open Prolog) performs an ear decomposition by sequence on binary choice data sets (encoded as Prolog clauses).
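
The Prolog sources are not reproduced here. As a loosely related MatLab illustration (hypothetical data; this is not the ear decomposition itself), the sketch below encodes a set of binary choices as a 0/1 matrix and counts circular triads, the smallest intransitive cycles and a classic index of inconsistency in pair-comparison data.

% count circular triads in a pair-comparison matrix (illustrative only)
% C(i,j) = 1 if object i was chosen over object j, 0 otherwise
C = [0 1 0 1;
     0 0 1 1;
     1 0 0 1;
     0 0 0 0];                    % hypothetical data: 1>2, 2>3, 3>1, and all beat 4

n = size(C,1);
nTriads = 0;
for i = 1:n
    for j = 1:n
        for k = 1:n
            if C(i,j) && C(j,k) && C(k,i)   % i beats j, j beats k, k beats i
                nTriads = nTriads + 1;
            end
        end
    end
end
nTriads = nTriads/3;              % each cycle is counted once per starting object
fprintf('%d circular triad(s) found\n', nTriads);

For this matrix, objects 1-3 form one intransitive cycle while object 4 is consistently rejected, so the script reports one circular triad.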

 

Sequential Dependency in Visual Discrimination

Psychophysical tasks are typically run with many repetitions to establish a psychometric function. An intelligent observer is likely to exploit redundancies in the task, such as the repetition of the reference/standard in a two-alternative forced-choice task or the midpoint of the stimulus range in the method of single stimuli.

[1] Treisman, M., & Lages, M. (under review). Criterion setting and sensory integration.

[2] Lages, M., & Treisman, M. (under review). Criterion setting and discrimination learning.

[3] Lages, M., & Paul, A. (2006). Long-term visual memory for spatial frequency? Psychonomic Bulletin & Review, 13(3), 486-492.

[4] Lages, M., & Treisman, M. (1998). Spatial frequency discrimination: Visual long-term memory or criterion setting? Vision Research, 38(4), 557-572.

Collaborators

Ingo Fründ and Felix Wichmann - TU Berlin, Bernstein Centre.