Martin Lages  Room 413
telephone: 0044 (0)141 330 6842
facsimile: 0044 (0)141 330 4606
email: m.lages@psy.gla.ac.uk
Research
Projects 
Social Interaction: A Cognitive Neuroscience Approach
Abstract Early
Staff N N, Postdoctoral RA. Collaborators Jamie Hillis, Philippe Schyns, Rob Jenkins, Klaus Kessler and 9 staff at Glasgow University.
Results
[1] Kessler, K., Gordon, L., Kessford, K., & Lages, M. (submitted). Characteristics of motor resonance predict pattern of flash-lag effects in biological motion.
[2] Lages, M., Hillis, J., & Jenkins, R. (in preparation). Egocentric and allocentric perspective of a falling object.
Abstract Psychophysical studies of 3D motion perception have shown that perceived trajectory angles of a small target travelling in depth are systematically biased. Here, predictions from Bayesian models are investigated that extend existing models of motion-first and stereo-first processing. These statistical models are based on stochastic representations of monocular velocity and binocular disparity input in a binocular viewing geometry. The assumption of noise in these inputs, together with a plausible prior for 3D motion, leads to testable predictions of perceived trajectory angle and velocity. Results from two experiments are reported suggesting that disparity rather than motion processing introduces the perceptual bias.
Staff Martin Lages. Collaborators Suzanne Heron, PG at University of Glasgow (EPSRC studentship); Robbe Goris, University of Leuven, Belgium.
Results Publications
[1] Lages, M., & Heron, S. (2008). Motion and disparity inform Bayesian 3D motion estimation. Proceedings of the National Academy of Sciences USA.
[2] Lages, M. (2006). Bayesian models of binocular 3D motion perception. Journal of Vision, 6(4), 508–522. http://journalofvision.org/6/4/14/
[3] Lages, M. (2006). Bayesian modelling of perceptual bias. Perception, 34 Suppl., 53.
[4] Lages, M. (2005). Modelling perceptual bias in 3D motion. Journal of Vision, 6(6), 628a.
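The central computation of such a model can be sketched in a few lines. The Python snippet below is a simplified stand-in for the MatLab tools, not the published model: the small-angle viewing geometry, noise levels, and prior width are illustrative assumptions. Gaussian likelihoods for the left- and right-eye velocities are combined with a zero-mean "slow motion" prior, and the maximum of the posterior gives the perceived trajectory.

```python
import numpy as np

# Minimal sketch of Bayesian 3D motion estimation; all parameters are
# illustrative assumptions, not fitted values.
# Small-angle geometry: each eye's retinal velocity is a linear
# combination of lateral (vx) and in-depth (vz) target velocity.
a = 0.1                           # assumed geometry factor
true_v = np.array([1.0, 2.0])     # target velocity (vx, vz)
obs_l = true_v[0] - a * true_v[1]   # left-eye retinal velocity (noiseless)
obs_r = true_v[0] + a * true_v[1]   # right-eye retinal velocity (noiseless)

sigma = 0.05     # assumed sensory noise s.d. per eye
sigma_p = 1.0    # s.d. of zero-mean prior favouring slow motion

# Evaluate the log posterior on a grid of candidate (vx, vz)
vx, vz = np.meshgrid(np.linspace(-5, 5, 401), np.linspace(-5, 5, 401))
log_like = -((obs_l - (vx - a * vz)) ** 2 +
             (obs_r - (vx + a * vz)) ** 2) / (2 * sigma ** 2)
log_prior = -(vx ** 2 + vz ** 2) / (2 * sigma_p ** 2)
i = np.unravel_index(np.argmax(log_like + log_prior), vx.shape)
vx_map, vz_map = vx[i], vz[i]

angle = np.degrees(np.arctan2(vz_map, vx_map))  # perceived trajectory angle
speed = np.hypot(vx_map, vz_map)                # perceived speed
# The prior shrinks the poorly constrained in-depth component (vz), so
# the perceived trajectory is biased away from the true angle.
```

Because the in-depth component is only weakly constrained by the two retinal velocities, the slow-motion prior pulls the MAP estimate mostly along vz, reproducing the qualitative trajectory bias described above.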
Software Tools (MatLab)
This MatLab code illustrates the Bayesian models for the integration of motion signals in the left and right eye. A prior for slow motion in depth is combined with likelihoods for velocity and disparity input to derive posterior distributions. The maximum of the posterior indicates the perceived trajectory and velocity of a small target moving in x-z space. The simulations are based on intersection of constraints in a binocular viewing geometry.

New Approach to Human Stereomotion Integration (Ref.: GR/R61215/01)
Abstract Early stereomotion integration and its neural correlate is a debated issue. Although evidence from single-cell recordings indicates that binocular motion perception occurs early in visual processing, it is still unknown how images from the left and right eye are combined to retrieve motion and depth information. Employing new motion stimuli in carefully designed psychophysical experiments should reveal basic properties of the human visual system, thus providing answers to a number of fundamental questions about stereomotion integration. The aim is to develop a computational model of early human stereomotion processing that is grounded in psychophysical performance and physiological characteristics, and to explore how the human visual system integrates dichoptic and stereoscopic stimuli to extract motion in depth. Simulations of the activation of single cells and cell populations should advance the understanding of human stereomotion integration. The modelling may provide milestones for the development of efficient software tools that extract motion and depth from dynamic stereoscopic images.
Staff Alexander Dolia, Postdoctoral RA; Marc Becirsphahic, Programmer (PT). Collaborators Erich Graf, Southampton University; Pascal Mamassian, Glasgow University.
Results Integration of motion and stereoscopic depth information is essential for the perception of dynamic events in a 3D environment. Motion and stereo share a similar geometry for inferring the distance between an object and the observer: in both cases multiple images are used to triangulate the object's features, either over time or across eyes. It is therefore natural to expect that the neural structures involved in motion and stereo processing overlap. Together with my colleagues I have investigated human stereomotion integration, combining psychophysical methods and computational modelling.
We studied early stereomotion integration [4], spatial and temporal tuning properties of motion in depth [6,8,9], and tested a new model of early motion and disparity integration [4,5,A,B]. In addition, we researched the effect of prior depth-ordering information on perceived motion direction [2] and made contributions to related projects on image and signal processing [1,9,10]. We successfully tackled the notoriously difficult problem of how the visual system encodes motion and disparity, and were able to make significant advances. For the first time we measured spatial and temporal thresholds of human 3D motion perception together in a specifically designed discrimination task. The results suggest band-pass temporal tuning and temporal integration of disparity information rather than mechanisms based on motion or velocity differences.
Publications (pdf files)
[1] Dolia, A.N., Lages, M., & Kaban, A. (2003). An adaptive novelty detection approach to low-level analysis of images corrupted by mixed noise. Lecture Notes in Artificial Intelligence, 2773, 570–577.
Software Tools (MatLab)
(A) This MatLab code simulates the activation of binocular complex cells for an implementation of Qian and Andersen's (1997) hybrid energy model and an alternative model based on the combination of non-directional filters (Adelson & Bergen, 1985). Complex cell activation with maximal output across depth planes is plotted over time. Several solutions based on fast Fourier transform (fft) or convolution were implemented. In this solution a filter defined in space-time is convolved across time with a space-time image and then summed over space. The input is a sinewave moving at different depth planes, as defined by a phase shift between the left and right image. Other stimuli moving at different depths, as well as different filters, are easily implemented.
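As a rough illustration of the energy-model idea (not the project's MatLab tools), the Python sketch below correlates a drifting sinewave with a quadrature pair of space-time Gabor filters and sums the squared outputs, the standard motion-energy computation of Adelson and Bergen. Filter parameters are made up, and for brevity it evaluates a single filter at one position rather than a full convolution.

```python
import numpy as np

# Motion-energy sketch after Adelson & Bergen (1985): a quadrature
# pair of space-time oriented filters, squared and summed, gives a
# direction-selective "complex cell" response. Parameters are
# illustrative; this is one filter at one position, not a convolution.
nx, nt = 64, 64
X, T = np.meshgrid(np.arange(nx), np.arange(nt), indexing="ij")

sf, tf = 0.1, 0.1   # spatial / temporal frequency (cycles per sample)
stim = np.sin(2 * np.pi * (sf * X - tf * T))   # sinewave drifting rightward

def motion_energy(direction):
    """direction = -1 prefers rightward motion, +1 leftward."""
    # Gaussian envelope localises the filter in space and time
    env = np.exp(-((X - nx / 2) ** 2 + (T - nt / 2) ** 2) / (2 * 8.0 ** 2))
    e = 0.0
    for phase in (0.0, np.pi / 2):   # quadrature pair
        filt = env * np.cos(2 * np.pi * (sf * X + direction * tf * T) + phase)
        e += np.sum(stim * filt) ** 2
    return e

rightward = motion_energy(-1)
leftward = motion_energy(+1)
# The filter matched to the stimulus direction responds far more strongly.
```

Feeding the left- and right-eye images through such energy units at different interocular phase shifts is the basic step that the hybrid model described above builds on.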
Rationality and Inconsistency in Human Choice Behaviour
Inconsistencies are usually seen as unsystematic random errors or as indifference between objects, but they probably reveal more about human choice behaviour than previously assumed.
Publications
[1] Lages, M., & Dolia, A. (2002). Ear decomposition for pair comparison data. Journal of Mathematical Psychology, 46, 19–39.
[2] Lages, M., Hoffrage, U., & Gigerenzer, G. (unpublished manuscript). Coherence and consistency in binary decisions.
[3] Lages, M. (1999). Algebraic decomposition of individual choice behavior. Materialien aus der Bildungsforschung Nr. 63, Max Planck Institute for Human Development, Berlin.
Software Tools
[A] Prolog code (Open Prolog) performs an ear decomposition by sequence on binary choice data sets (encoded as Prolog clauses).
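The ear decomposition itself is not reproduced here, but a much simpler consistency check on the same kind of data, counting circular (intransitive) triads in complete pair-comparison data, can be sketched in Python. The function name and the dictionary encoding are invented for illustration.

```python
from itertools import combinations

# Count circular (intransitive) triads a > b > c > a in complete
# pair-comparison data: a basic inconsistency measure from the
# pair-comparison literature, not the ear decomposition itself.
def circular_triads(prefers):
    """prefers[(a, b)] is True iff a was chosen over b, for a < b."""
    items = sorted({x for pair in prefers for x in pair})
    count = 0
    for a, b, c in combinations(items, 3):
        edges = (prefers[(a, b)], prefers[(b, c)], prefers[(a, c)])
        # A triad is circular iff it forms a 3-cycle in either direction
        if edges in ((True, True, False), (False, False, True)):
            count += 1
    return count
```

A transitive (weak-order) data set yields zero circular triads; each cycle such as a > b > c > a adds one, so the count grows with the observer's inconsistency.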
Sequential Dependency in Visual Discrimination
Psychophysical tasks are typically run with many repetitions to establish a psychometric function. An intelligent observer is likely to exploit redundancies in the task, such as repetition of the reference/standard in a two-alternative forced choice or the midpoint of the stimulus range in the method of single stimuli.
[1] Treisman, M., & Lages, M. (under review). Criterion setting and sensory integration.
[2] Lages, M., & Treisman, M. (under review). Criterion setting and discrimination learning.
[3] Lages, M., & Paul, A. (2006). Long-term visual memory for spatial frequency? Psychonomic Bulletin & Review, 13(3), 486–492.
[4] Lages, M., & Treisman, M. (1998). Spatial frequency discrimination: Visual long-term memory or criterion setting? Vision Research, 38(4), 557–572.
Collaborators Ingo Fründ and Felix Wichmann, TU Berlin, Bernstein Centre.
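The flavour of such sequential effects can be shown with a toy observer. This is not the published criterion-setting model: the update rule and parameters below are invented for illustration. The decision criterion is simply an exponential moving average of recent stimulus values, so the same stimulus can be judged differently depending on the preceding trials.

```python
def judge(stimuli, lam=0.5, start=0.0):
    """Classify each stimulus as 'high' or 'low' against a drifting
    criterion that tracks recent inputs (exponential moving average)."""
    crit, responses = start, []
    for s in stimuli:
        responses.append("high" if s > crit else "low")
        crit = (1 - lam) * crit + lam * s  # criterion shifts toward s
    return responses

# The same final stimulus (0.6) is judged 'low' after large stimuli
# but 'high' after small ones: a sequential dependency.
after_large = judge([1.0, 1.0, 1.0, 0.6])
after_small = judge([0.0, 0.0, 0.0, 0.6])
```

Averaging over many such trials would fold this history dependence into the measured psychometric function, which is why repetition of the reference or midpoint matters.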
