Cognitive Neuroscience Talks

One sound, two percepts: Predicting future speech perception from brain activation during audiovisual exposure

Lip reading provides vital information that enhances auditory understanding. In the McGurk effect, the addition of the visual modality actively alters the identity of the auditory percept. Bertelson et al. [1] showed that repeated presentation of an ambiguous auditory stimulus (/a?a/), dubbed onto a video of a speaker pronouncing either /aba/ or /ada/, biased subsequent perception of the same ambiguous sound presented in isolation (cross-modal recalibration). The present study investigated the neural basis of this phenomenon with fMRI and revealed a network of brain regions whose activation during audiovisual exposure predicted listeners' subsequent tendency to interpret ambiguous speech stimuli. Furthermore, using pattern classification techniques, we demonstrate the feasibility of predicting the perceptual interpretation of physically identical ambiguous stimuli on a trial-by-trial basis.
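The trial-by-trial decoding idea can be illustrated with a minimal sketch. This is not the study's actual analysis pipeline; it uses synthetic data and a simple nearest-centroid classifier with leave-one-trial-out cross-validation as a stand-in for the pattern classification techniques mentioned above. All names, dimensions, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI" data: 40 trials x 50 voxels, two percept classes.
# Illustrative assumption only, not the study's real data or feature set.
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)           # 0 = /aba/ percept, 1 = /ada/ percept
signal = np.where(labels[:, None] == 0, 0.8, -0.8)  # weak class-specific activation
patterns = signal + rng.normal(size=(n_trials, n_voxels))

def predict_loo(patterns, labels):
    """Leave-one-trial-out nearest-centroid classification (a minimal MVPA stand-in)."""
    preds = np.empty_like(labels)
    for i in range(len(labels)):
        train = np.arange(len(labels)) != i
        c0 = patterns[train & (labels == 0)].mean(axis=0)
        c1 = patterns[train & (labels == 1)].mean(axis=0)
        # Assign the held-out trial to the nearer class centroid.
        d0 = np.linalg.norm(patterns[i] - c0)
        d1 = np.linalg.norm(patterns[i] - c1)
        preds[i] = 0 if d0 < d1 else 1
    return preds

preds = predict_loo(patterns, labels)
accuracy = (preds == labels).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # above the 0.5 chance level here
```

Above-chance cross-validated accuracy on held-out trials is the criterion by which decoding of physically identical stimuli is judged; real analyses would substitute measured voxel patterns and a regularized classifier.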