Predicting and perceiving degraded speech
Human listeners are better than machines at perceiving and comprehending speech – particularly when the speech signal is acoustically degraded or ambiguous. This is partly because we are better at using higher-level language knowledge to support perception, and partly because we can more rapidly learn about speech sounds, words and meanings. In this talk I will argue that a computational account of speech perception based on predictive coding explains both our ability to use prior knowledge to guide perception and our capacity for perceptual learning. I will describe recent behavioural, MEG/EEG and multivoxel pattern-analysis fMRI experiments using artificially degraded (noise-vocoded) speech that are consistent with this account.
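Noise-vocoding, the degradation method mentioned above, divides a signal into a small number of frequency bands, extracts each band's slow amplitude envelope, and uses those envelopes to modulate band-limited noise – preserving temporal structure while discarding spectral detail. A minimal numpy sketch of the idea (the band edges, envelope smoothing, and random seed are illustrative choices, not the stimulus parameters used in the experiments):

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=4, env_cutoff=30.0):
    """Noise-vocode a signal: split it into frequency bands, take each
    band's amplitude envelope, and modulate band-limited noise with it."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # Logarithmically spaced band edges between 100 Hz and the Nyquist frequency
    edges = np.logspace(np.log10(100.0), np.log10(fs / 2.0), n_bands + 1)
    rng = np.random.default_rng(0)          # fixed seed for reproducibility
    noise = rng.standard_normal(n)
    sig_f = np.fft.rfft(signal)
    noise_f = np.fft.rfft(noise)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(sig_f * mask, n)
        # Envelope: rectify, then smooth with a moving average (~1/env_cutoff s)
        win = max(1, int(fs / env_cutoff))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        carrier = np.fft.irfft(noise_f * mask, n)
        out += env * carrier
    return out
```

With few bands the output sounds like harsh whispering yet remains intelligible once listeners adapt – the perceptual-learning effect the experiments exploit; increasing `n_bands` restores spectral detail and intelligibility.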