Mapping multisensory networks of the brain
Our research group has employed multiple methodologies in both human and non-human primate models to better understand the neural circuits implicated in the integration of multisensory inputs and to detail the mechanisms and principles by which these multisensory integration processes proceed. This talk will discuss a number of findings from this work:

1) Multiple lines of electrophysiological, neuroimaging, and anatomical tracing evidence make it clear that multisensory integration is achieved extremely early during processing. These processes are initiated as early as the initial sensory afferent volley within hierarchically early sensory cortices, and are sometimes subserved by convergent feedforward mechanisms.

2) Spatial alignment of sensory inputs is, perhaps surprisingly, not necessary for cortical multisensory integrative processing to occur.

3) The integration of multisensory speech inputs is crucial for speech recognition under noisy environmental conditions, and this ability is severely impaired in a number of clinical populations, most notably children with an Autism Spectrum Disorder (ASD).

4) Functional connectivity patterns across a left-hemisphere-dominant audio-visual network predict out-of-scanner performance on a multisensory speech-in-noise challenge task, separating super-integrators from individuals with impoverished multisensory speech abilities.

5) Neural oscillations, particularly phase-reset mechanisms, play a key modulatory role in coordinating sensory inputs across widely separated cortical sensory regions.