Monday, June 10th 2013, 11:00am

After the usual Monday MEG meeting (6th Floor Meeting Room, 11am)

Naturally occurring non-speech sounds are rich in time-varying low-level features, and they define an acoustical space whose size dwarfs that occupied by speech. Natural sounds are thus a powerful, though often neglected, tool for defining general theories of sound-signal processing in the cortex. A major methodological obstacle to quantifying feature-encoding processes is the frequent need to assume a cortical feature decomposition whose nature is, in reality, still largely unknown. In this study, we used a feature-agnostic approach to quantify the temporal dynamics of feature encoding in the cortex. To this end, we applied information-theoretic methods to the phase and Hilbert amplitude of the time-frequency decomposition of MEG responses to natural sounds. In particular, we measured the amount of MEG information available for differentiating between largely diverse sound stimuli, and for differentiating between sound stimulation, on the one hand, and the null silence stimulus, on the other. The former sound-information measure is a proxy for sound-specific feature encoding, whereas the latter silence-information measure is a proxy for sound-generic detection processes. Consistent with recent studies, low-frequency phase (0.1-6 Hz) contained the largest amounts of both sound and silence information. The silence- and sound-information measures revealed phasic onset-related responses in the 0.1-20 Hz and 2-6 Hz ranges, respectively. Several functional dissociations emerged between silence and sound information: the former was strongest in the delta-low range (0.1-2 Hz), whereas the latter peaked in the delta-high to theta-low range (2-6 Hz). A phasic offset-related response characterized the silence information in the 0.1-4 Hz range, revealing a preferential role of delta oscillations in the encoding of sound edges.
Interestingly, the post-onset sound information in the phase of 0.1-6 Hz oscillations remained high and stable throughout the entire duration of the sound stimulus, revealing that cortical processes track the sound structure continuously. Contrary to current models of speech processing, gamma oscillations did not carry information. These results represent a first step towards understanding the temporal dynamics of the encoding of natural-sound features, and they outline explicit constraints on the design of brain-reading algorithms that operate with largely diverse sound stimuli.
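To make the analysis concrete, the sketch below band-pass filters a simulated single-sensor trace, extracts instantaneous phase and Hilbert amplitude from the analytic signal, and defines a simple plug-in mutual-information estimator between a discrete stimulus label and a binned response. Everything here is illustrative: the signal parameters, bin counts, and the helper name `binned_mi` are assumptions for the sketch, not the study's actual MEG pipeline or its information estimators.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Illustrative single-"sensor" trace: 2 s at 250 Hz (all parameters are
# assumptions for this sketch, not the study's preprocessing settings).
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass in the delta-high/theta-low range (2-6 Hz), then take the
# analytic signal: its angle is the instantaneous phase and its modulus
# the Hilbert amplitude discussed in the abstract.
b, a = butter(4, [2, 6], btype="bandpass", fs=fs)
analytic = hilbert(filtfilt(b, a, trace))
phase = np.angle(analytic)      # radians in (-pi, pi]
amplitude = np.abs(analytic)    # envelope, always >= 0


def binned_mi(labels, values, n_bins=4):
    """Plug-in mutual information (bits) between a discrete stimulus label
    and a continuous single-trial response, discretized into equipopulated
    bins. A hypothetical estimator, shown only to make the measure concrete;
    plug-in estimates on real data are biased and need correction."""
    labels = np.asarray(labels)
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    binned = np.digitize(values, edges[1:-1])   # bin index 0..n_bins-1
    joint = np.zeros((labels.max() + 1, n_bins))
    for lab, bi in zip(labels, binned):
        joint[lab, bi] += 1
    joint /= joint.sum()                        # joint P(label, bin)
    px = joint.sum(axis=1, keepdims=True)       # marginal P(label)
    py = joint.sum(axis=0, keepdims=True)       # marginal P(bin)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```

For two stimuli whose responses are perfectly separable across trials, this estimator returns H(label) = 1 bit; overlapping response distributions yield correspondingly lower values.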

Bruno Giordano