Pascal Belin
Visiting Researcher
Supervised Postgraduate Student: Fiona Guy
Collaborating with: Lars Muckli

I obtained an engineering degree from Ecole Polytechnique (Palaiseau, France) in 1992, and a PhD in Cognitive Sciences from the Ecole des Hautes Etudes en Sciences Sociales (Paris, France) in 1997 under the direction of Prof. Yves Samson at CEA-SHFJ, Orsay. I then joined Prof. Robert Zatorre’s lab at the Montreal Neurological Institute (McGill University, Canada) as a postdoctoral fellow, and obtained tenure in the Département de Psychologie at Université de Montréal in 2001 as an assistant professor.

I was named full professor in the Psychology Department at Glasgow University in August 2005. I head the Voice Neurocognition Laboratory; our research investigates the psychological and cerebral bases of the amazing ability of human listeners to extract information from voices.

Highlighted Papers

  • Paquette, S., Peretz, I., & Belin, P. (2013) The "Musical Emotional Bursts": a validated set of musical affect bursts to investigate auditory affective processing. Frontiers in Psychology, 4:509.
  • Giordano, B.L., McAdams, S., Zatorre, R.J., Kriegeskorte, N., & Belin, P. (2013) Abstract encoding of auditory objects in cortical activity patterns. Cerebral Cortex, 23(9):2025-37.
  • Watson, R., Latinus, M., Noguchi, T., Garrod, O., Crabbe, F., & Belin, P. (2013) Dissociating task difficulty from incongruence in face-voice emotion integration. Frontiers in Human Neuroscience, 7:744.
  • Latinus, M., McAleer, P., Bestelmeyer, P., & Belin, P. (2013) Norm-based coding of voice identity in human auditory cortex. Current Biology, 23(12):1075-80.

Current Funding

  • Audiovisual integration of identity information from the face and voice: behavioural, fMRI and MEG studies. BBSRC. Co-Is Belin, Gross
  • Cerebral processing of affective nonverbal vocalizations: a combined fMRI and MEG study. BBSRC. Co-Is Belin, Gross
  • Lifelong changes in the cerebral processing of social signals. MRC. Co-Is Belin, Grosbras, Rousselet
Consultation times for students:
Mondays 10-12
CONTACT INFO
Email address: pascal.belin@univ-amu.fr
LAB WEBSITES
Laboratory homepage: Voice Neurocognition Lab
SELECTED PUBLICATIONS
  The full list of publications is maintained by the author. Below is a selection of Pascal Belin's publications most relevant to his current research interests.
Watson R, Latinus M, Noguchi T, Garrod O, Crabbe F, Belin P. (2014) Crossmodal Adaptation in Right Posterior Superior Temporal Sulcus during Face-Voice Emotional Integration. Journal of Neuroscience Vol.34 pp 6813-6821
McAleer P, Todorov A, Belin P. (2014) How do you say 'hello'? Personality impressions from brief novel voices. PLoS One Vol.9 pp e9077
Charest I., Pernet C., Latinus M., Crabbe F. & Belin P. (2013) Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design. Cerebral Cortex Vol.23 pp 958-966
Abstract: Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex including the anterior part of the temporal voice areas in the right hemisphere responded primarily to acoustical distance with the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, functioning in pair with the prefrontal cortex in voice gender perception.
Belin P., Bestelmeyer P.E.G., Latinus M., & Watson R. (2011) Understanding voice perception. British Journal of Psychology Vol.102 pp 711-725
Abstract: Voices carry large amounts of socially relevant information on persons, much like 'auditory faces'. Following Bruce and Young's (1986) seminal model of face perception, we propose that the cerebral processing of vocal information is organized in interacting but functionally dissociable pathways for processing the three main types of vocal information: speech, identity, and affect. The predictions of the 'auditory face' model of voice perception are reviewed in the light of recent clinical, psychological, and neuroimaging evidence.
Latinus M., Crabbe F., Belin P. (2011) Learning-induced changes in the cerebral processing of voice identity. Cerebral Cortex Vol.21 pp 2820-2828
Abstract: Temporal voice areas showing a larger activity for vocal than nonvocal sounds have been identified along the superior temporal sulcus (STS); more voice-sensitive areas have been described in frontal and parietal lobes. Yet, the role of voice-sensitive regions in representing voice identity remains unclear. Using a functional magnetic resonance adaptation design, we aimed at disentangling acoustic- from identity-based representations of voices. Sixteen participants were scanned while listening to pairs of voices drawn from morphed continua between 2 initially unfamiliar voices, before and after a voice learning phase. In a given pair, the first and second stimuli could be identical or acoustically different and, at the second session, perceptually similar or different. At both sessions, right mid-STS/superior temporal cortex (STG) and superior Temporal Pole (sTP) showed sensitivity to acoustical changes. Critically, voice learning induced changes in the acoustical processing of voices in inferior frontal cortices (IFCs). At the second session only, right IFC and left cingulate gyrus showed sensitivity to changes in perceived identity. The processing of voice identity appears to be subserved by a large network of brain areas ranging from the sTP, involved in an acoustic-based representation of unfamiliar voices, to areas along the convexity of the IFC for identity-related processing of familiar voices.
Latinus M. & Belin P. (2011) Human Voice Perception. Current Biology Vol.21(4) pp R143-5
Bestelmeyer P.E.G., Belin P., & Grosbras M.-H. (2011) Right temporal TMS impairs voice detection. Current Biology Vol.21 pp R838-R839
Latinus M. & Belin P. (2011) Anti-voice adaptation suggests prototype-based coding of voice identity. Frontiers in Psychology
Bruckert L., Bestelmeyer P., Latinus M., Rouger J., Charest I., Rousselet G.A., Kawahara I., Belin P. (2010) Vocal attractiveness increases by averaging. Current Biology (26) pp 116-120
Abstract: Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
Sammler D., Baird A., Valabrègue R., Clément S., Dupont S., Belin P. & Samson S. (2010) The relationship of lyrics and tunes in the processing of unfamiliar songs: a functional magnetic resonance adaptation study. Journal of Neuroscience Vol.30 pp 3572-3578
Abstract: The cognitive relationship between lyrics and tunes in song is currently under debate, with some researchers arguing that lyrics and tunes are represented as separate components, while others suggest that they are processed in integration. The present study addressed this issue by means of a functional magnetic resonance adaptation paradigm during passive listening to unfamiliar songs. The repetition and variation of lyrics and/or tunes in blocks of six songs was crossed in a 2 x 2 factorial design to induce selective adaptation for each component. Reductions of the hemodynamic response were observed along the superior temporal sulcus and gyrus (STS/STG) bilaterally. Within these regions, the left mid-STS showed an interaction of the adaptation effects for lyrics and tunes, suggesting an integrated processing of the two components at prelexical, phonemic processing levels. The degree of integration decayed toward more anterior regions of the left STS, where the lack of such an interaction and the stronger adaptation for lyrics than for tunes was suggestive of an independent processing of lyrics, perhaps resulting from the processing of meaning. Finally, evidence for an integrated representation of lyrics and tunes was found in the left dorsal precentral gyrus (PrCG), possibly relating to the build-up of a vocal code for singing in which musical and linguistic features of song are fused. Overall, these results demonstrate that lyrics and tunes are processed at varying degrees of integration (and separation) through the consecutive processing levels allocated along the posterior-anterior axis of the left STS and the left PrCG.
Belin P. & Grosbras M.-H. (2010) Before speech: cerebral voice processing in infants. Neuron Vol.65 pp 733-735
Abstract: In this issue of Neuron, Grossmann et al. provide the first evidence of voice-sensitive regions in the brain of 7-month-old, but not 4-month-old, infants. We discuss the implications of these findings for our understanding of cerebral voice processing in the first months of life.
Charest I., Pernet C., Rousselet G., Quinones I., Latinus M., Fillion-Bilodeau S., Chartrand J.P., Belin P. (2009) Electrophysiological evidence for an early processing of human voices. BMC Neuroscience Vol.10 pp 127
Abstract: Background: Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150 ms to 200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed. Results: ERPs were recorded from 32 healthy volunteers who listened to 200 ms long stimuli from three sound categories - voices, bird songs and environmental sounds - whilst performing a pure-tone detection task. ERP analyses revealed voice/non-voice amplitude differences emerging as early as 164 ms post stimulus onset and peaking around 200 ms on fronto-temporal (positivity) and occipital (negativity) electrodes. Conclusion: Our electrophysiological results suggest a rapid brain discrimination of sounds of voice, termed the "fronto-temporal positivity to voices" (FTPV), at latencies comparable to the well-known face-preferential N170.
Gougoux F., Belin P., Voss P., Lepore F., Lassonde M. & Zatorre R.J. (2009) Voice perception in blind persons: A functional magnetic resonance imaging study. Neuropsychologia (47) pp 2967-2974
Abstract: Early blind persons have often been shown to be superior to sighted ones across a wide range of non-visual perceptual abilities, which in turn are often explained by the functionally relevant recruitment of occipital areas. While voice stimuli are known to involve voice-selective areas of the superior temporal sulcus (STS) in sighted persons, it remains unknown if the processing of vocal stimuli involves similar brain regions in blind persons, or whether it benefits from cross-modal processing. To address these questions, we used fMRI to measure cerebral responses to voice and non-voice stimuli in blind (congenital and acquired) and sighted subjects. The global comparison of all sounds vs. silence showed a different pattern of activation between blind (pooled congenital and acquired) and sighted groups, with blind subjects showing stronger activation of occipital areas but weaker activation of temporal areas centered around Heschl’s gyrus. In contrast, the specific comparison of vocal vs. non-vocal sounds did not isolate activations in the occipital areas in either of the blind groups. In the congenitally blind group, however, it led to a stronger activation in the left STS, and to a lesser extent in the fusiform cortex, compared to both sighted participants and those with acquired blindness. Moreover, STS activity in congenitally blind participants significantly correlated with performance in a voice discrimination task. This increased recruitment of STS areas in the blind for voice processing is in marked contrast with the usual cross-modal recruitment of occipital cortex.
Belin P., Fillion-Bilodeau S., Gosselin F. (2008) The Montreal Affective Voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behavior Research Methods Vol.40(2) pp 531-539
Abstract: The Montreal Affective Voices consist of 90 nonverbal affect bursts corresponding to the emotions of anger, disgust, fear, pain, sadness, surprise, happiness, and pleasure (plus a neutral expression), recorded by 10 different actors (5 of them male and 5 female). Ratings of valence, arousal, and intensity for eight emotions were collected for each vocalization from 30 participants. Analyses revealed high recognition accuracies for most of the emotional categories (mean of 68%). They also revealed significant effects of both the actors’ and the participants’ gender: The highest hit rates (75%) were obtained for female participants rating female vocalizations, and the lowest hit rates (60%) for male participants rating male vocalizations. Interestingly, the mixed situations— that is, male participants rating female vocalizations or female participants rating male vocalizations—yielded similar, intermediate ratings. The Montreal Affective Voices are available for download at vnl.psy.gla.ac.uk/ (Resources section).
Belin P., Fecteau S., Charest I., Nicastro N., Hauser M.D. & Armony J.L. (2008) Human cerebral response to animal affective vocalizations. Proceedings of the Royal Society Series B (275) pp 473-481 [expand abstract]
Abstract: It is presently unknown whether our response to affective vocalizations is specific to those generated by humans or more universal, triggered by emotionally matched vocalizations generated by other species. Here, we used functional magnetic resonance imaging in normal participants to measure cerebral activity during auditory stimulation with affectively valenced animal vocalizations, some familiar (cats) and others not (rhesus monkeys). Positively versus negatively valenced vocalizations from cats and monkeys elicited different cerebral responses despite the participants' inability to differentiate the valence of these animal vocalizations by overt behavioural responses. Moreover, the comparison with human non-speech affective vocalizations revealed a common response to the valence in orbitofrontal cortex, a key component of the limbic system. These findings suggest that the neural mechanisms involved in processing human affective vocalizations may be recruited by heterospecific affective vocalizations at an unconscious level, supporting claims of shared emotional systems across species.
Belin P. (2008) Monkeys Hear Voices. Scientific American Mind pp 90-91
Fecteau S., Belin P., Joanette Y. & Armony JL. (2007) Amygdala responses to nonlinguistic emotional vocalization. Neuroimage (36) pp 480-487
Chartrand JP. (2007) Brain response to birdsongs in bird experts. Neuroreport Vol.18(4) pp 335-340
Armony JL. (2007) Laugh (or Cry) and You Will Be Remembered: Influence of Emotional Expression on Memory for Vocalizations. Psychological Science Vol.18(12) pp 1027-1029
Campanella S. & Belin P. (2007) Integrating face and voice in person perception. Trends in Cognitive Sciences Vol.11(12) pp 535-543
Fecteau S., Armony J.L., Joanette Y. & Belin P. (2005) Sensitivity to voice in human prefrontal cortex. Journal of Neurophysiology (94) pp 2251-2254
Gougoux F., Lepore F., Lassonde M., Voss P., Zatorre R.J. & Belin P. (2004) And the Blind Shall Hear: Improved Pitch Discrimination in the Early Blind. Nature (430) pp 309-310
Gervais H., Belin P., Boddaert N., Leboyer M., Coez A., Barthélémy C., Samson Y. & Zilbovicius M. (2004) Abnormal Voice Processing in Autism: an fMRI study. Nature Neuroscience (7) pp 801-802
Fecteau S., Armony J., Joanette Y. & Belin P. (2004) Is voice processing species-specific in human auditory cortex? An fMRI study. Neuroimage (23) pp 840-848
Belin P., Fecteau S. & Bédard C. (2004) Thinking the voice: neural correlates of voice perception. Trends in Cognitive Sciences (8) pp 129-135
Belin P., Zatorre R.J., Lafaille P., Ahad P. & Pike B. (2000) Voice-selective areas in human auditory cortex. Nature (403) pp 309-312