CSCAN ROUNDS
Thursday, January 31st 2019, 2:30pm
COMPUTATIONAL MODELS OF EMOTIONAL VOCAL BEHAVIOUR AND THEIR APPLICATION TO THE EXPERIMENTAL AND CLINICAL STUDY OF SOCIAL AND AFFECTIVE NEUROSCIENCE
Abstract: The CREAM ERC project (Cracking the emotional code of music, http://cream.ircam.fr), hosted at IRCAM, Paris, aims to bring together new technologies in audio signal processing and experimental research in the affective psychology and neuroscience of speech and music. In this talk, Jean-Julien will present three new voice-manipulation tools created in the project, including the open-source software CLEESE and DAVID (available at http://forumnet.ircam.fr/product/cleese and http://forumnet.ircam.fr/product/david/), and describe a series of experimental and clinical studies in which they were recently used: studies of emotional vocal feedback (Aucouturier et al., 2016), of facial mimicry of auditory smiles in healthy and congenitally blind participants (Arias et al., 2017), and of reverse-correlation of social prosody in healthy participants and brain-stroke survivors (Ponsot et al., 2017). We make these software tools available open-source to the psychology community, and hope that you will find them useful for your own work.

Speaker: JJ Aucouturier is a permanent CNRS researcher at IRCAM in Paris. He was trained in Computer Science and held several postdoctoral positions in Cognitive Neuroscience, at the RIKEN Brain Science Institute in Tokyo, Japan, and at the University of Dijon, France. He now heads the CREAM neuroscience lab at IRCAM, where he uses audio signal processing technologies to understand how sound and music create emotions. Lab website: cream.ircam.fr
PRESENTED BY
Jean-Julien Aucouturier
Permanent CNRS researcher
at IRCAM (Paris)
INVITED BY
Rachael Jack