Hearing the Facial Expressions of Emotion: The Case of the Smile
Smiles are arguably among the most important behaviours in the human emotional repertoire: they are produced early in development, across a wide range of cultures, and even by other species. Because of this ubiquity, smiles have been widely studied by the visual face perception community, which has described, e.g., their mental representations and the congruent facial reactions that occur shortly after their visual perception. However, smiles are not only experienced visually; they also have acoustic consequences. Are such auditory smiles merely a side effect of a primarily visual phenomenon? Or can they, in their own right, trigger the low-level reactions involved in the processing of their visual counterparts? In this talk, I will present a measure of the mental representations of auditory smiles, as well as a computational model that recreates these cues in experimental stimuli. I will then present a series of experiments probing the cognitive processing of auditory smiles. I will report that (1) auditory smiles can trigger unconscious facial imitation, (2) they are cognitively integrated with their visual manifestation, and (3) the development of these processes does not depend on pre-learned visual associations.