cSCAN Rounds

To Go Where No Speech Synthesis Has Gone Before

Computers are now speaking and listening to us more than ever. Ten years ago this technology was found mostly on automated phone lines; now we are used to speech interaction with personal digital assistants and smart speakers such as Siri and the Amazon Echo. Speech output is no longer solely about conveying information. With speech functionality, computers have entered the social domain. In this talk I will give an overview of techniques that have been developed to give artificial voices character, and describe projects where speech synthesis has been used in novel areas such as bringing the dead back to life, supporting community radio, and talking sunglasses.

About Matthew

Matthew P. Aylett has been involved in speech technology and HCI as a student and researcher since 1994. He founded CereProc Ltd in 2006 with the aim of creating commercially available, characterful speech synthesis. In 2007 CereProc released the first commercial synthesis system to allow modification of voice quality for adding underlying emotion to voices. He was awarded a Royal Society Industrial Fellowship in 2012 to explore personification in speech synthesis. He has remained active both commercially, where he directs CereProc's technical strategy, and academically, as an honorary fellow at the School of Informatics at the University of Edinburgh. Matthew has substantial commercial engineering and product development management experience, together with a broad international research background in prosody, dialogue engineering, affective computing, novel interface design, and psycholinguistics.