Laughter Animation Synthesis

Yu Ding 1,2, Ken Prépin 1,2, Jing Huang 1,2, C. Pelachaud 1,2, T. Artières
1 MM - Multimédia
2 LTCI - Laboratoire Traitement et Communication de l'Information
Abstract:

Laughter is an important communicative signal in human-human communication, yet few attempts have been made to synthesize laughter animation for virtual characters. This paper reports our work on modeling hilarious laughter. We have developed a generator for face and body motions that takes as input a sequence of laughter pseudo-phonemes and the duration of each pseudo-phoneme; lip and jaw movements are further driven by laughter prosodic features. The proposed generator first learns the relationship between the input signals (pseudo-phonemes and acoustic features) and human motions; the learnt generator can then automatically produce laughter animation in real time. Lip and jaw motion synthesis is based on an extension of the Gaussian model, the contextual Gaussian model. Head and eyebrow motion synthesis selects and concatenates motion segments from motion capture data of human laughter, while torso and shoulder movements are driven from the head motion by a PD controller. Our multimodal laughter behavior generator has been evaluated through a perceptual study involving the interaction of a human and an agent telling jokes to each other.
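The abstract does not spell out the contextual Gaussian model, so the sketch below only illustrates the general idea under a common assumption: the distribution's mean is an affine function of the acoustic context (e.g., pitch and energy features), while the covariance stays fixed. The class and variable names (ContextualGaussian, mu0, W) are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

class ContextualGaussian:
    """Gaussian whose mean depends affinely on a context vector:
    mu(theta) = mu0 + W @ theta, with a context-independent covariance."""

    def __init__(self, dim_motion, dim_context):
        self.mu0 = np.zeros(dim_motion)               # context-free mean
        self.W = np.zeros((dim_motion, dim_context))  # context weight matrix
        self.cov = np.eye(dim_motion)                 # fixed covariance

    def fit(self, motions, contexts):
        # Least-squares fit of [mu0; W] from training pairs
        # motions: (N, dim_motion), contexts: (N, dim_context)
        X = np.hstack([np.ones((len(contexts), 1)), contexts])
        coef, *_ = np.linalg.lstsq(X, motions, rcond=None)
        self.mu0, self.W = coef[0], coef[1:].T
        resid = motions - X @ coef
        self.cov = np.cov(resid, rowvar=False)

    def mean(self, context):
        # Context-dependent mean, e.g. lip/jaw pose for given prosody
        return self.mu0 + self.W @ context
```

At synthesis time, each frame's acoustic features would be fed to mean() to obtain the corresponding lip and jaw pose, which makes the mapping cheap enough for real-time use.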
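Likewise, here is a minimal sketch of how a PD controller could drive torso motion from head motion: at each frame the torso is accelerated toward the head trajectory in proportion to the position error (gain kp) and the velocity error (gain kd). The gains, frame rate, and function name pd_follow are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pd_follow(head_angles, dt=1 / 30, kp=40.0, kd=8.0):
    """Generate a torso angle trajectory that tracks a head angle
    trajectory via a PD law: acc = kp * pos_error + kd * vel_error."""
    torso = np.zeros_like(head_angles)
    vel = 0.0
    for t in range(1, len(head_angles)):
        err = head_angles[t - 1] - torso[t - 1]          # position error
        head_vel = (head_angles[t] - head_angles[t - 1]) / dt
        acc = kp * err + kd * (head_vel - vel)           # PD control law
        vel += acc * dt                                  # integrate velocity
        torso[t] = torso[t - 1] + vel * dt               # integrate position
    return torso
```

Because the controller only tracks the head signal with a lag set by the gains, the torso and shoulders follow the head's laughter oscillations with a natural-looking delay and damping.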

https://hal.telecom-paristech.fr/hal-02412082
Contributor: Telecomparis Hal
Submitted on: Sunday, December 15, 2019 - 12:44:44 PM
Last modification on: Thursday, December 19, 2019 - 1:12:34 AM

Identifiers

  • HAL Id: hal-02412082, version 1

Citation

Yu Ding, Ken Prépin, Jing Huang, C. Pelachaud, T. Artières. Laughter Animation Synthesis. International Conference on Autonomous Agents and Multi-agent Systems, May 2014, Paris, France. pp.773-780. ⟨hal-02412082⟩
