Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with
Generative Adversarial Affective Expression Learning
- URL: http://arxiv.org/abs/2108.00262v2
- Date: Tue, 3 Aug 2021 10:35:44 GMT
- Title: Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with
Generative Adversarial Affective Expression Learning
- Authors: Uttaran Bhattacharya and Elizabeth Childs and Nicholas Rewkowski and
Dinesh Manocha
- Abstract summary: We present a generative adversarial network to synthesize 3D pose sequences of co-speech upper-body gestures with appropriate affective expressions.
Our network consists of two components: a generator to synthesize gestures from a joint embedding space of features encoded from the input speech and the seed poses, and a discriminator to distinguish between the synthesized pose sequences and real 3D pose sequences.
- Score: 63.06044724907101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a generative adversarial network to synthesize 3D pose sequences
of co-speech upper-body gestures with appropriate affective expressions. Our
network consists of two components: a generator to synthesize gestures from a
joint embedding space of features encoded from the input speech and the seed
poses, and a discriminator to distinguish between the synthesized pose
sequences and real 3D pose sequences. We leverage the Mel-frequency cepstral
coefficients and the text transcript computed from the input speech in separate
encoders in our generator to learn the desired sentiments and the associated
affective cues. We design an affective encoder using multi-scale
spatial-temporal graph convolutions to transform 3D pose sequences into latent,
pose-based affective features. We use our affective encoder in both our
generator, where it learns affective features from the seed poses to guide the
gesture synthesis, and our discriminator, where it enforces the synthesized
gestures to contain the appropriate affective expressions. We perform extensive
evaluations on two benchmark datasets for gesture synthesis from speech,
the TED Gesture Dataset and the GENEA Challenge 2020 Dataset. Compared to the
best baselines, we improve the mean absolute joint error by 10-33%, the mean
acceleration difference by 8-58%, and the Fréchet Gesture Distance by 21-34%.
We also conduct a user study and observe that, compared to the best current
baselines, around 15.28% of participants indicated that our synthesized
gestures appear more plausible, and around 16.32% of participants felt that
the gestures had more appropriate affective expressions aligned with the speech.
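
As a concrete illustration of the audio features mentioned above, the sketch
below extracts Mel-frequency cepstral coefficients from a speech clip with
librosa, the kind of input the speech encoder consumes. The sampling rate,
number of coefficients, and file name are assumptions for the example, not
values taken from the paper.

    # Minimal MFCC extraction sketch (assumed parameters, not the paper's).
    import librosa
    import numpy as np

    def extract_mfcc(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
        """Return an (n_mfcc, T) array of MFCC frames for the given audio file."""
        y, sr = librosa.load(wav_path, sr=sr)  # mono waveform at the target rate
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

    # Example (hypothetical file): features = extract_mfcc("speech_clip.wav")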
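
The affective encoder is described as multi-scale spatial-temporal graph
convolutions over 3D pose sequences. Below is a minimal PyTorch sketch of one
spatial-temporal graph convolution block of that kind; the layer widths,
temporal kernel size, and adjacency handling are illustrative assumptions
rather than the authors' exact architecture.

    # One spatial-temporal graph convolution block over pose sequences
    # shaped (N, C, T, V): batch, coordinate channels, frames, joints.
    import torch
    import torch.nn as nn

    class STGraphConvBlock(nn.Module):
        def __init__(self, in_channels, out_channels, num_joints, temporal_kernel=9):
            super().__init__()
            # Learnable adjacency over the skeleton graph (V x V), initialized to identity.
            self.adj = nn.Parameter(torch.eye(num_joints))
            self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            pad = (temporal_kernel - 1) // 2
            self.temporal = nn.Conv2d(out_channels, out_channels,
                                      kernel_size=(temporal_kernel, 1),
                                      padding=(pad, 0))
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.spatial(x)                              # per-joint feature transform
            x = torch.einsum('nctv,vw->nctw', x, self.adj)   # aggregate over the skeleton graph
            x = self.temporal(x)                             # mix information across frames
            return self.relu(x)

    # Example: 3-coordinate poses, 64 frames, 10 upper-body joints.
    # block = STGraphConvBlock(3, 32, num_joints=10)
    # out = block(torch.randn(8, 3, 64, 10))  # -> (8, 32, 64, 10)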
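
A schematic view of the adversarial setup, assuming generic generator and
discriminator modules: the generator maps encoded speech features and seed
poses to a gesture sequence, and the discriminator separates real from
synthesized pose sequences. The loss terms and their equal weighting are
placeholder assumptions, not the paper's exact objective.

    # One adversarial training step (schematic, not the authors' code).
    import torch
    import torch.nn.functional as F

    def train_step(generator, discriminator, g_opt, d_opt,
                   speech_feats, seed_poses, real_poses):
        # Discriminator update: push real sequences toward 1, synthesized toward 0.
        with torch.no_grad():
            fake_poses = generator(speech_feats, seed_poses)
        d_real = discriminator(real_poses)
        d_fake = discriminator(fake_poses)
        d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator update: fool the discriminator while staying near ground truth.
        fake_poses = generator(speech_feats, seed_poses)
        d_out = discriminator(fake_poses)
        g_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
        g_rec = F.l1_loss(fake_poses, real_poses)  # pose reconstruction term (assumed)
        g_loss = g_adv + g_rec                     # assumed equal loss weighting
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
        return d_loss.item(), g_loss.item()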
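
The reported metrics can be sketched as follows: mean absolute joint error
(MAJE), mean acceleration difference (MAD), and a Fréchet Gesture Distance
(FGD) computed as the standard Fréchet distance between Gaussians fitted to
latent gesture features. The frame rate, the use of second-order finite
differences for acceleration, and the unspecified feature extractor are
assumptions; the authors' exact metric definitions may differ in detail.

    # Sketches of the evaluation metrics (assumed definitions).
    import numpy as np
    from scipy import linalg

    def maje(pred, gt):
        # pred, gt: (T, J, 3) pose sequences; mean absolute per-joint position error.
        return float(np.mean(np.abs(pred - gt)))

    def mad(pred, gt, fps=15.0):
        # Second-order finite differences approximate per-frame acceleration.
        acc = lambda x: np.diff(x, n=2, axis=0) * (fps ** 2)
        return float(np.mean(np.abs(acc(pred) - acc(gt))))

    def frechet_gesture_distance(feats_real, feats_fake):
        # feats_*: (N, D) latent gesture features from some trained encoder (not shown).
        mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
        cov_r = np.cov(feats_real, rowvar=False)
        cov_f = np.cov(feats_fake, rowvar=False)
        covmean = linalg.sqrtm(cov_r @ cov_f)
        if np.iscomplexobj(covmean):
            covmean = covmean.real
        return float(np.sum((mu_r - mu_f) ** 2)
                     + np.trace(cov_r + cov_f - 2.0 * covmean))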
Related papers
- Combo: Co-speech holistic 3D human motion generation and efficient customizable adaptation in harmony [55.26315526382004]
We propose a novel framework, Combo, for co-speech holistic 3D human motion generation.
In particular, we identify one fundamental challenge as the multiple-input-multiple-output nature of the generative model of interest.
Combo is not only highly effective in generating high-quality motions but also efficient in transferring identity and emotion.
arXiv Detail & Related papers (2024-08-18T07:48:49Z)
- Speech2UnifiedExpressions: Synchronous Synthesis of Co-Speech Affective Face and Body Expressions from Affordable Inputs [67.27840327499625]
We present a multimodal learning-based method to simultaneously synthesize co-speech facial expressions and upper-body gestures for digital characters.
Our approach learns from sparse face landmarks and upper-body joints, estimated directly from video data, to generate plausible emotive character motions.
arXiv Detail & Related papers (2024-06-26T04:53:11Z)
- EmoVOCA: Speech-Driven Emotional 3D Talking Heads [12.161006152509653]
We propose an innovative data-driven technique for creating a synthetic dataset, called EmoVOCA.
We then designed and trained an emotional 3D talking head generator that accepts a 3D face, an audio file, an emotion label, and an intensity value as inputs, and learns to animate the audio-synchronized lip movements with expressive traits of the face.
arXiv Detail & Related papers (2024-03-19T16:33:26Z)
- Co-Speech Gesture Synthesis using Discrete Gesture Token Learning [1.1694169299062596]
Synthesizing realistic co-speech gestures is an important and yet unsolved problem for creating believable motions.
One challenge in learning the co-speech gesture model is that there may be multiple viable gesture motions for the same speech utterance.
We propose a two-stage model to address this uncertainty issue in gesture synthesis by modeling the gesture segments as discrete latent codes.
arXiv Detail & Related papers (2023-03-04T01:42:09Z)
- Generating Holistic 3D Human Motion from Speech [97.11392166257791]
We build a high-quality dataset of 3D holistic body meshes with synchronous speech.
We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately.
arXiv Detail & Related papers (2022-12-08T17:25:19Z)
- Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation [107.10239561664496]
We propose a novel framework named Hierarchical Audio-to-Gesture (HA2G) for co-speech gesture generation.
The proposed method renders realistic co-speech gestures and outperforms previous methods by a clear margin.
arXiv Detail & Related papers (2022-03-24T16:33:29Z)
- Learning Speech-driven 3D Conversational Gestures from Video [106.15628979352738]
We propose the first approach to automatically and jointly synthesize both the synchronous 3D conversational body and hand gestures.
Our algorithm uses a CNN architecture that leverages the inherent correlation between facial expression and hand gestures.
We also contribute a new way to create a large corpus of more than 33 hours of annotated body, hand, and face data from in-the-wild videos of talking people.
arXiv Detail & Related papers (2021-02-13T01:05:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.