Triangular Character Animation Sampling with Motion, Emotion, and
Relation
- URL: http://arxiv.org/abs/2203.04930v1
- Date: Wed, 9 Mar 2022 18:19:03 GMT
- Title: Triangular Character Animation Sampling with Motion, Emotion, and
Relation
- Authors: Yizhou Zhao, Liang Qiu, Wensi Ai, Pan Lu, Song-Chun Zhu
- Abstract summary: We present a novel framework to sample and synthesize animations by associating the characters' body motions, facial expressions, and social relations.
Our method can provide animators with an automatic way to generate 3D character animations, help synthesize interactions between Non-Player Characters (NPCs), and enhance machine emotional intelligence in virtual reality (VR).
- Score: 78.80083186208712
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dramatic progress has been made in animating individual characters. However,
we still lack automatic control over activities between characters, especially
those involving interactions. In this paper, we present a novel energy-based
framework to sample and synthesize animations by associating the characters'
body motions, facial expressions, and social relations. We propose a
Spatial-Temporal And-Or graph (ST-AOG), a stochastic grammar model, to encode
the contextual relationship between motion, emotion, and relation, forming a
triangle in a conditional random field. We train our model from a labeled
dataset of two-character interactions. Experiments demonstrate that our method
can recognize the social relation between two characters and sample new scenes
of vivid motion and emotion using Markov Chain Monte Carlo (MCMC) given the
social relation. Thus, our method can provide animators with an automatic way
to generate 3D character animations, help synthesize interactions between
Non-Player Characters (NPCs), and enhance machine emotional intelligence (EQ) in
virtual reality (VR).
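To make the sampling idea concrete, below is a minimal Python sketch of Metropolis-Hastings sampling over a character pair's motion and emotion labels with the social relation held fixed. The label sets and the energy function (pairwise potentials over the motion-emotion-relation triangle) are illustrative assumptions, not the paper's learned ST-AOG potentials.

```python
import math
import random

# Hypothetical label sets; the paper derives these from ST-AOG parse graphs.
MOTIONS = ["wave", "handshake", "hug", "point"]
EMOTIONS = ["happy", "neutral", "angry"]

def energy(motion: str, emotion: str, relation: str) -> float:
    """Toy triangle energy: lower means a more compatible
    (motion, emotion, relation) triple. Stands in for learned CRF potentials."""
    pair_me = 0.0 if (motion, emotion) in {("hug", "happy"), ("point", "angry")} else 1.0
    pair_mr = 0.0 if (relation, motion) in {("friends", "hug"), ("strangers", "handshake")} else 1.0
    pair_er = 0.0 if (relation, emotion) in {("friends", "happy"), ("strangers", "neutral")} else 1.0
    return pair_me + pair_mr + pair_er

def sample_scene(relation: str, steps: int = 2000, temperature: float = 0.5):
    """Metropolis-Hastings over (motion, emotion) given a fixed relation."""
    m, e = random.choice(MOTIONS), random.choice(EMOTIONS)
    for _ in range(steps):
        # Propose a move by resampling one of the two labels.
        if random.random() < 0.5:
            proposal = (random.choice(MOTIONS), e)
        else:
            proposal = (m, random.choice(EMOTIONS))
        delta = energy(*proposal, relation) - energy(m, e, relation)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            m, e = proposal
    return m, e

print(sample_scene("friends"))  # e.g. ('hug', 'happy')
```

In the paper, the sampled state is a full ST-AOG parse (body motions and facial expressions over time) rather than two discrete labels, and the same energy supports the reverse task of recognizing the relation that best explains an observed interaction; the sketch only shows the MCMC control flow given the relation.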
Related papers
- ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE [0.0]
We argue that emotions and non-determinism are crucial for generating diverse and emotionally rich facial animations.
We propose ProbTalk3D, a non-deterministic neural network approach for emotion-controllable speech-driven 3D facial animation synthesis.
arXiv Detail & Related papers (2024-09-12T11:53:05Z)
- EmoFace: Audio-driven Emotional 3D Face Animation [3.573880705052592]
EmoFace is a novel audio-driven methodology for creating facial animations with vivid emotional dynamics.
Our approach can generate facial expressions with multiple emotions, and has the ability to generate random yet natural blinks and eye movements.
Our proposed methodology can be applied to producing dialogue animations of non-playable characters in video games and to driving avatars in virtual reality environments.
arXiv Detail & Related papers (2024-07-17T11:32:16Z)
- CSTalk: Correlation Supervised Speech-driven 3D Emotional Facial Animation Generation [13.27632316528572]
Speech-driven 3D facial animation technology has been developed for years, but its practical applications still fall short of expectations.
The main challenges lie in data limitations, lip alignment, and the naturalness of facial expressions.
This paper proposes a method called CSTalk that models the correlations among different regions of facial movements and supervises the training of the generative model to generate realistic expressions.
arXiv Detail & Related papers (2024-04-29T11:19:15Z)
- Digital Life Project: Autonomous 3D Characters with Social Intelligence [86.2845109451914]
Digital Life Project is a framework utilizing language as the universal medium to build autonomous 3D characters.
Our framework comprises two primary components: SocioMind and MoMat-MoGen.
arXiv Detail & Related papers (2023-12-07T18:58:59Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising-diffusion-based model that synthesizes the full-body motion of one person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- MAAIP: Multi-Agent Adversarial Interaction Priors for imitation from fighting demonstrations for physics-based characters [5.303375034962503]
We propose a novel approach based on Multi-Agent Generative Adversarial Imitation Learning.
Our system trains control policies allowing each character to imitate the interactive skills associated with each actor.
This approach has been tested on two different fighting styles, boxing and full-body martial art, to demonstrate the ability of the method to imitate different styles.
arXiv Detail & Related papers (2023-11-04T20:40:39Z)
- Compositional 3D Human-Object Neural Animation [93.38239238988719]
Human-object interactions (HOIs) are crucial for human-centric scene understanding applications such as human-centric visual generation, AR/VR, and robotics.
In this paper, we address this challenge in HOI animation from a compositional perspective.
We adopt neural human-object deformation to model and render HOI dynamics based on implicit neural representations.
arXiv Detail & Related papers (2023-04-27T10:04:56Z)
- Synthesizing Physical Character-Scene Interactions [64.26035523518846]
For realistic character animation, it is necessary to synthesize interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z)
- Synthesis of Compositional Animations from Textual Descriptions [54.85920052559239]
"How unstructured and complex can we make a sentence and still generate plausible movements from it?"
"How can we animate 3D-characters from a movie script or move robots by simply telling them what we would like them to do?"
arXiv Detail & Related papers (2021-03-26T18:23:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.