Character Controllers Using Motion VAEs
- URL: http://arxiv.org/abs/2103.14274v1
- Date: Fri, 26 Mar 2021 05:51:41 GMT
- Title: Character Controllers Using Motion VAEs
- Authors: Hung Yu Ling and Fabio Zinno and George Cheng and Michiel van de Panne
- Abstract summary: We learn data-driven generative models of human movement using Motion VAEs.
Planning or control algorithms can then use the resulting latent action space to generate desired motions.
- Score: 9.806910643086045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental problem in computer animation is that of realizing purposeful
and realistic human movement given a sufficiently rich set of motion capture
clips. We learn data-driven generative models of human movement using
autoregressive conditional variational autoencoders, or Motion VAEs. The latent
variables of the learned autoencoder define the action space for the movement
and thereby govern its evolution over time. Planning or control algorithms can
then use this action space to generate desired motions. In particular, we use
deep reinforcement learning to learn controllers that achieve goal-directed
movements. We demonstrate the effectiveness of the approach on multiple tasks.
We further evaluate system-design choices and describe the current limitations
of Motion VAEs.
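To make the setup concrete, here is a minimal PyTorch sketch of one autoregressive rollout step, with the MVAE latent variable playing the role of the action; the network shape, pose and latent dimensions, and frame rate are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): an autoregressive conditional
# decoder whose latent input z acts as the control 'action'.
import torch
import torch.nn as nn

POSE_DIM, LATENT_DIM = 64, 32  # hypothetical sizes

class MVAEDecoder(nn.Module):
    """Predicts the next pose from the previous pose and a latent action z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + LATENT_DIM, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, prev_pose, z):
        return self.net(torch.cat([prev_pose, z], dim=-1))

decoder = MVAEDecoder()
pose = torch.zeros(1, POSE_DIM)        # initial pose
for _ in range(30):                    # roll out ~one second at 30 fps
    z = torch.randn(1, LATENT_DIM)     # sampling the prior; a trained RL
    pose = decoder(pose, z)            # policy would choose z instead
```

In the paper's framework, a deep reinforcement-learning policy outputs z at every step to steer such rollouts toward task goals.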
Related papers
- Programmable Motion Generation for Open-Set Motion Control Tasks [51.73738359209987]
We introduce a new paradigm, programmable motion generation.
In this paradigm, any given motion control task is broken down into a combination of atomic constraints.
These constraints are then programmed into an error function that quantifies the degree to which a motion sequence adheres to them (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-05-29T17:14:55Z)
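As referenced in the entry above, a toy sketch of composing atomic constraints into a single error function; the specific constraints, weights, and motion encoding are invented for illustration.

```python
# Toy sketch of 'atomic constraints -> error function' (illustrative only).
import numpy as np

def foot_contact_error(motion):           # hypothetical atomic constraint
    return float(np.abs(motion[:, 2]).mean())  # e.g. keep foot height near 0

def target_reach_error(motion, target):   # hypothetical atomic constraint
    return float(np.linalg.norm(motion[-1] - target))

def task_error(motion, target, w=(1.0, 1.0)):
    """Weighted sum of atomic errors: lower means better adherence."""
    return (w[0] * foot_contact_error(motion)
            + w[1] * target_reach_error(motion, target))

motion = np.zeros((60, 3))                # a 60-frame toy trajectory
print(task_error(motion, target=np.ones(3)))
```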
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Neural Categorical Priors for Physics-Based Character Control [12.731392285646614]
We propose a new learning framework for controlling physics-based characters with significantly improved motion quality and diversity.
The proposed method uses reinforcement learning (RL) to initially track and imitate life-like movements from unstructured motion clips.
We conduct comprehensive experiments using humanoid characters on two challenging downstream tasks: sword-and-shield striking and a two-player boxing game.
arXiv Detail & Related papers (2023-08-14T15:10:29Z)
- Perpetual Humanoid Control for Real-time Simulated Avatars [77.05287269685911]
We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior.
Our controller scales up to learning ten thousand motion clips without using any external stabilizing forces.
We demonstrate the effectiveness of our controller by using it to imitate noisy poses from video-based pose estimators and language-based motion generators in a live and real-time multi-person avatar use case.
arXiv Detail & Related papers (2023-05-10T20:51:37Z)
- CALM: Conditional Adversarial Latent Models for Directable Virtual Characters [71.66218592749448]
We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
arXiv Detail & Related papers (2023-05-02T09:01:44Z)
- Human MotionFormer: Transferring Human Motions with Vision Transformers [73.48118882676276]
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis.
We propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching.
Experiments show that Human MotionFormer sets a new state of the art both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-02-22T11:42:44Z)
- Task-Generic Hierarchical Human Motion Prior using VAEs [44.356707509079044]
A deep generative model that describes human motions can benefit a wide range of fundamental computer vision and graphics tasks.
We present a method for learning complex human motions independent of specific tasks using a combined global and local latent space.
We demonstrate the effectiveness of our hierarchical motion variational autoencoder on a variety of tasks, including video-based human pose estimation (a structural sketch follows this entry).
arXiv Detail & Related papers (2021-06-07T23:11:42Z)
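As noted in the entry above, a hedged sketch of what a combined global and local latent space can look like for motion; the architecture and dimensions are assumptions about the general idea, not the authors' model.

```python
# Sketch of a global (per-clip) plus local (per-frame) latent space.
import torch
import torch.nn as nn

FRAME_DIM, GLOBAL_DIM, LOCAL_DIM, T = 63, 16, 8, 30  # illustrative sizes

class HierarchicalMotionVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_global = nn.GRU(FRAME_DIM, GLOBAL_DIM, batch_first=True)
        self.enc_local = nn.Linear(FRAME_DIM + GLOBAL_DIM, 2 * LOCAL_DIM)
        self.dec = nn.Linear(GLOBAL_DIM + LOCAL_DIM, FRAME_DIM)

    def forward(self, motion):                  # motion: (B, T, FRAME_DIM)
        _, h = self.enc_global(motion)          # one code for the whole clip
        g = h[-1].unsqueeze(1).expand(-1, motion.size(1), -1)
        mu, logvar = self.enc_local(torch.cat([motion, g], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # per-frame codes
        return self.dec(torch.cat([g, z], -1))  # reconstructed motion

model = HierarchicalMotionVAE()
recon = model(torch.zeros(2, T, FRAME_DIM))    # (2, 30, 63)
```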
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques (a reward sketch follows this entry).
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
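As referenced in the entry above, an illustrative sketch of the AMP-style reward structure, where a task reward is blended with a style reward derived from an adversarial discriminator trained on the motion clips; the discriminator shape, state encoding, and weights are assumptions.

```python
# Sketch of combining a task reward with an adversarial style reward.
import torch
import torch.nn as nn

STATE_DIM = 64                          # hypothetical state encoding size
discriminator = nn.Sequential(          # scores (state, next_state) pairs
    nn.Linear(2 * STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def style_reward(s, s_next):
    d = discriminator(torch.cat([s, s_next], dim=-1))
    # AMP-style squashing of the discriminator score into [0, 1].
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def total_reward(task_r, s, s_next, w_task=0.5, w_style=0.5):
    return w_task * task_r + w_style * style_reward(s, s_next)

s, s_next = torch.zeros(1, STATE_DIM), torch.zeros(1, STATE_DIM)
print(total_reward(torch.tensor(1.0), s, s_next))
```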
- Self-supervised Motion Learning from Static Images [36.85209332144106]
Motion from Static Images (MoSI) learns to encode motion information.
We demonstrate that MoSI can discover regions with large motion even without fine-tuning on the downstream datasets.
arXiv Detail & Related papers (2021-04-01T03:55:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.