CALM: Conditional Adversarial Latent Models for Directable Virtual
Characters
- URL: http://arxiv.org/abs/2305.02195v1
- Date: Tue, 2 May 2023 09:01:44 GMT
- Title: CALM: Conditional Adversarial Latent Models for Directable Virtual
Characters
- Authors: Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue
Bin Peng
- Abstract summary: We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
- Score: 71.66218592749448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present Conditional Adversarial Latent Models (CALM), an
approach for generating diverse and directable behaviors for user-controlled
interactive virtual characters. Using imitation learning, CALM learns a
representation of movement that captures the complexity and diversity of human
motion, and enables direct control over character movements. The approach
jointly learns a control policy and a motion encoder that reconstructs key
characteristics of a given motion without merely replicating it. The results
show that CALM learns a semantic motion representation, enabling control over
the generated motions and style-conditioning for higher-level task training.
Once trained, the character can be controlled using intuitive interfaces, akin
to those found in video games.
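The abstract describes jointly training a motion encoder and a latent-conditioned low-level policy with an adversarial imitation objective. The snippet below is a minimal, hypothetical sketch of that structure; the network sizes, tensor layouts, and the conditional-discriminator loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal CALM-style sketch (hypothetical shapes and losses).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, LATENT_DIM, CLIP_LEN = 64, 28, 32, 10  # assumed sizes

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(), nn.Linear(256, out))

encoder = mlp(STATE_DIM * CLIP_LEN, LATENT_DIM)      # motion clip -> latent z
policy = mlp(STATE_DIM + LATENT_DIM, ACTION_DIM)     # (state, z) -> action
disc = mlp(2 * STATE_DIM + LATENT_DIM, 1)            # (s, s', z) -> real/fake logit

def encode(clip):
    # Encoders of this kind often place latents on the unit sphere.
    return F.normalize(encoder(clip.flatten(1)), dim=-1)

# One illustrative adversarial step on dummy data.
clip = torch.randn(8, CLIP_LEN, STATE_DIM)           # reference motion clips
z = encode(clip)
ref_s, ref_s2 = clip[:, 0], clip[:, 1]               # transition from the clip
sim_s = torch.randn(8, STATE_DIM)                    # states from a simulator rollout
sim_a = policy(torch.cat([sim_s, z], dim=-1))        # action the simulator would execute
sim_s2 = sim_s + 0.1 * torch.randn_like(sim_s)       # stand-in for the next sim state

real_logit = disc(torch.cat([ref_s, ref_s2, z], dim=-1))
fake_logit = disc(torch.cat([sim_s, sim_s2, z.detach()], dim=-1))
d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
          + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))

# A GAIL-style "style" reward the RL update could maximise per transition.
style_reward = -torch.log(1.0 - torch.sigmoid(fake_logit) + 1e-6)
```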
Related papers
- Taming Diffusion Probabilistic Models for Character Control [46.52584236101806]
We present a novel character control framework that responds in real-time to a variety of user-supplied control signals.
At the heart of our method lies a transformer-based Conditional Autoregressive Motion Diffusion Model.
Our work represents the first model that enables real-time generation of high-quality, diverse character animations.
arXiv Detail & Related papers (2024-04-23T15:20:17Z)
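The entry above describes a conditional autoregressive motion diffusion model driven by user control signals. Below is a heavily simplified, hypothetical sketch of that sampling pattern: a reverse-diffusion loop that denoises the next motion window conditioned on the previous poses and the current control signal. The denoiser, schedules, and shapes are assumptions for illustration only.

```python
# Hypothetical autoregressive sampling from a conditional motion diffusion model.
import torch

T = 50                                      # diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x_t, t, past_motion, control):
    # Placeholder for the transformer that predicts the added noise.
    return torch.zeros_like(x_t)

def sample_window(past_motion, control, frames=16, pose_dim=69):
    """Denoise one window of future poses, conditioned on history and control."""
    x = torch.randn(frames, pose_dim)
    for t in reversed(range(T)):
        eps = denoiser(x, t, past_motion, control)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Autoregressive rollout: each generated window becomes the next history.
history = torch.zeros(16, 69)
for step in range(3):
    control = torch.tensor([1.0, 0.0])      # e.g. a desired heading (assumed)
    history = sample_window(history, control)
```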
- MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations [25.630268570049708]
MoConVQ is a novel unified framework for physics-based motion control leveraging scalable discrete representations.
Our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples.
arXiv Detail & Related papers (2023-10-16T09:09:02Z)
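MoConVQ builds on scalable discrete (vector-quantized) motion representations. The snippet below is a generic vector-quantization step of the kind such frameworks rely on, with a nearest codebook lookup and a straight-through gradient; the codebook size and feature dimensions are illustrative assumptions, not the paper's code.

```python
# Generic VQ step: quantize motion features against a learned codebook.
import torch

def vector_quantize(features, codebook):
    """features: (N, D); codebook: (K, D). Returns quantized features and code indices."""
    # Squared distances between every feature and every codebook entry.
    d = (features.pow(2).sum(1, keepdim=True)
         - 2 * features @ codebook.t()
         + codebook.pow(2).sum(1))
    idx = d.argmin(dim=1)
    quantized = codebook[idx]
    # Straight-through estimator: values come from the codebook, gradients
    # flow back to the continuous features.
    quantized = features + (quantized - features).detach()
    return quantized, idx

codebook = torch.randn(512, 64, requires_grad=True)   # assumed 512 codes of dim 64
feats = torch.randn(32, 64, requires_grad=True)       # per-frame motion features
q, idx = vector_quantize(feats, codebook)
commit_loss = torch.mean((feats - q.detach()) ** 2)   # VQ-VAE-style commitment term
```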
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
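The entry above first trains a motion imitator and then distills its skills into a latent motion representation. A minimal, hypothetical rendering of that second step is sketched below: a latent-space student policy trained to reproduce a frozen teacher's actions, with a KL term keeping the latent well behaved. The networks, shapes, and loss weighting are assumptions.

```python
# Hypothetical distillation step: compress a frozen imitator into a latent-space policy.
import torch
import torch.nn as nn

STATE_DIM, GOAL_DIM, ACTION_DIM, LATENT_DIM = 64, 32, 28, 16   # assumed sizes

teacher = nn.Linear(STATE_DIM + GOAL_DIM, ACTION_DIM)          # stands in for the imitator
encoder = nn.Linear(STATE_DIM + GOAL_DIM, 2 * LATENT_DIM)      # outputs mean and log-variance
decoder = nn.Linear(STATE_DIM + LATENT_DIM, ACTION_DIM)        # latent-conditioned policy
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=3e-4)

state = torch.randn(256, STATE_DIM)
goal = torch.randn(256, GOAL_DIM)

with torch.no_grad():
    target_action = teacher(torch.cat([state, goal], dim=-1))  # the teacher stays frozen

mu, log_var = encoder(torch.cat([state, goal], dim=-1)).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()          # reparameterised latent
student_action = decoder(torch.cat([state, z], dim=-1))

bc_loss = ((student_action - target_action) ** 2).mean()       # match the imitator's actions
kl_loss = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(-1).mean()
loss = bc_loss + 1e-3 * kl_loss                                # weighting is an assumption
opt.zero_grad()
loss.backward()
opt.step()
```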
- A GAN-Like Approach for Physics-Based Imitation Learning and Interactive Character Control [2.2082422928825136]
We present a simple and intuitive approach for interactive control of physically simulated characters.
Our work builds upon generative adversarial networks (GANs) and reinforcement learning.
We highlight the applicability of our approach in a range of imitation and interactive control tasks.
arXiv Detail & Related papers (2021-05-21T00:03:29Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives can be specified by relatively simple reward functions, while the low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
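AMP combines a simple task reward with a style reward produced by an adversarial motion prior, a discriminator trained on the unstructured motion clips. The snippet below is a schematic of that reward composition; the discriminator output, the specific reward transform, and the weights are placeholders rather than the paper's exact formulation.

```python
# Schematic AMP-style reward: task objective plus discriminator-based style term.
import torch

def style_reward(disc_logit):
    # One common adversarial-imitation transform; AMP itself uses a
    # least-squares variant, so treat this as a stand-in.
    return -torch.log(1.0 - torch.sigmoid(disc_logit) + 1e-6)

def combined_reward(task_r, disc_logit, w_task=0.5, w_style=0.5):
    return w_task * task_r + w_style * style_reward(disc_logit)

# Example: a heading-following task reward plus a style score for one transition.
task_r = torch.tensor(0.8)        # e.g. exp(-|desired velocity - actual velocity|^2)
logit = torch.tensor(1.2)         # discriminator output for a transition (s, s')
print(combined_reward(task_r, logit))
```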
- Character Controllers Using Motion VAEs [9.806910643086045]
We learn data-driven generative models of human movement using Motion VAEs.
Planning or control algorithms can then use the learned latent action space to generate desired motions.
arXiv Detail & Related papers (2021-03-26T05:51:41Z)
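Motion VAEs treat the latent variable of an autoregressive conditional VAE as an action space: at each frame a controller picks a latent code and the decoder turns it into the next pose. The loop below is a hypothetical illustration of that interface; the decoder and the random "controller" are placeholders.

```python
# Hypothetical control loop over a Motion-VAE-style latent action space.
import torch
import torch.nn as nn

POSE_DIM, LATENT_DIM = 63, 32                          # assumed sizes

decoder = nn.Linear(POSE_DIM + LATENT_DIM, POSE_DIM)   # stands in for the trained decoder

def step(prev_pose, z):
    """Autoregressive decode: next pose from the previous pose and a latent action."""
    return decoder(torch.cat([prev_pose, z], dim=-1))

pose = torch.zeros(POSE_DIM)
trajectory = [pose]
for t in range(30):
    z = torch.randn(LATENT_DIM)    # a planner or RL policy would choose this latent
    pose = step(pose, z)
    trajectory.append(pose)
```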
- Hierarchical Contrastive Motion Learning for Video Action Recognition [100.9807616796383]
We present hierarchical contrastive motion learning, a new self-supervised learning framework to extract effective motion representations from raw video frames.
Our approach progressively learns a hierarchy of motion features that correspond to different abstraction levels in a network.
Our motion learning module is lightweight and can be flexibly embedded into various backbone networks.
arXiv Detail & Related papers (2020-07-20T17:59:22Z)
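The last entry learns motion features with a self-supervised contrastive objective applied at several levels of a network hierarchy. A generic InfoNCE-style loss of the kind such frameworks use is sketched below; the feature shapes and the pairing of positives are illustrative assumptions.

```python
# Generic InfoNCE contrastive loss over motion features (illustrative only).
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """anchor, positive: (N, D) feature batches; matching rows are positive pairs."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature       # similarity of every anchor to every candidate
    labels = torch.arange(a.size(0))       # diagonal entries are the positive pairs
    return F.cross_entropy(logits, labels)

# Features at one level of the hierarchy; in a hierarchical setup each level
# would contribute its own loss term.
feat_a = torch.randn(128, 256)
feat_b = torch.randn(128, 256)
loss = info_nce(feat_a, feat_b)
```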