AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control
- URL: http://arxiv.org/abs/2104.02180v1
- Date: Mon, 5 Apr 2021 22:43:14 GMT
- Title: AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control
- Authors: Xue Bin Peng, Ze Ma, Pieter Abbeel, Sergey Levine, Angjoo Kanazawa
- Abstract summary: We propose a fully automated approach, based on adversarial imitation learning, that removes the need to manually design imitation objectives or select which motion a character should imitate in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
- Score: 145.61135774698002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthesizing graceful and life-like behaviors for physically simulated
characters has been a fundamental challenge in computer animation. Data-driven
methods that leverage motion tracking are a prominent class of techniques for
producing high fidelity motions for a wide range of behaviors. However, the
effectiveness of these tracking-based methods often hinges on carefully
designed objective functions, and when applied to large and diverse motion
datasets, these methods require significant additional machinery to select the
appropriate motion for the character to track in a given scenario. In this
work, we propose to obviate the need to manually design imitation objectives
and mechanisms for motion selection by utilizing a fully automated approach
based on adversarial imitation learning. High-level task objectives that the
character should perform can be specified by relatively simple reward
functions, while the low-level style of the character's behaviors can be
specified by a dataset of unstructured motion clips, without any explicit clip
selection or sequencing. These motion clips are used to train an adversarial
motion prior, which specifies style-rewards for training the character through
reinforcement learning (RL). The adversarial RL procedure automatically selects
which motion to perform, dynamically interpolating and generalizing from the
dataset. Our system produces high-quality motions that are comparable to those
achieved by state-of-the-art tracking-based techniques, while also being able
to easily accommodate large datasets of unstructured motion clips. Composition
of disparate skills emerges automatically from the motion prior, without
requiring a high-level motion planner or other task-specific annotations of the
motion clips. We demonstrate the effectiveness of our framework on a diverse
cast of complex simulated characters and a challenging suite of motor control
tasks.
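To make the adversarial motion prior concrete, the sketch below shows the core mechanism the abstract describes: a discriminator is trained to distinguish state transitions drawn from the motion dataset from those produced by the policy, and its score is mapped to a bounded style reward that is summed with the task reward during RL. This is a minimal PyTorch sketch under those assumptions, not the authors' released implementation; the network architecture, the helper names (`Discriminator`, `style_reward`, `combined_reward`), and the 0.5/0.5 reward weights are illustrative.

```python
# Minimal sketch of an AMP-style reward (assumed PyTorch implementation).
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores a state transition (s, s'); trained with a least-squares GAN
    objective to output ~1 on dataset transitions and ~-1 on policy ones."""
    def __init__(self, obs_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)

def style_reward(disc: Discriminator, s: torch.Tensor,
                 s_next: torch.Tensor) -> torch.Tensor:
    """Map the discriminator score d to a bounded style reward:
    r_style = max(0, 1 - 0.25 * (d - 1)^2), as described in the paper."""
    with torch.no_grad():
        d = disc(s, s_next)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def combined_reward(r_task: torch.Tensor, r_style: torch.Tensor,
                    w_task: float = 0.5, w_style: float = 0.5) -> torch.Tensor:
    """Total RL reward: a weighted sum of the task and style terms.
    The weights here are illustrative placeholders."""
    return w_task * r_task + w_style * r_style
```

During training, the discriminator is updated to separate dataset transitions from policy transitions while the policy maximizes the combined reward, so motion selection and interpolation emerge from the learned style reward rather than from explicit clip selection or sequencing.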
Related papers
- MotionCom: Automatic and Motion-Aware Image Composition with LLM and Video Diffusion Prior [51.672193627686]
MotionCom is a training-free, motion-aware, diffusion-based image composition method.
It enables seamless integration of target objects into new scenes with dynamically coherent results.
arXiv Detail & Related papers (2024-09-16T08:44:17Z)
- FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning [19.491968038335944]
We introduce a self-supervised, structured representation and generation method that extracts spatial-temporal relationships in periodic or quasi-periodic motions.
Our work opens new possibilities for future advancements in general motion representation and learning algorithms.
arXiv Detail & Related papers (2024-02-21T13:59:21Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate human motions from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Motion In-Betweening with Phase Manifolds [29.673541655825332]
This paper introduces a novel data-driven motion in-betweening system that reaches target character poses by making use of phase variables learned by a Periodic Autoencoder.
Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights.
arXiv Detail & Related papers (2023-08-24T12:56:39Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- CALM: Conditional Adversarial Latent Models for Directable Virtual Characters [71.66218592749448]
We present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
Using imitation learning, CALM learns a representation of movement that captures the complexity of human motion, and enables direct control over character movements.
arXiv Detail & Related papers (2023-05-02T09:01:44Z)
- Learning Variational Motion Prior for Video-based Motion Capture [31.79649766268877]
We present a novel variational motion prior (VMP) learning approach for video-based motion capture.
Our framework can effectively reduce temporal jittering and failure modes in frame-wise pose estimation.
Experiments over both public datasets and in-the-wild videos have demonstrated the efficacy and generalization capability of our framework.
arXiv Detail & Related papers (2022-10-27T02:45:48Z)
- Character Controllers Using Motion VAEs [9.806910643086045]
We learn data-driven generative models of human movement using Motion VAEs.
Planning or control algorithms can then use this action space to generate desired motions.
arXiv Detail & Related papers (2021-03-26T05:51:41Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.