UniCon: Universal Neural Controller For Physics-based Character Motion
- URL: http://arxiv.org/abs/2011.15119v1
- Date: Mon, 30 Nov 2020 18:51:16 GMT
- Title: UniCon: Universal Neural Controller For Physics-based Character Motion
- Authors: Tingwu Wang, Yunrong Guo, Maria Shugrina, Sanja Fidler
- Abstract summary: We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by training on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
- Score: 70.45421551688332
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The field of physics-based animation is gaining importance due to the
increasing demand for realism in video games and films, and has recently seen
wide adoption of data-driven techniques, such as deep reinforcement learning
(RL), which learn control from (human) demonstrations. While RL has shown
impressive results at reproducing individual motions and interactive
locomotion, existing methods are limited in their ability to generalize to new
motions and their ability to compose a complex motion sequence interactively.
In this paper, we propose a physics-based universal neural controller (UniCon)
that learns to master thousands of motions with different styles by training on
large-scale motion datasets. UniCon is a two-level framework that consists of a
high-level motion scheduler and an RL-powered low-level motion executor, which
is our key innovation. By systematically analyzing existing multi-motion RL
frameworks, we introduce a novel objective function and training techniques
which make a significant leap in performance. Once trained, our motion executor
can be combined with different high-level schedulers without the need for
retraining, enabling a variety of real-time interactive applications. We show
that UniCon can support keyboard-driven control, compose motion sequences drawn
from a large pool of locomotion and acrobatics skills and teleport a person
captured on video to a physics-based virtual avatar. Numerical and qualitative
results demonstrate a significant improvement in efficiency, robustness and
generalizability of UniCon over prior state-of-the-art, showcasing
transferability to unseen motions, unseen humanoid models and unseen
perturbations.
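To make the two-level design concrete, the following is a minimal Python sketch of how a trained low-level motion executor might be paired with an interchangeable high-level scheduler. All names here (MotionExecutor, KeyboardScheduler, run) are illustrative assumptions rather than the paper's actual API; the sketch only shows the interface idea that the executor consumes target frames from whichever scheduler is plugged in, with no retraining.

```python
import numpy as np


class MotionExecutor:
    """Low-level controller (assumed RL-trained): tracks short-horizon targets."""

    def __init__(self, policy):
        # `policy` is assumed to be a callable mapping a concatenated
        # (simulator state, target frame) vector to an action vector.
        self.policy = policy

    def step(self, sim_state, target_frame):
        # Condition on the character's physical state plus the next reference
        # frame supplied by whichever high-level scheduler is plugged in.
        return self.policy(np.concatenate([sim_state, target_frame]))


class KeyboardScheduler:
    """High-level scheduler: selects reference frames from key presses."""

    def __init__(self, motion_library, default_clip="walk_forward"):
        # motion_library: dict mapping clip names to arrays of reference frames.
        self.library = motion_library
        self.clip, self.t = default_clip, 0

    def next_target(self, key_pressed):
        if key_pressed in self.library:  # switch clips on a recognized key
            self.clip, self.t = key_pressed, 0
        frames = self.library[self.clip]
        frame = frames[self.t % len(frames)]
        self.t += 1
        return frame


def run(executor, scheduler, simulator, key_stream):
    # The same trained executor can be reused with any scheduler that emits
    # target frames -- composing skills or retargeting video needs no retraining.
    state = simulator.reset()
    for key in key_stream:
        target = scheduler.next_target(key)
        action = executor.step(state, target)
        state = simulator.step(action)
    return state
```

Swapping KeyboardScheduler for, say, a video-pose scheduler would only change where the target frames come from; the trained executor stays fixed, which is the reuse property the abstract emphasizes.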
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to the function generation modules is our novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation and motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation [55.47473138423572]
We introduce SuperPADL, a scalable framework for physics-based text-to-motion.
SuperPADL trains controllers on thousands of diverse motion clips using RL and supervised learning.
Our controller is trained on a dataset containing over 5000 skills and runs in real time on a consumer GPU.
arXiv Detail & Related papers (2024-07-15T07:07:11Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations [25.630268570049708]
MoConVQ is a novel unified framework for physics-based motion control leveraging scalable discrete representations.
Our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples.
arXiv Detail & Related papers (2023-10-16T09:09:02Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics [21.00283279991885]
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.
arXiv Detail & Related papers (2023-09-24T20:25:59Z)
- Human MotionFormer: Transferring Human Motions with Vision Transformers [73.48118882676276]
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis.
We propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching.
Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-02-22T11:42:44Z)
- Advanced Skills through Multiple Adversarial Motion Priors in Reinforcement Learning [10.445369597014533]
We present an approach to augment the concept of adversarial motion prior-based reinforcement learning.
We show that multiple styles and skills can be learned simultaneously without notable performance differences.
Our approach is validated in several real-world experiments with a wheeled-legged quadruped robot.
arXiv Detail & Related papers (2022-03-23T09:24:06Z)