Residual Force Control for Agile Human Behavior Imitation and Extended
Motion Synthesis
- URL: http://arxiv.org/abs/2006.07364v2
- Date: Thu, 22 Oct 2020 17:57:12 GMT
- Title: Residual Force Control for Agile Human Behavior Imitation and Extended
Motion Synthesis
- Authors: Ye Yuan, Kris Kitani
- Abstract summary: Reinforcement learning has shown great promise for synthesizing realistic human behaviors by learning humanoid control policies from motion capture data.
However, it is still very challenging to reproduce sophisticated human skills like ballet dance or to stably imitate long-term human behaviors with complex transitions.
We propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space.
- Score: 32.22704734791378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning has shown great promise for synthesizing realistic
human behaviors by learning humanoid control policies from motion capture data.
However, it is still very challenging to reproduce sophisticated human skills
like ballet dance, or to stably imitate long-term human behaviors with complex
transitions. The main difficulty lies in the dynamics mismatch between the
humanoid model and real humans. That is, motions of real humans may not be
physically possible for the humanoid model. To overcome the dynamics mismatch,
we propose a novel approach, residual force control (RFC), that augments a
humanoid control policy by adding external residual forces into the action
space. During training, the RFC-based policy learns to apply residual forces to
the humanoid to compensate for the dynamics mismatch and better imitate the
reference motion. Experiments on a wide range of dynamic motions demonstrate
that our approach outperforms state-of-the-art methods in terms of convergence
speed and the quality of learned motions. Notably, we showcase a physics-based
virtual character empowered by RFC that can perform highly agile ballet dance
moves such as pirouette, arabesque and jeté. Furthermore, we propose a
dual-policy control framework, where a kinematic policy and an RFC-based policy
work in tandem to synthesize multi-modal infinite-horizon human motions without
any task guidance or user input. Our approach is the first humanoid control
method that successfully learns from a large-scale human motion dataset
(Human3.6M) and generates diverse long-term motions. Code and videos are
available at https://www.ye-yuan.com/rfc.
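To make the core idea above concrete, here is a minimal sketch of an RFC-style augmented action space: the policy outputs ordinary joint torques plus a residual external wrench that the simulator applies on the humanoid's behalf. The RFCPolicy class, the placeholder sim interface (set_joint_torques, set_root_external_wrench), and the choice of a single 6D residual on the root body are illustrative assumptions, not the paper's actual architecture or API.

    import numpy as np

    # Sketch of RFC-style action-space augmentation (illustration only).
    # The policy outputs joint torques plus a residual force/torque; during
    # imitation training the residual compensates for dynamics mismatch.

    NUM_JOINTS = 25       # assumed humanoid actuation dimension
    RESIDUAL_DIM = 6      # assumed 3D force + 3D torque on the root body


    class RFCPolicy:
        """Toy linear policy over the augmented action [torques, residual]."""

        def __init__(self, obs_dim: int):
            act_dim = NUM_JOINTS + RESIDUAL_DIM
            self.weights = np.zeros((act_dim, obs_dim))

        def act(self, obs: np.ndarray):
            action = self.weights @ obs
            torques = action[:NUM_JOINTS]     # ordinary joint actuation
            residual = action[NUM_JOINTS:]    # external residual wrench
            return torques, residual


    def step_with_residual(sim, torques, residual, residual_scale=1.0):
        """Apply joint torques plus the residual wrench, then advance physics.

        `sim` is a hypothetical simulator handle; in a MuJoCo-style backend the
        residual could be written into the root body's external-force buffer
        before stepping.
        """
        sim.set_joint_torques(torques)                           # hypothetical API
        sim.set_root_external_wrench(residual_scale * residual)  # hypothetical API
        sim.step()

During training this would be combined with an imitation reward; penalizing the magnitude of the residual is a natural way to keep the policy from over-relying on the "helping hand" forces, though the exact regularization used in the paper is not restated here.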
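The dual-policy framework can likewise be pictured as a closed loop in which a kinematic policy proposes the next reference pose from recent motion history and the RFC-based policy tracks that reference in simulation. The loop below is a hedged guess at that interaction; every name, the observation layout, and the feedback of simulated poses into the kinematic policy are assumptions for illustration rather than the paper's method.

    import numpy as np


    def synthesize(kinematic_policy, rfc_policy, sim, horizon=10_000):
        """Hedged sketch of a dual-policy loop for long-horizon motion synthesis.

        kinematic_policy: recent poses -> next reference pose (placeholder)
        rfc_policy:       observation  -> (torques, residual wrench) (placeholder)
        sim:              hypothetical physics-simulator handle
        """
        history = [sim.get_pose()]
        for _ in range(horizon):
            reference = kinematic_policy(history)                # propose next target pose
            obs = np.concatenate([sim.get_state(), reference])   # tracking observation
            torques, residual = rfc_policy(obs)
            sim.set_joint_torques(torques)                       # hypothetical API
            sim.set_root_external_wrench(residual)               # hypothetical API
            sim.step()
            history.append(sim.get_pose())                       # close the loop
        return history

Because the kinematic policy keeps proposing new references from whatever has been simulated so far, such a loop can in principle run for an unbounded horizon without task guidance or user input, which matches the infinite-horizon synthesis claim in the abstract.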
Related papers
- ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills [46.16771391136412]
ASAP is a two-stage framework designed to tackle the dynamics mismatch and enable agile humanoid whole-body skills.
In the first stage, we pre-train motion tracking policies in simulation using retargeted human motion data.
In the second stage, we deploy the policies in the real world and collect real-world data to train a delta (residual) action model.
arXiv Detail & Related papers (2025-02-03T08:22:46Z)
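As a loose illustration of the "delta (residual) action model" mentioned for ASAP, the snippet below adds a learned, state- and action-conditioned correction on top of the simulation-pretrained policy's output. The linear model, its inputs, and the composition are assumptions made for illustration; ASAP's actual architecture and training objective are only paraphrased from the summary above.

    import numpy as np


    class DeltaActionModel:
        """Toy linear residual-action model: delta = W @ [state, base_action]."""

        def __init__(self, state_dim: int, action_dim: int):
            self.W = np.zeros((action_dim, state_dim + action_dim))

        def __call__(self, state: np.ndarray, base_action: np.ndarray) -> np.ndarray:
            return self.W @ np.concatenate([state, base_action])


    def corrected_action(policy, delta_model, state):
        """Compose the sim-pretrained policy with the learned action correction."""
        base = policy(state)              # action from the stage-one policy
        return base + delta_model(state, base)

Per the summary, such a model would be fit on real-world rollouts collected in the second stage so that the corrected actions account for the gap between simulated and real dynamics.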
- ExBody2: Advanced Expressive Humanoid Whole-Body Control [16.69009772546575]
We propose ExBody2, a whole-body tracking framework that can control a humanoid to mimic human motions.
The model is trained in simulation with Reinforcement Learning and then transferred to the real world.
We conduct experiments on two humanoid platforms and demonstrate the superiority of our approach against state-of-the-art methods.
arXiv Detail & Related papers (2024-12-17T18:59:51Z)
- Learning Multi-Modal Whole-Body Control for Real-World Humanoid Robots [13.229028132036321]
Masked Humanoid Controller (MHC) supports standing, walking, and mimicry of whole and partial-body motions.
MHC imitates partially masked motions from a library of behaviors spanning standing, walking, optimized reference trajectories, re-targeted video clips, and human motion capture data.
We demonstrate sim-to-real transfer on the real-world Digit V3 humanoid robot.
arXiv Detail & Related papers (2024-07-30T09:10:24Z)
- Expressive Whole-Body Control for Humanoid Robots [20.132927075816742]
We learn a whole-body control policy on a human-sized robot to mimic human motions as realistically as possible.
With training in simulation and Sim2Real transfer, our policy can control a humanoid robot to walk in different styles, shake hands with humans, and even dance with a human in the real world.
arXiv Detail & Related papers (2024-02-26T18:09:24Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, which encourages synthesized motions to maintain desired distances between joint pairs.
We demonstrate that the joint-pair distances for human interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Skeleton2Humanoid: Animating Simulated Characters for Physically-plausible Motion In-betweening [59.88594294676711]
Modern deep learning based motion synthesis approaches barely consider the physical plausibility of synthesized motions.
We propose a system, Skeleton2Humanoid, which performs physics-oriented motion correction at test time.
Experiments on the challenging LaFAN1 dataset show our system can outperform prior methods significantly in terms of both physical plausibility and accuracy.
arXiv Detail & Related papers (2022-10-09T16:15:34Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-robot handover pipelines typically do not plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.