Skeleton2Humanoid: Animating Simulated Characters for
Physically-plausible Motion In-betweening
- URL: http://arxiv.org/abs/2210.04294v1
- Date: Sun, 9 Oct 2022 16:15:34 GMT
- Title: Skeleton2Humanoid: Animating Simulated Characters for
Physically-plausible Motion In-betweening
- Authors: Yunhao Li, Zhenbo Yu, Yucheng Zhu, Bingbing Ni, Guangtao Zhai, Wei
Shen
- Abstract summary: Modern deep learning based motion synthesis approaches barely consider the physical plausibility of synthesized motions.
We propose a system "Skeleton2Humanoid" which performs physics-oriented motion correction at test time.
Experiments on the challenging LaFAN1 dataset show our system can outperform prior methods significantly in terms of both physical plausibility and accuracy.
- Score: 59.88594294676711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human motion synthesis is a long-standing problem with various applications
in digital twins and the Metaverse. However, modern deep learning based motion
synthesis approaches barely consider the physical plausibility of synthesized
motions and consequently they usually produce unrealistic human motions. In
order to solve this problem, we propose a system "Skeleton2Humanoid" which
performs physics-oriented motion correction at test time by regularizing
synthesized skeleton motions in a physics simulator. Concretely, our system
consists of three sequential stages: (I) test time motion synthesis network
adaptation, (II) skeleton to humanoid matching and (III) motion imitation based
on reinforcement learning (RL). Stage I introduces a test time adaptation
strategy, which improves the physical plausibility of synthesized human
skeleton motions by optimizing skeleton joint locations. Stage II applies an
analytical inverse kinematics strategy, which converts the optimized human
skeleton motions into humanoid robot motions in a physics simulator; the
converted humanoid robot motions then serve as reference motions for the RL
policy to imitate. Stage III introduces a curriculum residual force control
policy, which drives the humanoid robot to mimic the complex converted reference
motions in accordance with physical laws. We verify our system on a typical
human motion synthesis task, motion-in-betweening. Experiments on the
challenging LaFAN1 dataset show our system can outperform prior methods
significantly in terms of both physical plausibility and accuracy. Code will be
released for research purposes at:
https://github.com/michaelliyunhao/Skeleton2Humanoid
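To make the three-stage control flow concrete, here is a minimal NumPy sketch of the hand-off the abstract describes: Stage I refines joint positions against a plausibility loss, Stage II converts the refined positions into per-bone rotations a humanoid can track, and Stage III anneals a residual "helper" force budget toward zero. Everything in the sketch is an assumption for illustration: the loss terms, the direct optimization of joint positions (the paper adapts the synthesis network itself), the toy bone-alignment IK, and the linear force schedule are stand-ins, and the actual system runs an RL policy inside a physics simulator.

```python
import numpy as np

# Illustrative shapes: a motion clip is a (T, J, 3) array of joint
# positions with z up. All names, losses, and the toy skeleton here are
# hypothetical stand-ins, not the authors' released code.
T, J = 60, 22
rng = np.random.default_rng(0)

def plausibility_loss(x):
    # Stand-ins for physics-oriented regularizers: temporal jerk and
    # below-ground joint penetration. The paper's actual losses differ.
    jerk = np.mean(np.square(np.diff(x, n=2, axis=0)))
    pen = np.mean(np.square(np.clip(-x[..., 2], 0.0, None)))
    return jerk + pen

def plausibility_grad(x):
    # Analytic gradient of plausibility_loss.
    g = np.zeros_like(x)
    d2 = np.diff(x, n=2, axis=0)       # second difference along time
    r = 2.0 * d2 / d2.size
    g[:-2] += r                        # adjoint of the [1, -2, 1] stencil
    g[1:-1] -= 2.0 * r
    g[2:] += r
    pen = np.clip(-x[..., 2], 0.0, None)
    g[..., 2] -= 2.0 * pen / pen.size
    return g

def stage1_adapt(x, steps=300, lr=10.0):
    # Stage I (sketch): test-time refinement by gradient descent on a
    # plausibility loss. The paper adapts the synthesis network's weights;
    # optimizing joint locations directly keeps the sketch short.
    x = x.copy()
    for _ in range(steps):
        x -= lr * plausibility_grad(x)
    return x

def rotation_from_to(u, v):
    # Rodrigues rotation taking unit vector u onto unit vector v.
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    a, c = np.cross(u, v), float(np.dot(u, v))
    s = np.linalg.norm(a)
    if s < 1e-8:                       # parallel; antiparallel case omitted
        return np.eye(3)
    k = a / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def stage2_ik(x, parents, rest_dirs):
    # Stage II (sketch): analytical IK stand-in. Each bone's rotation aligns
    # its rest-pose direction with the refined joint positions. A real
    # skeleton-to-humanoid retarget also handles bone lengths, joint limits
    # and the floating root.
    rots = np.tile(np.eye(3), (x.shape[0], len(parents), 1, 1))
    for t in range(x.shape[0]):
        for j, p in enumerate(parents):
            if p >= 0:
                rots[t, j] = rotation_from_to(rest_dirs[j], x[t, j] - x[t, p])
    return rots

def stage3_force_curriculum(num_stages=5, f_max=200.0):
    # Stage III (sketch): curriculum over the residual force budget. The RL
    # policy (trained in a physics simulator, not shown) may add an external
    # helper force at the root; annealing the budget to zero pushes the final
    # policy to track the reference with joint torques alone.
    return [f_max * (1.0 - s / (num_stages - 1)) for s in range(num_stages)]

if __name__ == "__main__":
    raw = 0.02 * rng.normal(size=(T, J, 3)).cumsum(axis=0)  # fake clip
    refined = stage1_adapt(raw)
    print(f"plausibility: {plausibility_loss(raw):.6f} -> "
          f"{plausibility_loss(refined):.6f}")
    parents = [-1] + list(range(J - 1))                     # toy chain
    rest_dirs = np.tile([0.0, 0.0, 1.0], (J, 1))
    ref_rots = stage2_ik(refined, parents, rest_dirs)
    print("reference rotations:", ref_rots.shape,
          "| force budgets:", stage3_force_curriculum())
```

The design point the sketch preserves is that the residual force is a training crutch: a generous budget early in the curriculum makes hard reference motions trackable, and shrinking it toward zero forces the final policy to rely on physically valid joint torques.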
Related papers
- ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills [46.16771391136412]
ASAP is a two-stage framework designed to tackle the dynamics mismatch and enable agile humanoid whole-body skills.
In the first stage, we pre-train motion tracking policies in simulation using retargeted human motion data.
In the second stage, we deploy the policies in the real world and collect real-world data to train a delta (residual) action model.
arXiv Detail & Related papers (2025-02-03T08:22:46Z)
- Learning Speed-Adaptive Walking Agent Using Imitation Learning with Physics-Informed Simulation [0.0]
We create a skeletal humanoid agent capable of adapting to varying walking speeds while maintaining biomechanically realistic motions.
The framework combines a synthetic data generator, which produces biomechanically plausible gait kinematics from open-source biomechanics data, and a training system that uses adversarial imitation learning to train the agent's walking policy.
arXiv Detail & Related papers (2024-12-05T07:55:58Z)
- Morph: A Motion-free Physics Optimization Framework for Human Motion Generation [25.51726849102517]
Experiments on text-to-motion and music-to-dance generation tasks demonstrate that our framework achieves state-of-the-art motion generation quality while drastically improving physical plausibility.
arXiv Detail & Related papers (2024-11-22T14:09:56Z)
- PhysReaction: Physically Plausible Real-Time Humanoid Reaction Synthesis via Forward Dynamics Guided 4D Imitation [19.507619255773125]
We propose a Forward Dynamics Guided 4D Imitation method to generate physically plausible human-like reactions.
The learned policy is capable of generating physically plausible and human-like reactions in real time, significantly improving the speed (x33) and quality of reactions.
arXiv Detail & Related papers (2024-04-01T12:21:56Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- GraMMaR: Ground-aware Motion Model for 3D Human Motion Reconstruction [61.833152949826946]
We propose a novel Ground-aware Motion Model for 3D Human Motion Reconstruction, named GraMMaR.
GraMMaR learns the distribution of transitions in both pose and in the interaction between every joint and the ground plane at each time step of a motion sequence.
It is trained to explicitly promote consistency between the motion and distance change towards the ground.
arXiv Detail & Related papers (2023-06-29T07:22:20Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Physics-based Human Motion Estimation and Synthesis from Videos [0.0]
We propose a framework for training generative models of physically plausible human motion directly from monocular RGB videos.
At the core of our method is a novel optimization formulation that corrects imperfect image-based pose estimations.
Results show that our physically-corrected motions significantly outperform prior work on pose estimation.
arXiv Detail & Related papers (2021-09-21T01:57:54Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict approximately accurate 2D and 3D kinematic poses from video, but these poses still contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)