Physics-based Human Motion Estimation and Synthesis from Videos
- URL: http://arxiv.org/abs/2109.09913v1
- Date: Tue, 21 Sep 2021 01:57:54 GMT
- Title: Physics-based Human Motion Estimation and Synthesis from Videos
- Authors: Kevin Xie (1 and 2), Tingwu Wang (1 and 2), Umar Iqbal (2), Yunrong
Guo (2), Sanja Fidler (1 and 2), Florian Shkurti (1) ((1) University of
Toronto, (2) Nvidia)
- Abstract summary: We propose a framework for training generative models of physically plausible human motion directly from monocular RGB videos.
At the core of our method is a novel optimization formulation that corrects imperfect image-based pose estimations.
Results show that our physically-corrected motions significantly outperform prior work on pose estimation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion synthesis is an important problem with applications in graphics,
gaming and simulation environments for robotics. Existing methods require
accurate motion capture data for training, which is costly to obtain. Instead,
we propose a framework for training generative models of physically plausible
human motion directly from monocular RGB videos, which are much more widely
available. At the core of our method is a novel optimization formulation that
corrects imperfect image-based pose estimations by enforcing physics
constraints and reasons about contacts in a differentiable way. This
optimization yields corrected 3D poses and motions, as well as their
corresponding contact forces. Results show that our physically-corrected
motions significantly outperform prior work on pose estimation. We can then use
these to train a generative model to synthesize future motion. We demonstrate,
both qualitatively and quantitatively, that our method achieves significantly
improved motion estimation, synthesis quality, and physical plausibility on the
large-scale Human3.6M dataset compared to prior kinematic and physics-based
methods. By enabling learning of motion synthesis from video, our method paves
the way for large-scale, realistic and diverse motion synthesis.
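The correction step described in the abstract can be illustrated, in a heavily simplified 1-D form, as a least-squares trade-off between staying close to the kinematic estimate, keeping accelerations small, and pinning contact frames to the floor. This is only an illustrative sketch: the single-joint setup, the quadratic soft contact penalty, and the weight values are assumptions, not the paper's actual full-body formulation with differentiable contact forces.

```python
import numpy as np

def correct_motion(obs, contact, w_data=1.0, w_acc=10.0, w_contact=100.0):
    """Toy least-squares sketch of physics-style motion correction.

    obs     : (T,) noisy vertical positions of one joint from a pose estimator
    contact : (T,) boolean mask, True when the joint should rest on the floor

    Minimizes  w_data*||x - obs||^2 + w_acc*||D2 x||^2 + w_contact*||x[contact]||^2,
    i.e. stay near the kinematic estimate, penalize large accelerations, and
    softly pin contact frames to height zero. Returns the corrected trajectory.
    """
    T = len(obs)
    # Second-difference operator: (D2 x)[t] approximates the acceleration.
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    # Selector matrix for frames labeled as in contact with the ground.
    S = np.diag(contact.astype(float))
    # Normal equations of the quadratic objective; solve the linear system.
    A = w_data * np.eye(T) + w_acc * D2.T @ D2 + w_contact * S.T @ S
    b = w_data * obs
    return np.linalg.solve(A, b)
```

For example, a joint hovering at height 1.0 while labeled as in contact for all frames is pulled down toward the floor, while a smooth, contact-free ramp passes through unchanged since both the data and acceleration terms are already satisfied.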
Related papers
- ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model [9.525806425270428]
We present ReinDiffuse, which combines reinforcement learning with a motion diffusion model to generate physically credible human motions.
Our method adapts Motion Diffusion Model to output a parameterized distribution of actions, making them compatible with reinforcement learning paradigms.
Our approach outperforms existing state-of-the-art models on two major datasets, HumanML3D and KIT-ML.
arXiv Detail & Related papers (2024-10-09T16:24:11Z)
- Aligning Human Motion Generation with Human Perceptions [51.831338643012444]
We propose a data-driven approach to bridge the gap by introducing a large-scale human perceptual evaluation dataset, MotionPercept, and a human motion critic model, MotionCritic.
Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline.
arXiv Detail & Related papers (2024-07-02T14:01:59Z)
- DreamPhysics: Learning Physical Properties of Dynamic 3D Gaussians with Video Diffusion Priors [75.83647027123119]
We propose to learn the physical properties of a material field with video diffusion priors.
We then utilize a physics-based Material-Point-Method simulator to generate 4D content with realistic motions.
arXiv Detail & Related papers (2024-06-03T16:05:25Z)
- PACE: Human and Camera Motion Estimation from in-the-wild Videos [113.76041632912577]
We present a method to estimate human motion in a global scene from moving cameras.
This is a highly challenging task due to the coupling of human and camera motions in the video.
We propose a joint optimization framework that disentangles human and camera motions using both foreground human motion priors and background scene features.
arXiv Detail & Related papers (2023-10-20T19:04:14Z)
- Skeleton2Humanoid: Animating Simulated Characters for Physically-plausible Motion In-betweening [59.88594294676711]
Modern deep learning based motion synthesis approaches barely consider the physical plausibility of synthesized motions.
We propose a system, Skeleton2Humanoid, which performs physics-oriented motion correction at test time.
Experiments on the challenging LaFAN1 dataset show our system can outperform prior methods significantly in terms of both physical plausibility and accuracy.
arXiv Detail & Related papers (2022-10-09T16:15:34Z)
- Trajectory Optimization for Physics-Based Reconstruction of 3d Human Pose from Monocular Video [31.96672354594643]
We focus on the task of estimating a physically plausible articulated human motion from monocular video.
Existing approaches that do not consider physics often produce temporally inconsistent output with motion artifacts.
We show that our approach achieves competitive results with respect to existing physics-based methods on the Human3.6M benchmark.
arXiv Detail & Related papers (2022-05-24T18:02:49Z)
- Differentiable Dynamics for Articulated 3d Human Motion Reconstruction [29.683633237503116]
We introduce DiffPhy, a differentiable physics-based model for articulated 3d human motion reconstruction from video.
We validate the model by demonstrating that it can accurately reconstruct physically plausible 3d human motion from monocular video.
arXiv Detail & Related papers (2022-05-24T17:58:37Z)
- Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline only requires low-frame-rate videos and unpaired human motion data but does not require high-frame-rate videos for training.
arXiv Detail & Related papers (2021-11-01T15:32:51Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.