Learning Motion Priors for 4D Human Body Capture in 3D Scenes
- URL: http://arxiv.org/abs/2108.10399v1
- Date: Mon, 23 Aug 2021 20:47:09 GMT
- Title: Learning Motion Priors for 4D Human Body Capture in 3D Scenes
- Authors: Siwei Zhang, Yan Zhang, Federica Bogo, Marc Pollefeys, Siyu Tang
- Abstract summary: We propose LEMO: LEarning human MOtion priors for 4D human body capture.
We introduce a novel motion prior, which reduces the jitter exhibited by poses recovered over a sequence.
We also design a contact friction term and a contact-aware motion infiller obtained via per-instance self-supervised training.
With our pipeline, we demonstrate high-quality 4D human body capture, reconstructing smooth motions and physically plausible body-scene interactions.
- Score: 81.54377747405812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering high-quality 3D human motion in complex scenes from monocular
videos is important for many applications, ranging from AR/VR to robotics.
However, capturing realistic human-scene interactions, while dealing with
occlusions and partial views, is challenging; current approaches are still far
from achieving compelling results. We address this problem by proposing LEMO:
LEarning human MOtion priors for 4D human body capture. By leveraging the
large-scale motion capture dataset AMASS, we introduce a novel motion
smoothness prior, which strongly reduces the jitter exhibited by poses
recovered over a sequence. Furthermore, to handle contacts and occlusions
occurring frequently in body-scene interactions, we design a contact friction
term and a contact-aware motion infiller obtained via per-instance
self-supervised training. To prove the effectiveness of the proposed motion
priors, we combine them into a novel pipeline for 4D human body capture in 3D
scenes. With our pipeline, we demonstrate high-quality 4D human body capture,
reconstructing smooth motions and physically plausible body-scene interactions.
The code and data are available at https://sanweiliti.github.io/LEMO/LEMO.html.
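The abstract's central idea, a motion smoothness prior that damps frame-to-frame jitter, can be illustrated with a minimal hand-crafted analogue. Note the actual LEMO prior is learned from the AMASS dataset in a latent motion space; the sketch below (function name and weighting are illustrative, not from the paper) instead penalizes second-order finite differences (accelerations) of per-frame pose parameters, a standard smoothness term in sequence fitting.

```python
import numpy as np

def smoothness_energy(poses, weight=1.0):
    """Simplified temporal smoothness term over a pose sequence.

    Penalizes second-order finite differences (accelerations) of the
    per-frame pose parameters, which damps frame-to-frame jitter.
    This is a hand-crafted stand-in for a learned motion prior.

    poses : (T, D) array of pose parameters for T frames.
    Returns a scalar energy (lower = smoother).
    """
    # Second-order finite difference: x[t-1] - 2*x[t] + x[t+1]
    accel = poses[:-2] - 2.0 * poses[1:-1] + poses[2:]
    return weight * np.sum(accel ** 2)

# A constant-velocity trajectory has zero acceleration, hence zero energy.
linear = np.arange(10.0)[:, None] * np.ones((1, 3))
print(smoothness_energy(linear))  # -> 0.0

# Adding per-frame noise (jitter) raises the energy.
rng = np.random.default_rng(0)
jittery = linear + rng.normal(0.0, 0.1, linear.shape)
print(smoothness_energy(jittery) > 0.0)  # -> True
```

In a fitting pipeline, such a term would be added to the per-frame reconstruction objective and minimized jointly over the whole sequence, trading data fidelity against temporal smoothness via the weight.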
Related papers
- AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation [60.5897687447003]
AvatarGO is a novel framework designed to generate realistic 4D HOI scenes from textual inputs.
Our framework not only generates coherent compositional motions, but also handles challenging cases more robustly.
As the first attempt to synthesize 4D avatars with object interactions, we hope AvatarGO could open new doors for human-centric 4D content creation.
arXiv Detail & Related papers (2024-10-09T17:58:56Z)
- Synthesizing Diverse Human Motions in 3D Indoor Scenes [16.948649870341782]
We present a novel method for populating 3D indoor scenes with virtual humans that can navigate in the environment and interact with objects in a realistic manner.
Existing approaches rely on training sequences that contain captured human motions and the 3D scenes they interact with.
We propose a reinforcement learning-based approach that enables virtual humans to navigate in 3D scenes and interact with objects realistically and autonomously.
arXiv Detail & Related papers (2023-05-21T09:22:24Z)
- Decoupling Human and Camera Motion from Videos in the Wild [67.39432972193929]
We propose a method to reconstruct global human trajectories from videos in the wild.
Our method decouples the camera and human motion, which allows us to place people in the same world coordinate frame.
arXiv Detail & Related papers (2023-02-24T18:59:15Z)
- Contact-aware Human Motion Forecasting [87.04827994793823]
We tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion.
Our approach outperforms the state-of-the-art human motion forecasting and human synthesis methods on both synthetic and real datasets.
arXiv Detail & Related papers (2022-10-08T07:53:19Z)
- Estimating 3D Motion and Forces of Human-Object Interactions from Internet Videos [49.52070710518688]
We introduce a method to reconstruct the 3D motion of a person interacting with an object from a single RGB video.
Our method estimates the 3D poses of the person together with the object pose, the contact positions and the contact forces on the human body.
arXiv Detail & Related papers (2021-11-02T13:40:18Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce (HPS) Human POSEitioning System, a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of both, resulting in drift-free pose accuracy.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
- Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes [27.443701512923177]
We propose to bridge human motion synthesis and scene affordance reasoning.
We present a hierarchical generative framework to synthesize long-term 3D human motion conditioning on the 3D scene structure.
Our experiments show significant improvements over previous approaches on generating natural and physically plausible human motion in a scene.
arXiv Detail & Related papers (2020-12-10T09:09:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.