Synthesizing Physically Plausible Human Motions in 3D Scenes
- URL: http://arxiv.org/abs/2308.09036v1
- Date: Thu, 17 Aug 2023 15:17:49 GMT
- Title: Synthesizing Physically Plausible Human Motions in 3D Scenes
- Authors: Liang Pan, Jingbo Wang, Buzhen Huang, Junyu Zhang, Haofan Wang, Xu
Tang, Yangang Wang
- Abstract summary: We present a framework that enables physically simulated characters to perform long-term interaction tasks in diverse, cluttered, and unseen scenes.
Specifically, InterCon contains two complementary policies that enable characters to enter and leave the interacting state.
To generate interaction with objects at different places, we further design NavCon, a trajectory following policy, to keep characters' motions in the free space of 3D scenes.
- Score: 41.1310197485928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing physically plausible human motions in 3D scenes is a challenging
problem. Kinematics-based methods cannot avoid inherent artifacts (e.g.,
penetration and foot skating) due to the lack of physical constraints.
Meanwhile, existing physics-based methods cannot generalize to multi-object
scenarios since the policy trained with reinforcement learning has limited
modeling capacity. In this work, we present a framework that enables physically
simulated characters to perform long-term interaction tasks in diverse,
cluttered, and unseen scenes. The key idea is to decompose human-scene
interactions into two fundamental processes, Interacting and Navigating, which
motivates us to construct two reusable controllers, i.e., InterCon and NavCon.
Specifically, InterCon contains two complementary policies that enable
characters to enter and leave the interacting state (e.g., sitting on a chair
and getting up). To generate interaction with objects at different places, we
further design NavCon, a trajectory following policy, to keep characters'
locomotion in the free space of 3D scenes. Benefiting from the divide and
conquer strategy, we can train the policies in simple environments and
generalize to complex multi-object scenes. Experimental results demonstrate
that our framework can synthesize physically plausible long-term human motions
in complex 3D scenes. Code will be publicly released at
https://github.com/liangpan99/InterScene.
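The divide-and-conquer design described in the abstract, a trajectory-following policy (NavCon) chained with enter and leave interaction policies (InterCon), can be pictured as a small finite-state controller that switches policies as the character approaches, uses, and leaves an object. The Python sketch below is only an illustrative reconstruction of that idea; the class names, observation keys, planner interface, and thresholds are hypothetical and are not taken from the InterScene repository.

```python
# Illustrative composition of NavCon and InterCon as a finite-state controller.
# All class names, observation keys, and thresholds are hypothetical stand-ins;
# the released InterScene code (https://github.com/liangpan99/InterScene) may differ.
from enum import Enum, auto


class Phase(Enum):
    NAVIGATE = auto()   # follow a planned trajectory through free space (NavCon)
    SIT_DOWN = auto()   # enter the interacting state, e.g. sit on a chair (InterCon)
    GET_UP = auto()     # leave the interacting state and hand control back to NavCon


class SceneInteractionController:
    """Switches between reusable policies to chain long-term scene interactions."""

    def __init__(self, nav_policy, sit_policy, getup_policy, planner):
        self.nav_policy = nav_policy      # NavCon: trajectory-following policy
        self.sit_policy = sit_policy      # InterCon: "enter interaction" policy
        self.getup_policy = getup_policy  # InterCon: "leave interaction" policy
        self.planner = planner            # plans collision-free paths between objects
        self.phase = Phase.NAVIGATE

    def act(self, obs, target_object, near_dist=0.5, seated_steps=60):
        """Pick the active policy for the current phase and return its action."""
        # Phase transitions based on simple, hypothetical observation signals.
        if self.phase is Phase.NAVIGATE and obs["dist_to_object"] < near_dist:
            self.phase = Phase.SIT_DOWN
        elif self.phase is Phase.SIT_DOWN and obs["steps_seated"] > seated_steps:
            self.phase = Phase.GET_UP
        elif self.phase is Phase.GET_UP and obs["is_standing"]:
            self.phase = Phase.NAVIGATE

        # Dispatch to the policy that owns the current phase.
        if self.phase is Phase.NAVIGATE:
            waypoints = self.planner.plan(obs["root_pos"], target_object)
            return self.nav_policy(obs, waypoints)
        if self.phase is Phase.SIT_DOWN:
            return self.sit_policy(obs, target_object)
        return self.getup_policy(obs)
```

Under this reading, each policy can be trained separately in a simple single-object environment and only composed at test time in cluttered multi-object scenes, which is the generalization argument the abstract makes.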
Related papers
- MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling [21.1274747033854]
Character video synthesis aims to produce realistic videos of animatable characters within lifelike scenes.
MIMO is a novel framework which can synthesize character videos with controllable attributes.
MIMO achieves advanced scalability to arbitrary characters, generality to novel 3D motions, and applicability to interactive real-world scenes.
arXiv Detail & Related papers (2024-09-24T15:00:07Z)
- Physics-based Scene Layout Generation from Human Motion [21.939444709132395]
We present a physics-based approach that simultaneously optimizes a scene layout generator and simulates a moving human in a physics simulator.
We use reinforcement learning to perform a dual-optimization of both the character motion imitation controller and the scene layout generator.
We evaluate our method using motions from SAMP and PROX, and demonstrate physically plausible scene layout reconstruction compared with the previous kinematics-based method.
arXiv Detail & Related papers (2024-05-21T02:36:37Z)
- Style-Consistent 3D Indoor Scene Synthesis with Decoupled Objects [84.45345829270626]
Controllable 3D indoor scene synthesis stands at the forefront of technological progress.
Current methods for scene stylization are limited to applying styles to the entire scene.
We introduce a unique pipeline designed for synthesizing 3D indoor scenes.
arXiv Detail & Related papers (2024-01-24T03:10:36Z)
- Revisit Human-Scene Interaction via Space Occupancy [55.67657438543008]
Human-scene Interaction (HSI) generation is a challenging task and crucial for various downstream tasks.
In this work, we argue that interaction with a scene is essentially interacting with the space occupancy of the scene from an abstract physical perspective.
By treating pure motion sequences as records of humans interacting with invisible scene occupancy, we can aggregate motion-only data into a large-scale paired human-occupancy interaction database.
arXiv Detail & Related papers (2023-12-05T12:03:00Z)
- QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors [69.75711933065378]
We show that headset and controller poses alone can be used to generate realistic full-body poses even in highly constrained environments.
We discuss three features, the environment representation, the contact reward and scene randomization, crucial to the performance of the method.
arXiv Detail & Related papers (2023-06-09T04:40:38Z)
- Synthesizing Diverse Human Motions in 3D Indoor Scenes [16.948649870341782]
We present a novel method for populating 3D indoor scenes with virtual humans that can navigate in the environment and interact with objects in a realistic manner.
Existing approaches rely on training sequences that contain captured human motions and the 3D scenes they interact with.
We propose a reinforcement learning-based approach that enables virtual humans to navigate in 3D scenes and interact with objects realistically and autonomously.
arXiv Detail & Related papers (2023-05-21T09:22:24Z)
- Synthesizing Physical Character-Scene Interactions [64.26035523518846]
For realistic character animation, it is necessary to synthesize interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)