Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis
- URL: http://arxiv.org/abs/2205.13001v1
- Date: Wed, 25 May 2022 18:20:01 GMT
- Title: Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis
- Authors: Jingbo Wang, Yu Rong, Jingyuan Liu, Sijie Yan, Dahua Lin, Bo Dai
- Abstract summary: We focus on the problem of synthesizing diverse scene-aware human motions under the guidance of target action sequences.
Based on this factorized scheme, a hierarchical framework is proposed, with each sub-module responsible for modeling one aspect.
Experimental results show that the proposed framework markedly outperforms previous methods in terms of diversity and naturalness.
- Score: 117.15586710830489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to synthesize long-term human motion sequences in real-world
scenes can facilitate numerous applications. Previous approaches for
scene-aware motion synthesis are constrained by pre-defined target objects or
positions and thus limit the diversity of human-scene interactions for
synthesized motions. In this paper, we focus on the problem of synthesizing
diverse scene-aware human motions under the guidance of target action
sequences. To achieve this, we first decompose the diversity of scene-aware
human motions into three aspects, namely interaction diversity (e.g., sitting on
different objects with different poses in the given scenes), path diversity
(e.g., moving to target locations along different paths), and motion diversity
(e.g., exhibiting various body movements while moving). Based on this
factorized scheme, a hierarchical framework is proposed, with each sub-module
responsible for modeling one aspect. We assess the effectiveness of our
framework on two challenging datasets for scene-aware human motion synthesis.
The experimental results show that the proposed framework markedly outperforms
previous methods in terms of diversity and naturalness.
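To make the factorized scheme concrete, below is a minimal sketch of a three-stage hierarchical pipeline in the spirit of the abstract: one stochastic sub-module per diversity aspect (interaction, path, motion), composed in order. All module names, shapes, and sampling interfaces here are illustrative assumptions for exposition, not the authors' released architecture or API.

```python
# A minimal sketch of the factorized hierarchy described in the abstract.
# Module names, shapes, and the toy sampling logic are assumptions made
# for illustration; the paper's actual models differ in detail.
import numpy as np

rng = np.random.default_rng(0)

def sample_interaction(scene_objects, action):
    """Stage 1 (interaction diversity): pick a target object and a goal
    anchor compatible with the requested action, e.g. 'sit'."""
    obj = scene_objects[rng.integers(len(scene_objects))]
    goal = obj["position"] + rng.normal(scale=0.05, size=3)  # jittered anchor
    return obj, goal

def sample_path(start, goal, n_waypoints=8):
    """Stage 2 (path diversity): sample one of many plausible routes to
    the goal; here, a noisy interpolation stands in for a path model."""
    t = np.linspace(0.0, 1.0, n_waypoints)[:, None]
    straight = (1 - t) * start + t * goal
    return straight + rng.normal(scale=0.1, size=straight.shape)

def sample_motion(path):
    """Stage 3 (motion diversity): realize full-body frames along the
    path; a real model would decode poses, here joints are random."""
    return [{"root": p, "joints": rng.normal(size=(22, 3))} for p in path]

def synthesize(scene_objects, start, action):
    """Hierarchical composition: each sub-module models one aspect."""
    obj, goal = sample_interaction(scene_objects, action)
    return obj, sample_motion(sample_path(start, goal))

scene = [{"name": "sofa", "position": np.array([2.0, 0.0, 1.0])},
         {"name": "chair", "position": np.array([-1.0, 0.0, 3.0])}]
obj, frames = synthesize(scene, start=np.zeros(3), action="sit")
print(obj["name"], len(frames))
```

Re-running the pipeline with different random draws varies the chosen object, the route, and the body movements independently, which is the point of the factorization.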
Related papers
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising-diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion [70.33381660741861]
We present DiverseMotion, a new approach for synthesizing high-quality human motions conditioned on textual descriptions.
We show that our DiverseMotion achieves the state-of-the-art motion quality and competitive motion diversity.
arXiv Detail & Related papers (2023-09-04T05:43:48Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments [11.87902527509297]
We present LAMA, Locomotion-Action-MAnipulation, to synthesize natural and plausible long-term human movements in complex indoor environments.
Unlike existing methods that require motion data "paired" with scanned 3D scenes for supervision, we formulate the problem as test-time optimization, using only human motion capture data for synthesis.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- Scene-aware Generative Network for Human Motion Synthesis [125.21079898942347]
We propose a new framework that takes the interaction between the scene and human motion into account.
Considering the uncertainty of human motion, we formulate this as a generative task.
We derive a GAN-based learning approach, with discriminators enforcing compatibility between the human motion and the contextual scene.
arXiv Detail & Related papers (2021-05-31T09:05:50Z)