The Wanderings of Odysseus in 3D Scenes
- URL: http://arxiv.org/abs/2112.09251v1
- Date: Thu, 16 Dec 2021 23:24:50 GMT
- Title: The Wanderings of Odysseus in 3D Scenes
- Authors: Yan Zhang and Siyu Tang
- Abstract summary: We propose generative motion primitives via body surface markers, shortened as GAMMA.
We exploit body surface markers and a conditional variational autoencoder to model each motion primitive.
Experiments show that our method can produce more realistic and controllable motion than state-of-the-art data-driven methods.
- Score: 22.230079422580065
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Our goal is to populate digital environments, in which the digital humans
have diverse body shapes, move perpetually, and have plausible body-scene
contact. The core challenge is to generate realistic, controllable, and
infinitely long motions for diverse 3D bodies. To this end, we propose
generative motion primitives via body surface markers, shortened as GAMMA. In
our solution, we decompose the long-term motion into a time sequence of motion
primitives. We exploit body surface markers and a conditional variational
autoencoder to model each motion primitive, and generate long-term motion by
implementing the generative model recursively. To control the motion to reach a
goal, we apply a policy network to explore the model latent space, and use a
tree-based search to preserve the motion quality during testing. Experiments
show that our method can produce more realistic and controllable motion than
state-of-the-art data-driven methods. With conventional path-finding algorithms,
the generated human bodies can realistically move long distances for a long
period of time in the scene. Code will be released for research purposes at:
https://yz-cnsdqz.github.io/eigenmotion/GAMMA/
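To make the recursive scheme above concrete, here is a minimal sketch of a motion-primitive decoder rolled out primitive by primitive. Every class name, shape, and hyperparameter below is an illustrative assumption, not the released GAMMA code:

    import torch
    import torch.nn as nn

    class PrimitiveDecoder(nn.Module):
        """Hypothetical decoder of a CVAE over short motion primitives.

        A primitive is a short clip of body-surface markers, flattened to
        (batch, frames, n_markers * 3). The training-time encoder is omitted.
        """
        def __init__(self, n_markers=67, frames=10, latent_dim=32, hidden=256):
            super().__init__()
            feat = n_markers * 3
            self.frames = frames
            self.rnn = nn.GRU(feat + latent_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, feat)

        def forward(self, z, seed):
            # Condition on the last frame of the previous primitive and on z.
            B, _, F = seed.shape
            last = seed[:, -1:, :].expand(B, self.frames, F)
            zs = z.unsqueeze(1).expand(B, self.frames, z.shape[-1])
            h, _ = self.rnn(torch.cat([last, zs], dim=-1))
            return self.out(h)                              # (B, frames, feat)

    def rollout(decoder, seed, n_primitives=20, latent_dim=32):
        """Long-term motion as a recursion over sampled primitives."""
        clips = [seed]
        for _ in range(n_primitives):
            z = torch.randn(seed.shape[0], latent_dim)      # sample the prior
            clips.append(decoder(z, clips[-1]))             # next primitive
        return torch.cat(clips, dim=1)

In the full pipeline described in the abstract, a policy network would choose z to steer the body toward a goal instead of sampling the prior blindly, and a tree-based search would prune low-quality rollouts at test time.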
Related papers
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
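One plausible reading of this two-stage recipe, with every name and shape below a hypothetical stand-in rather than the paper's code, is to distill a frozen imitator into a latent skill space:

    import torch
    import torch.nn as nn

    obs_dim, act_dim, latent_dim = 358, 69, 32   # assumed dimensions

    # Encoder maps (state, next state) to a skill code z; decoder maps
    # (state, z) back to an action. z becomes the reusable representation.
    encoder = nn.Sequential(nn.Linear(obs_dim * 2, 256), nn.ReLU(),
                            nn.Linear(256, latent_dim))
    decoder = nn.Sequential(nn.Linear(obs_dim + latent_dim, 256), nn.ReLU(),
                            nn.Linear(256, act_dim))
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(decoder.parameters()), lr=1e-4)

    def distill_step(imitator, state, next_state):
        """One distillation step: match the frozen imitator's action."""
        with torch.no_grad():
            target = imitator(state)                 # teacher action
        z = encoder(torch.cat([state, next_state], dim=-1))
        loss = (decoder(torch.cat([state, z], dim=-1)) - target).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()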
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models [71.64318025625833]
This paper presents a novel approach to generating the 3D motion of a human interacting with a target object.
Our framework first generates a set of milestones and then synthesizes the motion along them.
The experiments on the NSM, COUCH, and SAMP datasets show that our approach outperforms previous methods by a large margin in both quality and diversity.
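A toy sketch of the milestone-then-infill decomposition. The paper uses diffusion probabilistic models for both stages; they are replaced here by linear interpolation purely to keep the example runnable:

    import torch

    def generate_milestones(start, goal, n=4):
        """Stage 1 (hypothetical): coarse waypoints from start to goal pose.
        A learned milestone sampler would go here."""
        ts = torch.linspace(0.0, 1.0, n).view(-1, 1)
        return (1 - ts) * start + ts * goal            # (n, pose_dim)

    def synthesize_segment(a, b, frames=30):
        """Stage 2 (hypothetical): dense motion between two milestones.
        A diffusion sampler would go here."""
        ts = torch.linspace(0.0, 1.0, frames).view(-1, 1)
        return (1 - ts) * a + ts * b                   # (frames, pose_dim)

    def hierarchical_motion(start, goal):
        ms = generate_milestones(start, goal)
        segs = [synthesize_segment(ms[i], ms[i + 1]) for i in range(len(ms) - 1)]
        return torch.cat(segs, dim=0)                  # full trajectory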
arXiv Detail & Related papers (2023-10-03T17:50:23Z) - Synthesizing Diverse Human Motions in 3D Indoor Scenes [16.948649870341782]
We present a novel method for populating 3D indoor scenes with virtual humans that can navigate in the environment and interact with objects in a realistic manner.
Existing approaches rely on training sequences that contain captured human motions and the 3D scenes they interact with.
We propose a reinforcement learning-based approach that enables virtual humans to navigate in 3D scenes and interact with objects realistically and autonomously.
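As a hedged illustration of what such an RL setup might optimize, the reward below combines goal progress with a scene-penetration penalty; the terms, weights, and the scene_sdf signature are assumptions, not the paper's formulation:

    import torch

    def interaction_reward(marker_pos, goal_pos, scene_sdf,
                           w_goal=1.0, w_pene=0.1):
        """Hypothetical per-step reward for a navigation/interaction policy.
        marker_pos: (M, 3) body markers; scene_sdf(points) returns a signed
        distance per point, negative inside scene geometry."""
        dist = (marker_pos.mean(dim=0) - goal_pos).norm()
        penetration = torch.clamp(-scene_sdf(marker_pos), min=0.0).sum()
        return -w_goal * dist - w_pene * penetration

A standard policy-gradient algorithm such as PPO would then maximize this signal over episodes rolled out in the 3D scene.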
arXiv Detail & Related papers (2023-05-21T09:22:24Z) - Generating Continual Human Motion in Diverse 3D Scenes [56.70255926954609]
We introduce a method to synthesize animator-guided human motion across 3D scenes.
We decompose the continual motion synthesis problem into walking along paths and transitioning in and out of the actions specified by the keypoints.
Our model can generate long sequences of diverse actions, such as grabbing, sitting, and leaning, chained together.
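A minimal scheduler expressing this decomposition; walk and transition below stand in for learned motion models and are purely illustrative:

    def continual_motion(path_waypoints, keypoint_actions, walk, transition):
        """Alternate locomotion along a path with transitions into and out
        of keypoint-specified actions (sit, grab, lean, ...)."""
        clips = []
        pose = path_waypoints[0]
        for waypoint, action in zip(path_waypoints[1:], keypoint_actions):
            clips.append(walk(pose, waypoint))          # walk along the path
            clips.append(transition(waypoint, action))  # e.g. walk -> sit -> walk
            pose = waypoint
        return clips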
arXiv Detail & Related papers (2023-04-04T18:24:22Z) - Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z) - We are More than Our Joints: Predicting how 3D Bodies Move [63.34072043909123]
We train a novel variational autoencoder that generates motions from latent frequencies.
Experiments show that our method produces state-of-the-art results and realistic 3D body animations.
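Generating motion "from latent frequencies" suggests a frequency-domain decoder. The sketch below pairs an orthonormal DCT basis with a hypothetical per-frequency latent; this is an assumption about the architecture, not the paper's exact design:

    import math
    import torch

    def dct_basis(T, K):
        """Orthonormal DCT-II basis: T frames, K kept frequencies."""
        t = torch.arange(T).float()
        k = torch.arange(K).float().view(-1, 1)
        basis = torch.cos(math.pi * (t + 0.5) * k / T)   # (K, T)
        basis[0] /= math.sqrt(2.0)
        return basis * math.sqrt(2.0 / T)

    T, K, feat = 60, 8, 201      # frames, frequencies, marker dims (assumed)
    BASIS = dct_basis(T, K)

    def decode_from_frequencies(z):
        """z holds per-frequency coefficients, (K, feat); the motion is
        its inverse DCT, (T, feat)."""
        return BASIS.t() @ z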
arXiv Detail & Related papers (2020-12-01T16:41:04Z) - Deep Generative Modelling of Human Reach-and-Place Action [15.38392014421915]
We suggest a deep generative model for human reach-and-place action conditioned on a start and end position.
We captured a dataset of 600 such human 3D actions to sample the 2x3-D space of 3D sources and targets.
Our evaluation includes several ablations, analysis of generative diversity and applications.
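A minimal sketch of such a start/end-conditioned generator; the dimensions and the 40-frame horizon are illustrative assumptions:

    import torch
    import torch.nn as nn

    pose_dim, latent_dim, horizon = 63, 16, 40   # assumed sizes

    # Hypothetical CVAE decoder: (start xyz, target xyz, z) -> pose trajectory.
    decoder = nn.Sequential(
        nn.Linear(3 + 3 + latent_dim, 256), nn.ReLU(),
        nn.Linear(256, horizon * pose_dim),
    )

    def sample_reach_and_place(start_xyz, target_xyz, n_samples=5):
        cond = torch.cat([start_xyz, target_xyz]).expand(n_samples, 6)
        z = torch.randn(n_samples, latent_dim)       # diversity via the prior
        traj = decoder(torch.cat([cond, z], dim=-1))
        return traj.view(n_samples, horizon, pose_dim)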
arXiv Detail & Related papers (2020-10-05T21:36:20Z) - Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
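One common physics-based refinement is trajectory optimization against contact and smoothness terms; the objective below is a generic sketch under that assumption, not the paper's exact formulation:

    import torch

    def refine(kinematic_traj, contacts, steps=200, lr=1e-2):
        """Adjust an initial kinematic estimate so that joints labelled as
        'in contact' stay still and accelerations stay small.
        kinematic_traj: (T, J, 3); contacts: (T, J), 1.0 where touching."""
        traj = kinematic_traj.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([traj], lr=lr)
        for _ in range(steps):
            data = (traj - kinematic_traj).pow(2).mean()
            acc = (traj[2:] - 2 * traj[1:-1] + traj[:-2]).pow(2).mean()
            vel = traj[1:] - traj[:-1]
            skate = (vel * contacts[1:].unsqueeze(-1)).pow(2).mean()
            loss = data + 0.1 * acc + 10.0 * skate
            opt.zero_grad(); loss.backward(); opt.step()
        return traj.detach()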
arXiv Detail & Related papers (2020-07-22T21:09:11Z) - Generative Tweening: Long-term Inbetweening of 3D Human Motions [40.16462039509098]
We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions.
Trained with 79 classes of captured motion data, our network performs robustly on a variety of highly complex motion styles.
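A bare-bones sketch of keyframe-conditioned inbetweening with a GAN; the biomechanical constraints and style conditioning of the paper are omitted, and all shapes are assumptions:

    import torch
    import torch.nn as nn

    pose_dim, frames, noise_dim = 69, 120, 64    # assumed sizes

    # Generator: (start keyframe, end keyframe, noise) -> inbetween motion.
    G = nn.Sequential(nn.Linear(pose_dim * 2 + noise_dim, 512), nn.ReLU(),
                      nn.Linear(512, frames * pose_dim))
    # A discriminator over whole clips would drive adversarial training.
    D = nn.Sequential(nn.Linear(frames * pose_dim, 512), nn.ReLU(),
                      nn.Linear(512, 1))

    def inbetween(start, end):
        z = torch.randn(1, noise_dim)            # varying z varies the motion
        x = torch.cat([start.view(1, -1), end.view(1, -1), z], dim=-1)
        return G(x).view(frames, pose_dim)       # motion bridging keyframes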
arXiv Detail & Related papers (2020-05-18T17:04:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.