Stochastic Scene-Aware Motion Prediction
- URL: http://arxiv.org/abs/2108.08284v1
- Date: Wed, 18 Aug 2021 17:56:17 GMT
- Title: Stochastic Scene-Aware Motion Prediction
- Authors: Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang,
Yi Zhou, Michael Black
- Abstract summary: We present a novel data-driven, stochastic motion synthesis method that models different styles of performing a given action with a target object.
Our method, called SAMP, for Scene-Aware Motion Prediction, generalizes to target objects of various geometries while enabling the character to navigate in cluttered scenes.
- Score: 41.6104600038666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A long-standing goal in computer vision is to capture, model, and
realistically synthesize human behavior. Specifically, by learning from data,
our goal is to enable virtual humans to navigate within cluttered indoor scenes
and naturally interact with objects. Such embodied behavior has applications in
virtual reality, computer games, and robotics, while synthesized behavior can
be used as a source of training data. This is challenging because real human
motion is diverse and adapts to the scene. For example, a person can sit or lie
on a sofa in many places and with varying styles. It is necessary to model this
diversity when synthesizing virtual humans that realistically perform
human-scene interactions. We present a novel data-driven, stochastic motion
synthesis method that models different styles of performing a given action with
a target object. Our method, called SAMP, for Scene-Aware Motion Prediction,
generalizes to target objects of various geometries while enabling the
character to navigate in cluttered scenes. To train our method, we collected
MoCap data covering various sitting, lying down, walking, and running styles.
We demonstrate our method on complex indoor scenes and achieve superior
performance compared to existing solutions. Our code and data are available for
research at https://samp.is.tue.mpg.de.
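The abstract describes a data-driven, stochastic synthesis model but gives no architectural detail; a common way to realize this kind of condition-driven stochastic generation is a conditional variational autoencoder (cVAE). The PyTorch sketch below is only illustrative: the module names, feature dimensions, and the flattened object-geometry code are assumptions, and the released SAMP code at https://samp.is.tue.mpg.de is the authoritative reference.

```python
# Minimal, illustrative PyTorch sketch of a conditional VAE for stochastic
# motion synthesis. All dimensions, names, and the object-geometry encoding
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class MotionCVAE(nn.Module):
    def __init__(self, pose_dim=264, obj_dim=1024, latent_dim=64, hidden=512):
        super().__init__()
        self.latent_dim = latent_dim
        cond_dim = pose_dim + obj_dim                 # previous pose + object geometry code
        self.encoder = nn.Sequential(                 # q(z | next_pose, condition)
            nn.Linear(pose_dim + cond_dim, hidden), nn.ELU(),
            nn.Linear(hidden, 2 * latent_dim),
        )
        self.decoder = nn.Sequential(                 # p(next_pose | z, condition)
            nn.Linear(latent_dim + cond_dim, hidden), nn.ELU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, next_pose, prev_pose, obj_code):
        cond = torch.cat([prev_pose, obj_code], dim=-1)
        mu, logvar = self.encoder(torch.cat([next_pose, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.decoder(torch.cat([z, cond], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample(self, prev_pose, obj_code):
        # Different latent samples give different styles of the same action.
        z = torch.randn(prev_pose.shape[0], self.latent_dim, device=prev_pose.device)
        return self.decoder(torch.cat([z, prev_pose, obj_code], dim=-1))
```

Drawing several latent codes for the same previous pose and target object yields the varied sitting or lying styles the abstract emphasizes; training would minimize a pose reconstruction loss plus a KL term on (mu, logvar).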
Related papers
- Massively Multi-Person 3D Human Motion Forecasting with Scene Context [13.197408989895102]
We propose a scene-aware social transformer model (SAST) to forecast long-term (10 s) human motion.
We combine a temporal convolutional encoder-decoder architecture with a Transformer-based bottleneck that allows us to efficiently fuse motion and scene information; a minimal sketch of this pattern follows the entry.
Our model outperforms other approaches in terms of realism and diversity on different metrics and in a user study.
arXiv Detail & Related papers (2024-09-18T17:58:51Z)
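The SAST entry above combines a temporal convolutional encoder-decoder with a Transformer-based bottleneck. The sketch below, under assumed shapes and layer choices, shows one way such a hybrid can fuse motion and scene tokens; it is not the authors' implementation.

```python
# Illustrative sketch of a temporal convolutional encoder-decoder with a
# Transformer bottleneck that mixes motion and scene tokens. Shapes and layer
# choices are assumptions for exposition only.
import torch
import torch.nn as nn


class ConvTransformerForecaster(nn.Module):
    def __init__(self, pose_dim=66, scene_dim=128, d_model=256, horizon=100):
        super().__init__()
        self.horizon = horizon
        self.encode = nn.Sequential(                  # temporal conv encoder over past poses
            nn.Conv1d(pose_dim, d_model, 5, padding=2), nn.GELU(),
            nn.Conv1d(d_model, d_model, 5, stride=2, padding=2), nn.GELU(),
        )
        self.scene_proj = nn.Linear(scene_dim, d_model)
        self.bottleneck = nn.TransformerEncoder(      # lets motion tokens attend to scene tokens
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.decode = nn.Sequential(                  # temporal conv decoder to future poses
            nn.ConvTranspose1d(d_model, d_model, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv1d(d_model, pose_dim, 5, padding=2),
        )

    def forward(self, past_poses, scene_feats):
        # past_poses: (B, T, pose_dim); scene_feats: (B, S, scene_dim)
        motion = self.encode(past_poses.transpose(1, 2)).transpose(1, 2)   # (B, T', d_model)
        tokens = torch.cat([motion, self.scene_proj(scene_feats)], dim=1)
        fused = self.bottleneck(tokens)[:, : motion.shape[1]]              # keep motion tokens
        out = self.decode(fused.transpose(1, 2)).transpose(1, 2)           # (B, ~2*T', pose_dim)
        return out[:, : self.horizon]
```

The convolutions handle local temporal structure cheaply, while the Transformer bottleneck lets every motion token attend to every scene token.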
- SynPlay: Importing Real-world Diversity for a Synthetic Human Dataset [19.32308498024933]
We introduce Synthetic Playground (SynPlay), a new synthetic human dataset that aims to bring out the diversity of human appearance in the real world.
We focus on two factors to achieve a level of diversity that has not yet been seen in previous works: realistic human motions and poses.
We show that using SynPlay in model training leads to enhanced accuracy over existing synthetic datasets for human detection and segmentation.
arXiv Detail & Related papers (2024-08-21T17:58:49Z)
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models; a toy sampling loop in this spirit is sketched after this entry.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
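Since the TeSMo entry above builds on denoising diffusion models, here is a toy, self-contained ancestral-sampling loop for a text-conditioned motion diffusion model. The denoiser signature, linear noise schedule, and dimensions are assumptions chosen for illustration, not details taken from the paper.

```python
# Toy DDPM-style sampling loop for a text-conditioned motion diffusion model.
# The denoiser and text embedding are placeholders; only the general mechanism
# behind text-to-motion diffusion is shown here.
import torch


@torch.no_grad()
def sample_motion(denoiser, text_emb, num_frames=120, pose_dim=66, steps=1000):
    betas = torch.linspace(1e-4, 0.02, steps)            # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, num_frames, pose_dim)              # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([t]), text_emb)    # predict noise given the text condition
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                              # (1, num_frames, pose_dim) motion
```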
- QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors [69.75711933065378]
We show that headset and controller poses alone can drive realistic full-body poses even in highly constrained environments.
We discuss three features crucial to the performance of the method: the environment representation, the contact reward, and scene randomization.
arXiv Detail & Related papers (2023-06-09T04:40:38Z)
- Synthesizing Diverse Human Motions in 3D Indoor Scenes [16.948649870341782]
We present a novel method for populating 3D indoor scenes with virtual humans that can navigate in the environment and interact with objects in a realistic manner.
Existing approaches rely on training sequences that contain captured human motions and the 3D scenes they interact with.
We propose a reinforcement learning-based approach that enables virtual humans to navigate in 3D scenes and interact with objects realistically and autonomously.
arXiv Detail & Related papers (2023-05-21T09:22:24Z)
- CIRCLE: Capture In Rich Contextual Environments [69.97976304918149]
We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world.
We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes.
We use this dataset to train a model that generates human motion conditioned on scene information.
arXiv Detail & Related papers (2023-03-31T09:18:12Z)
- Synthesizing Physical Character-Scene Interactions [64.26035523518846]
To animate lifelike virtual characters, it is necessary to synthesize interactions between the characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically simulated characters; a sketch of the discriminator-based style reward is given after this entry.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z)
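The character-scene interaction entry above trains physically simulated characters with adversarial imitation learning plus reinforcement learning. The fragment below sketches the usual recipe of adding a discriminator-based style reward to a task reward; the discriminator architecture, transition features, and equal weighting are assumptions rather than the paper's exact formulation.

```python
# Sketch of an adversarial-imitation "style" reward combined with a task reward,
# in the spirit of adversarial motion priors. The feature size, network, and
# 0.5/0.5 weighting are assumptions.
import torch
import torch.nn as nn

STATE_DIM = 128                                   # assumed size of the pose/velocity features

discriminator = nn.Sequential(                    # scores (state, next_state) transitions
    nn.Linear(2 * STATE_DIM, 512), nn.ReLU(),
    nn.Linear(512, 1),
)


def style_reward(state, next_state):
    # Higher when the transition looks like reference MoCap to the discriminator.
    logits = discriminator(torch.cat([state, next_state], dim=-1))
    prob = torch.sigmoid(logits).clamp(max=1.0 - 1e-4)
    return -torch.log(1.0 - prob)


def total_reward(state, next_state, task_reward):
    # RL objective: imitate natural motion (style) while completing the task.
    return 0.5 * task_reward + 0.5 * style_reward(state, next_state).squeeze(-1)
```

The discriminator is trained separately to distinguish reference MoCap transitions from policy transitions, and the policy then maximizes the combined reward with a standard RL algorithm.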
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to user-study participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)