PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered
Environments
- URL: http://arxiv.org/abs/2303.14255v1
- Date: Fri, 24 Mar 2023 19:49:08 GMT
- Title: PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered
Environments
- Authors: James Mullen, Dinesh Manocha
- Abstract summary: We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes.
Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment.
We compare our method with prior motion-generation techniques and highlight the benefits of our method with a perceptual study and physical plausibility metrics.
- Score: 69.03289331433874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present PACE, a novel method for modifying motion-captured virtual agents
to interact with and move throughout dense, cluttered 3D scenes. Our approach
changes a given motion sequence of a virtual agent as needed to adjust to the
obstacles and objects in the environment. We first take the individual frames
of the motion sequence most important for modeling interactions with the scene
and pair them with the relevant scene geometry, obstacles, and semantics such
that interactions in the agent's motion match the affordances of the scene
(e.g., standing on a floor or sitting in a chair). We then optimize the motion
of the human by directly altering the high-DOF pose at each frame in the motion
to better account for the unique geometric constraints of the scene. Our
formulation uses novel loss functions that maintain a realistic flow and
natural-looking motion. We compare our method with prior motion-generation
techniques and highlight the benefits of our method with a perceptual study and
physical plausibility metrics. Human raters preferred our method over the prior
approaches. Specifically, they preferred our method 57.1% of the time versus
the state-of-the-art method using existing motions, and 81.0% of the time
versus a state-of-the-art motion synthesis method. Additionally, our method
performs significantly higher on established physical plausibility and
interaction metrics. Specifically, we outperform competing methods by over 1.2%
in terms of the non-collision metric and by over 18% in terms of the contact
metric. We have integrated our interactive system with Microsoft HoloLens and
demonstrate its benefits in real-world indoor scenes. Our project website is
available at https://gamma.umd.edu/pace/.
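The abstract describes a two-stage pipeline: key interaction frames are paired with the relevant scene geometry and semantics, and then the high-DOF pose at every frame is optimized under loss terms that enforce scene constraints while preserving a natural flow. The sketch below is a minimal illustration of that second stage, not the authors' implementation: the forward-kinematics stand-in, the loss terms, and all weights and thresholds are assumptions.
```python
# Minimal sketch (assumed, not the paper's code) of per-frame pose optimization
# with scene-aware losses, using PyTorch autograd.
import torch

def forward_kinematics(poses):
    # Stand-in for a real body model's FK (e.g., an SMPL-style layer); here the
    # pose vector is simply reshaped into pseudo joint positions so the sketch runs.
    T, D = poses.shape
    return poses.reshape(T, D // 3, 3)

def optimize_motion(poses, scene_points, contact_targets, steps=200, lr=1e-2):
    """poses: (T, D) initial pose parameters for T frames.
    scene_points: (N, 3) point cloud standing in for scene geometry.
    contact_targets: (T, 3) scene locations paired with a contact joint."""
    poses = poses.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([poses], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        joints = forward_kinematics(poses)                        # (T, J, 3)

        # Penetration term: penalize joints closer than an assumed 5 cm margin
        # to the scene points.
        dists = torch.cdist(joints.reshape(-1, 3), scene_points)  # (T*J, N)
        penetration = torch.relu(0.05 - dists.min(dim=1).values).mean()

        # Contact term: pull a designated joint toward its paired scene target.
        contact = (joints[:, 0] - contact_targets).norm(dim=-1).mean()

        # Smoothness term: keep the edited poses changing gradually over time,
        # a rough proxy for the "realistic flow" losses mentioned in the abstract.
        smoothness = (poses[1:] - poses[:-1]).pow(2).mean()

        loss = penetration + contact + 0.1 * smoothness
        loss.backward()
        opt.step()
    return poses.detach()
```
In practice one would substitute a real body model, distance queries against the actual scene mesh, and the paper's own loss formulation; the sketch only conveys the overall structure of direct pose-level optimization.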
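The non-collision and contact metrics cited in the results are commonly computed from a signed distance field (SDF) of the scene evaluated at the body surface; the short sketch below shows one plausible way to do so. The thresholds and exact definitions here are assumptions, not the paper's.
```python
# Hedged sketch of scene-interaction plausibility metrics from a scene SDF.
import numpy as np

def plausibility_metrics(body_vertices, scene_sdf, contact_eps=0.02):
    """body_vertices: (T, V, 3) body surface points over T frames.
    scene_sdf: callable mapping (M, 3) points to signed distances
               (negative inside scene geometry)."""
    T, V, _ = body_vertices.shape
    sdf_vals = scene_sdf(body_vertices.reshape(-1, 3)).reshape(T, V)

    # Non-collision: fraction of body points that do not penetrate the scene.
    non_collision = float((sdf_vals >= 0).mean())

    # Contact: fraction of frames in which at least one body point touches the
    # scene (within an assumed 2 cm threshold).
    contact = float((np.abs(sdf_vals) < contact_eps).any(axis=1).mean())

    return non_collision, contact
```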
Related papers
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- PACE: Human and Camera Motion Estimation from in-the-wild Videos [113.76041632912577]
We present a method to estimate human motion in a global scene from moving cameras.
This is a highly challenging task due to the coupling of human and camera motions in the video.
We propose a joint optimization framework that disentangles human and camera motions using both foreground human motion priors and background scene features.
arXiv Detail & Related papers (2023-10-20T19:04:14Z)
- Synthesizing Diverse Human Motions in 3D Indoor Scenes [16.948649870341782]
We present a novel method for populating 3D indoor scenes with virtual humans that can navigate in the environment and interact with objects in a realistic manner.
Existing approaches rely on training sequences that contain captured human motions and the 3D scenes they interact with.
We propose a reinforcement learning-based approach that enables virtual humans to navigate in 3D scenes and interact with objects realistically and autonomously.
arXiv Detail & Related papers (2023-05-21T09:22:24Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Synthesizing Physical Character-Scene Interactions [64.26035523518846]
For realistic character animation, it is necessary to synthesize interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z)
- Physics-based Human Motion Estimation and Synthesis from Videos [0.0]
We propose a framework for training generative models of physically plausible human motion directly from monocular RGB videos.
At the core of our method is a novel optimization formulation that corrects imperfect image-based pose estimations.
Results show that our physically-corrected motions significantly outperform prior work on pose estimation.
arXiv Detail & Related papers (2021-09-21T01:57:54Z)
- Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion and optimizes the retargeted motion for the output skeleton.
In experiments, our results quantitatively outperform previous methods, and in a user study our retargeted motions are rated as higher quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z)