Synthesizing Physical Character-Scene Interactions
- URL: http://arxiv.org/abs/2302.00883v1
- Date: Thu, 2 Feb 2023 05:21:32 GMT
- Title: Synthesizing Physical Character-Scene Interactions
- Authors: Mohamed Hassan, Yunrong Guo, Tingwu Wang, Michael Black, Sanja Fidler,
Xue Bin Peng
- Abstract summary: For realistic character animation, it is necessary to synthesize interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
- Score: 64.26035523518846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Movement is how people interact with and affect their environment. For
realistic character animation, it is necessary to synthesize such interactions
between virtual characters and their surroundings. Despite recent progress in
character animation using machine learning, most systems focus on controlling
an agent's movements in fairly simple and homogeneous environments, with
limited interactions with other objects. Furthermore, many previous approaches
that synthesize human-scene interactions require significant manual labeling of
the training data. In contrast, we present a system that uses adversarial
imitation learning and reinforcement learning to train physically-simulated
characters that perform scene interaction tasks in a natural and life-like
manner. Our method learns scene interaction behaviors from large unstructured
motion datasets, without manual annotation of the motion data. These scene
interactions are learned using an adversarial discriminator that evaluates the
realism of a motion within the context of a scene. The key novelty involves
conditioning both the discriminator and the policy networks on scene context.
We demonstrate the effectiveness of our approach through three challenging
scene interaction tasks: carrying, sitting, and lying down, which require
coordination of a character's movements in relation to objects in the
environment. Our policies learn to seamlessly transition between different
behaviors like idling, walking, and sitting. By randomizing the properties of
the objects and their placements during training, our method is able to
generalize beyond the objects and scenarios depicted in the training dataset,
producing natural character-scene interactions for a wide variety of object
shapes and placements. The approach takes physics-based character motion
generation a step closer to broad applicability.
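To make the training setup above concrete, here is a minimal PyTorch-style sketch of the key idea: both the adversarial discriminator and the policy take scene context (e.g., object pose and shape features) as an extra input, so motion realism is judged, and actions are chosen, relative to the scene. This is not the authors' implementation; the module names, network sizes, and the least-squares discriminator objective with a clipped style reward (in the spirit of adversarial motion priors) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SceneConditionedDiscriminator(nn.Module):
    """Scores how realistic a state transition looks given scene features."""
    def __init__(self, state_dim: int, scene_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim + scene_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, s_next, scene):
        # Concatenate the transition with the scene context before scoring.
        return self.net(torch.cat([s, s_next, scene], dim=-1))

class SceneConditionedPolicy(nn.Module):
    """Action head; the same scene context is part of the policy observation."""
    def __init__(self, state_dim: int, scene_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim + scene_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)

    def forward(self, s, scene):
        return self.mu(self.body(torch.cat([s, scene], dim=-1)))

def discriminator_loss(disc, demo_batch, policy_batch):
    """Least-squares GAN objective over (s, s', scene) tuples.
    demo_batch comes from unlabeled mocap placed in a scene context;
    policy_batch comes from the physically simulated character."""
    real = disc(*demo_batch)
    fake = disc(*policy_batch)
    return ((real - 1.0) ** 2).mean() + ((fake + 1.0) ** 2).mean()

def style_reward(disc, s, s_next, scene):
    """RL reward: higher when the discriminator finds the motion realistic in context."""
    with torch.no_grad():
        d = disc(s, s_next, scene)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)
```

In such a setup the task reward (e.g., reaching or sitting on the target object) would be combined with the style reward, and randomizing object shapes and placements per episode is what would push the policy to generalize beyond the training clips.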
Related papers
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- Revisit Human-Scene Interaction via Space Occupancy [55.67657438543008]
Human-scene Interaction (HSI) generation is a challenging task and crucial for various downstream tasks.
In this work, we argue that interaction with a scene is essentially interacting with the space occupancy of the scene from an abstract physical perspective.
By treating pure motion sequences as records of humans interacting with invisible scene occupancy, we can aggregate motion-only data into a large-scale paired human-occupancy interaction database (a minimal illustration of this pairing idea appears after this list).
arXiv Detail & Related papers (2023-12-05T12:03:00Z)
- MAAIP: Multi-Agent Adversarial Interaction Priors for imitation from fighting demonstrations for physics-based characters [5.303375034962503]
We propose a novel Multi-Agent Generative Adversarial Imitation Learning based approach.
Our system trains control policies allowing each character to imitate the interactive skills associated with each actor.
This approach has been tested on two different fighting styles, boxing and full-body martial art, to demonstrate the ability of the method to imitate different styles.
arXiv Detail & Related papers (2023-11-04T20:40:39Z)
- QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors [69.75711933065378]
We show that headset and controller pose can be used to generate realistic full-body poses even in highly constrained environments.
We discuss three features crucial to the method's performance: the environment representation, the contact reward, and scene randomization.
arXiv Detail & Related papers (2023-06-09T04:40:38Z)
- Synthesizing Diverse Human Motions in 3D Indoor Scenes [16.948649870341782]
We present a novel method for populating 3D indoor scenes with virtual humans that can navigate in the environment and interact with objects in a realistic manner.
Existing approaches rely on training sequences that contain captured human motions and the 3D scenes they interact with.
We propose a reinforcement learning-based approach that enables virtual humans to navigate in 3D scenes and interact with objects realistically and autonomously.
arXiv Detail & Related papers (2023-05-21T09:22:24Z)
- PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered Environments [69.03289331433874]
We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes.
Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment.
We compare our method with prior motion generating techniques and highlight the benefits of our method with a perceptual study and physical plausibility metrics.
arXiv Detail & Related papers (2023-03-24T19:49:08Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- Stochastic Scene-Aware Motion Prediction [41.6104600038666]
We present a novel data-driven, stochastic motion synthesis method that models different styles of performing a given action with a target object.
Our method, SAMP (Scene-Aware Motion Prediction), generalizes to target objects of various geometries while enabling the character to navigate in cluttered scenes.
arXiv Detail & Related papers (2021-08-18T17:56:17Z)
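The space-occupancy pairing idea mentioned in the "Revisit Human-Scene Interaction via Space Occupancy" entry above can be pictured with a small NumPy sketch: a motion-only clip is paired with an occupancy volume derived from the space the body passes through, so no scene annotation is needed. The grid resolution, extent, and the use of raw joint positions are assumptions made purely for illustration, not the paper's actual construction.

```python
import numpy as np

def motion_to_occupancy(joints: np.ndarray,
                        grid_size: int = 64,
                        extent: float = 4.0) -> np.ndarray:
    """joints: (T, J, 3) joint positions in meters, roughly centered on the origin.
    Returns a binary (grid_size, grid_size, grid_size) voxel grid marking the
    space the body occupied over the clip."""
    occ = np.zeros((grid_size, grid_size, grid_size), dtype=bool)
    # Map world coordinates in [-extent/2, extent/2] to voxel indices.
    idx = np.floor((joints.reshape(-1, 3) / extent + 0.5) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ

# Usage: every mocap clip yields a (motion, occupancy) training pair.
# clip = load_clip(...)                      # hypothetical loader, shape (T, J, 3)
# pair = (clip, motion_to_occupancy(clip))
```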
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.