SAGA: Stochastic Whole-Body Grasping with Contact
- URL: http://arxiv.org/abs/2112.10103v1
- Date: Sun, 19 Dec 2021 10:15:30 GMT
- Title: SAGA: Stochastic Whole-Body Grasping with Contact
- Authors: Yan Wu, Jiahao Wang, Yan Zhang, Siwei Zhang, Otmar Hilliges, Fisher Yu, Siyu Tang
- Abstract summary: Human grasping synthesis has numerous applications including AR/VR, video games, and robotics.
In this work, our goal is to synthesize whole-body grasping motion. Given a 3D object, we aim to generate diverse and natural whole-body human motions that approach and grasp the object.
- Score: 60.43627793243098
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human grasping synthesis has numerous applications including AR/VR, video
games, and robotics. While some methods have been proposed to generate
realistic hand-object interaction for object grasping and manipulation, they
typically only consider the hand interacting with objects. In this work, our
goal is to synthesize whole-body grasping motion. Given a 3D object, we aim to
generate diverse and natural whole-body human motions that approach and grasp
the object. This task is challenging as it requires modeling both whole-body
dynamics and dexterous finger movements. To this end, we propose SAGA
(StochAstic whole-body Grasping with contAct) which consists of two key
components: (a) Static whole-body grasping pose generation. Specifically, we
propose a multi-task generative model to jointly learn static whole-body
grasping poses and human-object contacts. (b) Grasping motion infilling. Given
an initial pose and the generated whole-body grasping pose as the starting and
ending poses of the motion respectively, we design a novel contact-aware
generative motion infilling module to generate a diverse set of grasp-oriented
motions. We demonstrate the effectiveness of our method as the first
generative framework to synthesize realistic and expressive whole-body motions
that approach and grasp randomly placed unseen objects. The code and videos are
available at: https://jiahaoplus.github.io/SAGA/saga.html.
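
The abstract describes a two-stage pipeline: stage (a) samples a static whole-body grasp pose together with human-object contact labels from a multi-task generative model, and stage (b) infills a diverse approach motion between a given initial pose and that sampled grasp. The Python sketch below illustrates only this data flow under invented assumptions; every name and tensor size here (GraspPoseVAE, MotionInfillNet, the 165-dimensional pose vector, the 60-frame horizon) is a hypothetical placeholder, not the authors' architecture or released API.

    # Hypothetical sketch of SAGA's two-stage pipeline; all names and
    # dimensions are illustrative placeholders, not the paper's actual code.
    import torch
    import torch.nn as nn

    class GraspPoseVAE(nn.Module):
        """Stage (a): conditioned on object features, decode a static
        whole-body grasping pose plus per-vertex contact probabilities
        (the multi-task part). Architecture and sizes are invented."""
        def __init__(self, obj_dim=256, latent_dim=32, pose_dim=165, n_verts=400):
            super().__init__()
            self.pose_dim = pose_dim
            self.latent_dim = latent_dim
            self.decoder = nn.Sequential(
                nn.Linear(obj_dim + latent_dim, 512), nn.ReLU(),
                nn.Linear(512, pose_dim + n_verts))

        def sample(self, obj_feat):
            # A fresh latent sample gives a different grasp for the same object.
            z = torch.randn(obj_feat.shape[0], self.latent_dim)
            out = self.decoder(torch.cat([obj_feat, z], dim=-1))
            pose = out[:, :self.pose_dim]
            contacts = torch.sigmoid(out[:, self.pose_dim:])
            return pose, contacts

    class MotionInfillNet(nn.Module):
        """Stage (b): infilling between the initial pose and the sampled
        grasp pose; a latent code diversifies the in-between motion."""
        def __init__(self, pose_dim=165, latent_dim=16, horizon=60):
            super().__init__()
            self.pose_dim, self.horizon, self.latent_dim = pose_dim, horizon, latent_dim
            self.net = nn.Sequential(
                nn.Linear(2 * pose_dim + latent_dim, 512), nn.ReLU(),
                nn.Linear(512, horizon * pose_dim))

        def sample(self, start_pose, end_pose):
            z = torch.randn(start_pose.shape[0], self.latent_dim)
            traj = self.net(torch.cat([start_pose, end_pose, z], dim=-1))
            return traj.view(-1, self.horizon, self.pose_dim)

    # End-to-end: object encoding -> stochastic grasp pose (+ contacts) -> motion.
    obj_feat = torch.randn(1, 256)        # e.g. a point-cloud encoding of the object
    start_pose = torch.zeros(1, 165)      # the given initial body pose
    end_pose, contacts = GraspPoseVAE().sample(obj_feat)
    motion = MotionInfillNet().sample(start_pose, end_pose)  # [1, 60, 165]

Because both stages draw a fresh latent code, re-running either sampler yields a different grasp or a different approach motion for the same object placement, which is the stochasticity the title refers to.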
Related papers
- GraspDiffusion: Synthesizing Realistic Whole-body Hand-Object Interaction [9.564223516111275]
Recent generative models can synthesize high-quality images but often fail to generate humans interacting with objects using their hands.
In this paper, we propose GraspDiffusion, a novel generative method that creates realistic scenes of human-object interaction.
arXiv Detail & Related papers (2024-10-17T01:45:42Z)
- Target Pose Guided Whole-body Grasping Motion Generation for Digital Humans [8.741075482543991]
We propose a grasping motion generation framework for digital humans.
We first generate a target pose for the whole-body digital human using off-the-shelf target grasping pose generation methods.
Given an initial pose and this generated target pose, a transformer-based neural network generates the whole grasping trajectory.
arXiv Detail & Related papers (2024-09-26T05:43:23Z)
- GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency [57.9920824261925]
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment.
Modeling realistic hand-object interactions is critical for applications in computer graphics, computer vision, and mixed reality.
GRIP is a learning-based method that takes as input the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
arXiv Detail & Related papers (2023-08-22T17:59:51Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- Generating Holistic 3D Human Motion from Speech [97.11392166257791]
We build a high-quality dataset of 3D holistic body meshes with synchronous speech.
We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately.
arXiv Detail & Related papers (2022-12-08T17:25:19Z)
- GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping [47.49549115570664]
Existing methods focus on the major limbs of the body, ignoring the hands and head.
Hands have been separately studied, but the focus has been on generating realistic static grasps of objects.
We need to generate full-body motions and realistic hand grasps simultaneously.
For the first time, we address the problem of generating full-body, hand and head motions of an avatar grasping an unknown object.
arXiv Detail & Related papers (2021-12-21T18:59:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.