FLEX: Full-Body Grasping Without Full-Body Grasps
- URL: http://arxiv.org/abs/2211.11903v2
- Date: Tue, 28 Mar 2023 21:03:06 GMT
- Title: FLEX: Full-Body Grasping Without Full-Body Grasps
- Authors: Purva Tendulkar and Dídac Surís and Carl Vondrick
- Abstract summary: We address the task of generating a virtual human -- hands and full body -- grasping everyday objects.
Existing methods approach this problem by collecting a 3D dataset of humans interacting with objects and training on this data.
We leverage the existence of both full-body pose and hand grasping priors, composing them using 3D geometrical constraints to obtain full-body grasps.
- Score: 24.10724524386518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing 3D human avatars interacting realistically with a scene is an
important problem with applications in AR/VR, video games and robotics. Towards
this goal, we address the task of generating a virtual human -- hands and full
body -- grasping everyday objects. Existing methods approach this problem by
collecting a 3D dataset of humans interacting with objects and training on this
data. However, 1) these methods do not generalize to different object positions
and orientations, or to the presence of furniture in the scene, and 2) the
diversity of their generated full-body poses is very limited. In this work, we
address all the above challenges to generate realistic, diverse full-body
grasps in everyday scenes without requiring any 3D full-body grasping data. Our
key insight is to leverage the existence of both full-body pose and hand
grasping priors, composing them using 3D geometrical constraints to obtain
full-body grasps. We empirically validate that these constraints can generate a
variety of feasible human grasps that are superior to baselines both
quantitatively and qualitatively. See our webpage for more details:
https://flex.cs.columbia.edu/.
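A loose sketch of this composition idea, assuming off-the-shelf priors with simple latent interfaces (the decoder stub, joint count, and wrist index below are hypothetical stand-ins, not the authors' implementation): optimize a body-prior latent so that the body's wrist satisfies a 3D constraint implied by a hand grasp, while a prior term keeps the pose plausible.

```python
# Toy sketch: compose a body-pose prior with a hand-grasp prior via a 3D
# wrist-alignment constraint. Everything here is a stub; FLEX's actual
# priors and losses differ.
import torch

torch.manual_seed(0)

# Stub body prior "decoder": latent code -> 22 x 3 body joint positions.
body_decoder = torch.nn.Linear(32, 22 * 3)
WRIST = 20  # assumed wrist-joint index

# Wrist position implied by a hand grasp of the object (from the hand prior).
target_wrist = torch.tensor([0.4, 0.9, 0.3])

z = torch.zeros(32, requires_grad=True)  # body latent to optimize
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    joints = body_decoder(z).view(22, 3)
    # Geometric constraint: body wrist must reach the grasp's wrist position.
    loss_constraint = (joints[WRIST] - target_wrist).pow(2).sum()
    # Prior term: keep the latent near the prior's high-density region.
    loss_prior = 1e-2 * z.pow(2).sum()
    opt.zero_grad()
    (loss_constraint + loss_prior).backward()
    opt.step()

final_err = (body_decoder(z).view(22, 3)[WRIST] - target_wrist).norm()
print(f"final wrist error: {final_err.item():.4f}")
```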
Related papers
- Decanus to Legatus: Synthetic training for 2D-3D human pose lifting [26.108023246654646]
We propose an algorithm to generate infinite 3D synthetic human poses (Legatus) from a 3D pose distribution based on 10 initial handcrafted 3D poses (Decanus); a toy sketch of this idea appears after this entry.
Our results show that we can achieve 3D pose estimation performance comparable to methods using real data from specialized datasets but in a zero-shot setup, showing the potential of our framework.
arXiv Detail & Related papers (2022-10-05T13:10:19Z)
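One plausible, purely illustrative reading of that idea (the paper's actual generation procedure is more involved, and the skeleton size below is an assumption): draw new poses as convex combinations of a few anchor poses plus small jitter.

```python
# Toy sketch: sample synthetic 3D poses as Dirichlet-weighted blends of a
# handful of handcrafted anchor poses, plus Gaussian jitter. Illustrative
# only; not the Decanus-to-Legatus algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
N_JOINTS = 17  # assumed skeleton size
# Random stand-ins for the 10 handcrafted anchor poses.
anchors = rng.standard_normal((10, N_JOINTS, 3))

def sample_pose(noise_scale: float = 0.05) -> np.ndarray:
    """Draw one synthetic pose from the anchor-defined distribution."""
    weights = rng.dirichlet(np.ones(len(anchors)))  # convex blend weights
    pose = np.tensordot(weights, anchors, axes=1)   # (N_JOINTS, 3)
    return pose + noise_scale * rng.standard_normal(pose.shape)

synthetic = np.stack([sample_pose() for _ in range(1000)])
print(synthetic.shape)  # (1000, 17, 3)
```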
- BEHAVE: Dataset and Method for Tracking Human Object Interactions [105.77368488612704]
We present the first full-body human-object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits, along with annotated contacts between them.
We use this data to learn a model that can jointly track humans and objects in natural environments with an easy-to-use portable multi-camera setup.
arXiv Detail & Related papers (2022-04-14T13:21:19Z)
- GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping [47.49549115570664]
Existing methods focus on the major limbs of the body, ignoring the hands and head. Hands have been studied separately, but the focus there has been on generating realistic static grasps of objects.
We need to generate full-body motions and realistic hand grasps simultaneously.
For the first time, we address the problem of generating full-body, hand and head motions of an avatar grasping an unknown object.
arXiv Detail & Related papers (2021-12-21T18:59:34Z)
- SAGA: Stochastic Whole-Body Grasping with Contact [60.43627793243098]
Human grasping synthesis has numerous applications including AR/VR, video games, and robotics.
In this work, our goal is to synthesize whole-body grasping motion. Given a 3D object, we aim to generate diverse and natural whole-body human motions that approach and grasp the object.
arXiv Detail & Related papers (2021-12-19T10:15:30Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are learned directly from data; a minimal sketch of such an implicit field appears after this entry.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
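A minimal sketch of what such an implicit field can look like (the layer sizes and SMPL-like bone count are assumptions, not S3's architecture): an MLP maps a 3D query point to an occupancy value and per-bone skinning weights.

```python
# Toy neural implicit field: 3D point -> occupancy + skinning weights.
# Assumed architecture for illustration; not the paper's design.
import torch
import torch.nn as nn

N_BONES = 24  # assumed SMPL-like skeleton

class ImplicitBodyField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + N_BONES),  # occupancy logit + bone logits
        )

    def forward(self, points: torch.Tensor):
        out = self.net(points)
        occupancy = torch.sigmoid(out[..., :1])          # inside/outside
        skinning = torch.softmax(out[..., 1:], dim=-1)   # weights sum to 1
        return occupancy, skinning

field = ImplicitBodyField()
occ, w = field(torch.rand(4096, 3))  # query 4096 random points
print(occ.shape, w.shape)            # (4096, 1) and (4096, 24)
```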
- GRAB: A Dataset of Whole-Body Human Grasping of Objects [53.00728704389501]
Training computers to understand human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.
We collect a new dataset, called GRAB, of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size.
This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task.
arXiv Detail & Related papers (2020-08-25T17:57:55Z)
- Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild [96.08358373137438]
We present a method that infers spatial arrangements and shapes of humans and objects in a globally consistent 3D scene.
Our method runs on datasets without any scene- or object-level 3D supervision.
arXiv Detail & Related papers (2020-07-30T17:59:50Z)