Embodied Hands: Modeling and Capturing Hands and Bodies Together
- URL: http://arxiv.org/abs/2201.02610v1
- Date: Fri, 7 Jan 2022 18:59:32 GMT
- Title: Embodied Hands: Modeling and Capturing Hands and Bodies Together
- Authors: Javier Romero, Dimitrios Tzionas, Michael J. Black
- Abstract summary: Humans move their hands and bodies together to communicate and solve tasks.
Most methods treat the 3D modeling and tracking of bodies and hands separately.
We formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences.
- Score: 61.32931890166915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans move their hands and bodies together to communicate and solve tasks.
Capturing and replicating such coordinated activity is critical for virtual
characters that behave realistically. Surprisingly, most methods treat the 3D
modeling and tracking of bodies and hands separately. Here we formulate a model
of hands and bodies interacting together and fit it to full-body 4D sequences.
When scanning or capturing the full body in 3D, hands are small and often
partially occluded, making their shape and pose hard to recover. To cope with
low-resolution, occlusion, and noise, we develop a new model called MANO (hand
Model with Articulated and Non-rigid defOrmations). MANO is learned from around
1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand
poses. The model is realistic, low-dimensional, captures non-rigid shape
changes with pose, is compatible with standard graphics packages, and can fit
any human hand. MANO provides a compact mapping from hand poses to pose blend
shape corrections and a linear manifold of pose synergies. We attach MANO to a
standard parameterized 3D body shape model (SMPL), resulting in a fully
articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting
complex, natural activities of subjects captured with a 4D scanner. The
fitting is fully automatic and results in full body models that move naturally
with detailed hand motions and a realism not seen before in full body
performance capture. The models and data are freely available for research
purposes on our website (http://mano.is.tue.mpg.de).
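To make the formulation concrete, below is a minimal NumPy sketch of the SMPL/MANO-style forward pass the abstract describes: a template mesh deformed by shape blend shapes and by pose blend shape corrections driven by joint rotations, then posed via forward kinematics and linear blend skinning. All array names and dimensions here are illustrative assumptions; the actual templates, blend shapes, joint regressor, and skinning weights come from the released model files, and this is a sketch of the general recipe rather than the authors' exact implementation.

```python
# Minimal sketch of a MANO/SMPL-style forward pass (illustrative only;
# real parameters come from the released model files).
import numpy as np

def rodrigues(aa):
    """Convert an axis-angle vector (3,) to a rotation matrix (3, 3)."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:
        return np.eye(3)
    k = aa / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def hand_forward(betas, pose, template, shape_dirs, pose_dirs,
                 joint_regressor, skin_weights, parents):
    """Pose a mesh from shape and pose parameters (assumed shapes below).

    betas           (S,)             shape coefficients
    pose            (J, 3)           axis-angle rotation per joint (root first)
    template        (V, 3)           mean template mesh
    shape_dirs      (V, 3, S)        shape blend shapes
    pose_dirs       (V, 3, 9*(J-1))  pose blend shapes
    joint_regressor (J, V)           regresses rest-pose joints from vertices
    skin_weights    (V, J)           linear blend skinning weights
    parents         (J,)             kinematic tree (parents[0] is the root)
    """
    # 1. Identity-dependent shape: T = T_mean + B_S(betas)
    v_shaped = template + np.einsum('vcs,s->vc', shape_dirs, betas)
    joints = joint_regressor @ v_shaped

    # 2. Pose blend shape corrections, driven by (R(theta_j) - I) of the
    #    non-root joints -- the non-rigid shape changes with pose.
    rots = np.stack([rodrigues(aa) for aa in pose])          # (J, 3, 3)
    pose_feat = (rots[1:] - np.eye(3)).reshape(-1)
    v_posed = v_shaped + np.einsum('vcp,p->vc', pose_dirs, pose_feat)

    # 3. Forward kinematics: accumulate rigid transforms down the tree.
    n_joints = len(parents)
    G = np.tile(np.eye(4), (n_joints, 1, 1))
    G[0, :3, :3], G[0, :3, 3] = rots[0], joints[0]
    for j in range(1, n_joints):
        local = np.eye(4)
        local[:3, :3] = rots[j]
        local[:3, 3] = joints[j] - joints[parents[j]]
        G[j] = G[parents[j]] @ local

    # Subtract each rest-pose joint location so the transforms act
    # directly on mesh coordinates.
    for j in range(n_joints):
        G[j, :3, 3] -= G[j, :3, :3] @ joints[j]

    # 4. Linear blend skinning: each vertex follows a weighted blend of
    #    its joints' transforms.
    T = np.einsum('vj,jab->vab', skin_weights, G)            # (V, 4, 4)
    v_hom = np.concatenate([v_posed, np.ones((len(v_posed), 1))], axis=1)
    return np.einsum('vab,vb->va', T, v_hom)[:, :3]
```

For MANO, J = 16 (the wrist plus 15 finger joints), and the low-dimensional pose synergies mentioned above would let `pose` be produced from a handful of PCA coefficients rather than specified per joint; for SMPL+H the same machinery runs over the full-body template and kinematic tree. In practice one would use the authors' released models and an existing implementation (e.g., the smplx Python package) rather than re-deriving this.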
Related papers
- Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot [22.848563931757962]
We present Multi-HMR, a strong single-shot model for multi-person 3D human mesh recovery from a single RGB image.
Predictions encompass the whole body, including hands and facial expressions, using the SMPL-X parametric model.
We show that incorporating suitable additional data into training further enhances predictions, particularly for hands.
arXiv Detail & Related papers (2024-02-22T16:05:13Z)
- Interactive Sketching of Mannequin Poses [3.222802562733787]
3D body poses are necessary for various downstream applications.
We propose a machine-learning model for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a cylinder-person style.
Our unique approach to vector graphics training data underpins our integrated ML-and-kinematics system.
arXiv Detail & Related papers (2022-12-14T08:45:51Z)
- SUPR: A Sparse Unified Part-Based Human Representation [61.693373050670644]
We show that existing models of the head and hands fail to capture the full range of motion for these parts.
Previous body part models are trained using 3D scans that are isolated to the individual parts.
We propose a new learning scheme that jointly trains a full-body model and specific part models.
arXiv Detail & Related papers (2022-10-25T09:32:34Z)
- NIMBLE: A Non-rigid Hand Model with Bones and Muscles [41.19718491215149]
We present NIMBLE, a novel parametric hand model that includes the missing key components.
NIMBLE consists of 20 bones as triangular meshes, 7 muscle groups as tetrahedral meshes, and a skin mesh.
We demonstrate applying NIMBLE to modeling, rendering, and visual inference tasks.
arXiv Detail & Related papers (2022-02-09T15:57:21Z)
- GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping [47.49549115570664]
Existing methods focus on the major limbs of the body, ignoring the hands and head. Hands have been separately studied but the focus has been on generating realistic static grasps of objects.
We need to generate full-body motions and realistic hand grasps simultaneously.
For the first time, we address the problem of generating full-body, hand and head motions of an avatar grasping an unknown object.
arXiv Detail & Related papers (2021-12-21T18:59:34Z)
- Human Performance Capture from Monocular Video in the Wild [50.34917313325813]
We propose a method capable of capturing the dynamic 3D human shape from a monocular video featuring challenging body poses.
Our method outperforms state-of-the-art methods on an in-the-wild human video dataset 3DPW.
arXiv Detail & Related papers (2021-11-29T16:32:41Z)
- GRAB: A Dataset of Whole-Body Human Grasping of Objects [53.00728704389501]
Training computers to understand human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.
We collect a new dataset, called GRAB, of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size.
This unique dataset goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task.
arXiv Detail & Related papers (2020-08-25T17:57:55Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.