SUPR: A Sparse Unified Part-Based Human Representation
- URL: http://arxiv.org/abs/2210.13861v1
- Date: Tue, 25 Oct 2022 09:32:34 GMT
- Title: SUPR: A Sparse Unified Part-Based Human Representation
- Authors: Ahmed A. A. Osman, Timo Bolkart, Dimitrios Tzionas, Michael J. Black
- Abstract summary: We show that existing models of the head and hands fail to capture the full range of motion for these parts.
Previous body part models are trained using 3D scans that are isolated to the individual parts.
We propose a new learning scheme that jointly trains a full-body model and specific part models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical 3D shape models of the head, hands, and full body are widely used
in computer vision and graphics. Despite their wide use, we show that existing
models of the head and hands fail to capture the full range of motion for these
parts. Moreover, existing work largely ignores the feet, which are crucial for
modeling human movement and have applications in biomechanics, animation, and
the footwear industry. The problem is that previous body part models are
trained using 3D scans that are isolated to the individual parts. Such data
does not capture the full range of motion for such parts, e.g., the motion of the
head relative to the neck. Our observation is that full-body scans provide
important information about the motion of the body parts. Consequently, we
propose a new learning scheme that jointly trains a full-body model and
specific part models using a federated dataset of full-body and body-part
scans. Specifically, we train an expressive human body model called SUPR
(Sparse Unified Part-Based Human Representation), where each joint strictly
influences a sparse set of model vertices. The factorized representation
enables separating SUPR into an entire suite of body part models. Note that the
feet have received little attention and existing 3D body models have highly
under-actuated feet. Using novel 4D scans of feet, we train a model with an
extended kinematic tree that captures the range of motion of the toes.
Additionally, feet deform due to ground contact. To model this, we include a
novel non-linear deformation function that predicts foot deformation
conditioned on the foot pose, shape, and ground contact. We train SUPR on an
unprecedented number of scans: 1.2 million body, head, hand and foot scans. We
quantitatively compare SUPR and the separated body parts and find that our
suite of models generalizes better than existing models. SUPR is available at
http://supr.is.tue.mpg.de
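The abstract describes two ideas worth unpacking: sparse skinning weights, where each joint influences only a local set of vertices, and the factorized representation that lets the full-body model be separated into part models. Below is a minimal NumPy sketch of both under those assumptions; the function names, shapes, and the thresholding used to carve out a part are illustrative, not SUPR's actual API.

```python
import numpy as np

def sparse_lbs(vertices, weights, transforms):
    """Pose a mesh with linear blend skinning.

    vertices:   (V, 3) template vertices
    weights:    (V, J) blend weights; in a SUPR-style model each row is
                sparse, so every joint influences only a local vertex set
    transforms: (J, 4, 4) rigid world transform per joint
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])            # (V, 4)
    blended = np.einsum("vj,jab->vab", weights, transforms)  # per-vertex blended transform
    posed = np.einsum("vab,vb->va", blended, homo)           # apply it to each vertex
    return posed[:, :3]

def split_part(vertices, weights, part_joints):
    """Carve a body part out of the full model: with sparse weights, the
    part is exactly the set of vertices its joints influence (illustrative
    separation rule, not the published procedure)."""
    mask = weights[:, part_joints].sum(axis=1) > 0
    return vertices[mask], weights[np.ix_(mask, np.asarray(part_joints))]
```

With dense weights, every joint would touch every vertex and no such clean split would exist; sparsity is what makes the head, hand, and foot models separable from the full body.

The abstract also mentions a non-linear deformation function conditioned on foot pose, shape, and ground contact. The sketch below shows one plausible form of that conditioning: a small MLP regressing per-vertex offsets. The layer sizes, input layout, and random untrained weights are assumptions for illustration, not the published SUPR network.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, sizes):
    """Tiny fully connected net with random, untrained weights."""
    for i, (n_in, n_out) in enumerate(zip(sizes[:-1], sizes[1:])):
        W = rng.standard_normal((n_in, n_out)) * 0.01
        x = x @ W
        if i < len(sizes) - 2:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

def foot_deformation(pose, shape, contact, n_verts):
    """Regress (V, 3) vertex offsets from foot pose, body shape, and a
    per-vertex ground-contact indicator, mirroring the conditioning the
    abstract describes (inputs and sizes are made up)."""
    cond = np.concatenate([pose, shape, contact])  # 1-D condition vector
    out = mlp(cond[None, :], [cond.size, 128, 128, n_verts * 3])
    return out.reshape(n_verts, 3)

# Example with hypothetical dimensions: 45-D toe/ankle pose, 10-D shape,
# one contact flag per vertex of a 1000-vertex foot mesh.
offsets = foot_deformation(np.zeros(45), np.zeros(10), np.zeros(1000), 1000)
```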
Related papers
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z) - Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot [22.848563931757962]
We present Multi-HMR, a strong single-shot model for multi-person 3D human mesh recovery from a single RGB image.
Predictions encompass the whole body, including hands and facial expressions, using the SMPL-X parametric model.
We show that incorporating the paper's close-up CUFFS dataset into the training data further enhances predictions, particularly for hands.
arXiv Detail & Related papers (2024-02-22T16:05:13Z) - DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks [12.132886846993108]
High-resolution models enable photo-realistic avatars but at the cost of requiring studio settings not available to end users.
Our goal is to create avatars directly from raw images without relying on expensive studio setups and surface tracking.
We introduce a three-stage method that induces two inductive biases to better disentangle pose-dependent deformation.
arXiv Detail & Related papers (2022-05-03T17:56:46Z) - NIMBLE: A Non-rigid Hand Model with Bones and Muscles [41.19718491215149]
We present NIMBLE, a novel parametric hand model that includes the key anatomical components missing from prior models: bones and muscles.
NIMBLE consists of 20 bones as triangular meshes, 7 muscle groups as tetrahedral meshes, and a skin mesh.
We demonstrate applying NIMBLE to modeling, rendering, and visual inference tasks.
arXiv Detail & Related papers (2022-02-09T15:57:21Z) - Embodied Hands: Modeling and Capturing Hands and Bodies Together [61.32931890166915]
Humans move their hands and bodies together to communicate and solve tasks.
Most methods treat the 3D modeling and tracking of bodies and hands separately.
We formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences.
arXiv Detail & Related papers (2022-01-07T18:59:32Z) - LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)