STAR: Sparse Trained Articulated Human Body Regressor
- URL: http://arxiv.org/abs/2008.08535v1
- Date: Wed, 19 Aug 2020 16:27:55 GMT
- Title: STAR: Sparse Trained Articulated Human Body Regressor
- Authors: Ahmed A. A. Osman, Timo Bolkart, Michael J. Black
- Abstract summary: We introduce STAR, which is quantitatively and qualitatively superior to SMPL.
SMPL has a huge number of parameters resulting from its use of global blend shapes.
SMPL factors pose-dependent deformations from body shape while, in reality, people with different shapes deform differently.
We show that the shape space of SMPL is not rich enough to capture the variation in the human population.
- Score: 62.71047277943326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The SMPL body model is widely used for the estimation, synthesis, and
analysis of 3D human pose and shape. While popular, we show that SMPL has
several limitations and introduce STAR, which is quantitatively and
qualitatively superior to SMPL. First, SMPL has a huge number of parameters
resulting from its use of global blend shapes. These dense pose-corrective
offsets relate every vertex on the mesh to all the joints in the kinematic
tree, capturing spurious long-range correlations. To address this, we define
per-joint pose correctives and learn the subset of mesh vertices that are
influenced by each joint movement. This sparse formulation results in more
realistic deformations and significantly reduces the number of model parameters
to 20% of SMPL. When trained on the same data as SMPL, STAR generalizes better
despite having many fewer parameters. Second, SMPL factors pose-dependent
deformations from body shape while, in reality, people with different shapes
deform differently. Consequently, we learn shape-dependent pose-corrective
blend shapes that depend on both body pose and BMI. Third, we show that the
shape space of SMPL is not rich enough to capture the variation in the human
population. We address this by training STAR with an additional 10,000 scans of
male and female subjects, and show that this results in better model
generalization. STAR is compact, generalizes better to new bodies and is a
drop-in replacement for SMPL. STAR is publicly available for research purposes
at http://star.is.tue.mpg.de.
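The core idea of the abstract, replacing SMPL's dense global pose-corrective blend shapes with sparse per-joint correctives, can be illustrated with a toy numpy sketch. All dimensions, data, and the masking scheme here are hypothetical simplifications for illustration, not STAR's actual learned parameters (the real SMPL/STAR meshes have 6890 vertices and 24 body joints):

```python
import numpy as np

# Hypothetical toy dimensions (the real mesh has 6890 vertices, 24 joints).
N_VERTS, N_JOINTS = 100, 4

rng = np.random.default_rng(0)

# Per-joint pose features (e.g. the 9 elements of each joint's rotation
# matrix minus the identity, as in SMPL's pose-corrective formulation).
pose_feat = rng.normal(size=(N_JOINTS, 9))

# Dense (SMPL-style) corrective: every vertex depends on every joint.
dense_blend = rng.normal(scale=1e-3, size=(N_JOINTS, 9, N_VERTS, 3))

# Sparse (STAR-style) corrective: each joint influences only a subset of
# vertices; here a random boolean mask stands in for the learned subset.
mask = rng.random((N_JOINTS, N_VERTS)) < 0.2
sparse_blend = dense_blend * mask[:, None, :, None]

def pose_offsets(blend, feats):
    """Sum per-joint corrective vertex offsets: (J,9,V,3) x (J,9) -> (V,3)."""
    return np.einsum('jfvc,jf->vc', blend, feats)

dense_off = pose_offsets(dense_blend, pose_feat)
sparse_off = pose_offsets(sparse_blend, pose_feat)

# The sparse formulation stores far fewer non-zero parameters.
frac = np.count_nonzero(sparse_blend) / dense_blend.size
```

Both formulations produce a per-vertex offset added to the template mesh before skinning; the sparse one simply zeroes the influence of each joint outside its learned vertex subset, which is where the parameter reduction comes from.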
Related papers
- ToMiE: Towards Modular Growth in Enhanced SMPL Skeleton for 3D Human with Animatable Garments [41.23897822168498]
We propose a modular growth strategy that enables the joint tree of the skeleton to expand adaptively.
Specifically, our method, called ToMiE, consists of parent joints localization and external joints optimization.
ToMiE manages to outperform other methods across various cases with garments, not only in rendering quality but also by offering free animation of grown joints.
arXiv Detail & Related papers (2024-10-10T16:25:52Z)
- SUPR: A Sparse Unified Part-Based Human Representation [61.693373050670644]
We show that existing models of the head and hands fail to capture the full range of motion for these parts.
Previous body part models are trained using 3D scans that are isolated to the individual parts.
We propose a new learning scheme that jointly trains a full-body model and specific part models.
arXiv Detail & Related papers (2022-10-25T09:32:34Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces [28.99602069185613]
We propose a dual-space NeRF that models the scene lighting and the human body with two skinnings in two separate spaces.
To bridge these two spaces, previous methods mostly rely on the linear blend skinning (LBS) algorithm.
We propose to use barycentric mapping, which generalizes directly to unseen poses and yields surprisingly better results than LBS with neural blending weights.
arXiv Detail & Related papers (2022-08-31T13:35:04Z)
- Learnable human mesh triangulation for 3D human pose and shape estimation [6.699132260402631]
The accuracy of joint rotation and shape estimation has received relatively little attention in skinned multi-person linear model (SMPL)-based human mesh reconstruction from multi-view images.
We propose a two-stage method to resolve the ambiguity of joint rotation and shape reconstruction and the difficulty of network learning.
The proposed method significantly outperforms the previous works in terms of joint rotation and shape estimation, and achieves competitive performance in terms of joint location estimation.
arXiv Detail & Related papers (2022-08-24T01:11:57Z)
- Adversarial Parametric Pose Prior [106.12437086990853]
We learn a prior that restricts the SMPL parameters to values that produce realistic poses via adversarial training.
We show that our learned prior covers the diversity of the real-data distribution, facilitates optimization for 3D reconstruction from 2D keypoints, and yields better pose estimates when used for regression from images.
arXiv Detail & Related papers (2021-12-08T10:05:32Z)
- LEAP: Learning Articulated Occupancy of People [56.35797895609303]
We introduce LEAP (LEarning Articulated occupancy of People), a novel neural occupancy representation of the human body.
Given a set of bone transformations and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions.
LEAP efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space.
arXiv Detail & Related papers (2021-04-14T13:41:56Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
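Several of the related papers above (LEAP, Dual-Space NeRF) build on or compare against linear blend skinning (LBS), the standard skinning algorithm that SMPL and STAR also use. A minimal self-contained sketch of classic LBS, using made-up toy data (two joints, two vertices) rather than any real model's weights:

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """Classic LBS: each posed vertex is a weighted sum of the vertex
    transformed by every joint's 4x4 rigid transform.
    verts: (V,3), weights: (V,J) with rows summing to 1,
    joint_transforms: (J,4,4). Returns posed vertices (V,3)."""
    V = verts.shape[0]
    homo = np.concatenate([verts, np.ones((V, 1))], axis=1)        # (V,4) homogeneous coords
    per_joint = np.einsum('jab,vb->jva', joint_transforms, homo)   # each joint moves every vertex
    posed = np.einsum('vj,jva->va', weights, per_joint)            # blend by skinning weights
    return posed[:, :3]

# Toy example: joint 0 is the identity, joint 1 translates +2 along x.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
weights = np.array([[1.0, 0.0],    # vertex 0 follows joint 0 only
                    [0.5, 0.5]])   # vertex 1 blends both joints equally
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 2.0
posed = linear_blend_skinning(verts, weights, np.stack([T0, T1]))
# vertex 0 stays at the origin; vertex 1 moves halfway toward the
# fully translated position, i.e. to (2, 0, 0)
```

This linearity is exactly what pose-corrective blend shapes (SMPL, STAR) compensate for, and what LEAP's learned LBS functions and Dual-Space NeRF's barycentric mapping replace or extend.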
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.