Accurate 3D Body Shape Regression using Metric and Semantic Attributes
- URL: http://arxiv.org/abs/2206.07036v1
- Date: Tue, 14 Jun 2022 17:54:49 GMT
- Title: Accurate 3D Body Shape Regression using Metric and Semantic Attributes
- Authors: Vasileios Choutas, Lea Muller, Chun-Hao P. Huang, Siyu Tang, Dimitrios
Tzionas, Michael J. Black
- Abstract summary: We show, for the first time, that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes.
- Score: 55.58629009876271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While methods that regress 3D human meshes from images have progressed
rapidly, the estimated body shapes often do not capture the true human shape.
This is problematic since, for many applications, accurate body shape is as
important as pose. The key reason that body shape accuracy lags pose accuracy
is the lack of data. While humans can label 2D joints, and these constrain 3D
pose, it is not so easy to "label" 3D body shape. Since paired data with images
and 3D body shape are rare, we exploit two sources of information: (1) we
collect internet images of diverse "fashion" models together with a small set
of anthropometric measurements; (2) we collect linguistic shape attributes for
a wide range of 3D body meshes and the model images. Taken together, these
datasets provide sufficient constraints to infer dense 3D shape. We exploit the
anthropometric measurements and linguistic shape attributes in several novel
ways to train a neural network, called SHAPY, that regresses 3D human pose and
shape from an RGB image. We evaluate SHAPY on public benchmarks, but note that
they either lack significant body shape variation, ground-truth shape, or
clothing variation. Thus, we collect a new dataset for evaluating 3D human
shape estimation, called HBW, containing photos of "Human Bodies in the Wild"
for which we have ground-truth 3D body scans. On this new benchmark, SHAPY
significantly outperforms state-of-the-art methods on the task of 3D body shape
estimation. This is the first demonstration that 3D body shape regression from
images can be trained from easy-to-obtain anthropometric measurements and
linguistic shape attributes. Our model and data are available at:
shapy.is.tue.mpg.de
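The abstract above hinges on replacing scarce paired image/3D-shape data with two cheap supervision signals: anthropometric measurements and linguistic shape attributes. A minimal NumPy sketch of such a combined loss follows; the dimensions, linear maps, and loss weights are illustrative assumptions, not the authors' actual SHAPY formulation.

```python
import numpy as np

# Hedged sketch: a predicted shape vector (SMPL-like "betas") is mapped to
# anthropometric measurements and linguistic-attribute scores, each compared
# against easy-to-obtain labels. The linear maps and sizes are assumptions.

rng = np.random.default_rng(0)
NUM_BETAS, NUM_MEAS, NUM_ATTR = 10, 3, 15          # assumed sizes

W_meas = rng.normal(size=(NUM_BETAS, NUM_MEAS))    # betas -> e.g. height/chest/waist
W_attr = rng.normal(size=(NUM_BETAS, NUM_ATTR))    # betas -> attribute ratings

def shape_supervision_loss(pred_betas, gt_meas, gt_attr, w_meas=1.0, w_attr=1.0):
    """L1 loss on metric measurements plus MSE on attribute scores."""
    m_pred = pred_betas @ W_meas
    a_pred = pred_betas @ W_attr
    loss_m = np.abs(m_pred - gt_meas).mean()       # metric term, in label units
    loss_a = ((a_pred - gt_attr) ** 2).mean()      # semantic (attribute) term
    return w_meas * loss_m + w_attr * loss_a

betas = rng.normal(size=(4, NUM_BETAS))            # a batch of predicted shapes
loss = shape_supervision_loss(
    betas, rng.normal(size=(4, NUM_MEAS)), rng.normal(size=(4, NUM_ATTR))
)
print(loss >= 0.0)  # True: both terms are non-negative
```

In a real pipeline the betas would come from an image encoder, and the measurement map would be computed on the posed body mesh rather than linearly from the coefficients.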
Related papers
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body must address the challenges raised by partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z) - Single-view 3D Body and Cloth Reconstruction under Complex Poses [37.86174829271747]
We extend existing implicit function-based models to deal with images of humans with arbitrary poses and self-occluded limbs.
We learn an implicit function that maps the input image to a 3D body shape with a low level of detail.
We then learn a displacement map, conditioned on the smoothed surface, which encodes the high-frequency details of the clothes and body.
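The two-stage idea described above, a low-detail implicit body surface followed by a displacement map that adds high-frequency detail, can be sketched in NumPy. The sphere and the sinusoidal "wrinkle" map below are hypothetical stand-ins for the learned implicit function and displacement network, purely for illustration.

```python
import numpy as np

# Hedged sketch: coarse implicit surface + normal-direction displacement.
# The sphere SDF and sinusoidal detail map are illustrative assumptions.

def coarse_sdf(points, radius=1.0):
    """Low-detail implicit body: signed distance to a sphere of given radius."""
    return np.linalg.norm(points, axis=-1) - radius

def displaced_surface(points, amplitude=0.05, freq=8.0):
    """Offset coarse surface points along their normals by a detail map."""
    normals = points / np.linalg.norm(points, axis=-1, keepdims=True)
    detail = amplitude * np.sin(freq * points[..., 2])  # wrinkle-like pattern
    return points + detail[..., None] * normals

# Sample the coarse surface (here: points on the unit sphere), then displace.
theta = np.linspace(0.0, np.pi, 32)
surface = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], axis=-1)
detailed = displaced_surface(surface)
print(np.allclose(coarse_sdf(surface), 0.0))  # samples lie on the coarse surface
```

Conditioning the displacement on the smoothed surface, as the paper does, decouples global body geometry from clothing detail; in this sketch that conditioning is reduced to the surface points themselves.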
arXiv Detail & Related papers (2022-05-09T07:34:06Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - Collaborative Regression of Expressive Bodies using Moderation [54.730550151409474]
Methods that estimate 3D bodies, faces, or hands have progressed significantly, yet separately.
We introduce PIXIE, which produces animatable, whole-body 3D avatars from a single image.
We label training images as male, female, or non-binary, and train PIXIE to infer "gendered" 3D body shapes with a novel shape loss.
arXiv Detail & Related papers (2021-05-11T18:55:59Z) - Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a clothes-retargeting method: generating the potential poses and deformations of a given 3D clothing template to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, since ground-truth data is unattainable, i.e., images of people in the exact same pose wearing different 3D clothing templates.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z) - GRAB: A Dataset of Whole-Body Human Grasping of Objects [53.00728704389501]
Training computers to understand human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.
We collect a new dataset, called GRAB, of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size.
This is a unique dataset, that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task.
arXiv Detail & Related papers (2020-08-25T17:57:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.