SHARP: Shape-Aware Reconstruction of People in Loose Clothing
- URL: http://arxiv.org/abs/2205.11948v1
- Date: Tue, 24 May 2022 10:26:42 GMT
- Title: SHARP: Shape-Aware Reconstruction of People in Loose Clothing
- Authors: Sai Sagar Jinka, Astitva Srivastava, Chandradeep Pokhariya, Avinash
Sharma and P.J. Narayanan
- Abstract summary: SHARP (SHape Aware Reconstruction of People in loose clothing) is a novel end-to-end trainable network.
It recovers the 3D geometry and appearance of humans in loose clothing from a monocular image.
We show superior qualitative and quantitative performance compared to existing state-of-the-art methods.
- Score: 6.469298908778292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in deep learning have enabled 3D human body
reconstruction from a monocular image, which has broad applications in multiple
domains. In this paper, we propose SHARP (SHape Aware Reconstruction of People
in loose clothing), a novel end-to-end trainable network that accurately
recovers the 3D geometry and appearance of humans in loose clothing from a
monocular image. SHARP uses a sparse and efficient fusion strategy to combine
a parametric body prior with a non-parametric 2D representation of clothed
humans. The parametric body prior enforces geometrical consistency on the body
shape and pose, while the non-parametric representation models loose clothing
and handles self-occlusions. We also leverage the sparseness of the
non-parametric representation for faster training of our network while using
losses on 2D maps. Another key contribution is 3DHumans, our new life-like
dataset of 3D human body scans with rich geometrical and textural details. We
evaluate SHARP on 3DHumans and other publicly available datasets and show
superior qualitative and quantitative performance compared to existing
state-of-the-art methods.
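To make the sparse-training idea concrete: since only foreground pixels of the 2D maps carry geometry, a reconstruction loss can skip the mostly empty background. Below is a minimal illustrative sketch of such a masked 2D-map loss in NumPy; the function name and array layout are assumptions, not the paper's implementation.

```python
import numpy as np

def sparse_map_l1(pred, target, valid_mask):
    """L1 loss over only the valid (foreground) pixels of a 2D map.

    pred, target : (H, W, C) float arrays, e.g. predicted and ground-truth
                   depth or RGB maps.
    valid_mask   : (H, W) boolean array marking pixels that carry geometry.
    Restricting the loss to the sparse foreground avoids spending compute
    on empty background, which dominates the image.
    """
    diff = np.abs(pred - target)[valid_mask]  # -> (N_valid, C)
    return diff.mean() if diff.size else 0.0
```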
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z)
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- USR: Unsupervised Separated 3D Garment and Human Reconstruction via Geometry and Semantic Consistency [41.89803177312638]
We propose an unsupervised separated 3D garments and human reconstruction model (USR), which reconstructs the human body and authentic textured clothes in layers without 3D models.
Our method introduces a generalized surface-aware neural radiance field to learn the mapping between sparse multi-view images and the geometry of dressed people.
arXiv Detail & Related papers (2023-02-21T08:48:27Z)
- Accurate 3D Body Shape Regression using Metric and Semantic Attributes [55.58629009876271]
We show, for the first time, that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes.
arXiv Detail & Related papers (2022-06-14T17:54:49Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Learning Temporal 3D Human Pose Estimation with Pseudo-Labels [3.0954251281114513]
We present a simple, yet effective, approach for self-supervised 3D human pose estimation.
We rely on triangulating 2D body pose estimates from a multi-view camera system.
Our method achieves state-of-the-art performance on the Human3.6M and MPI-INF-3DHP benchmarks.
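As a concrete illustration of the pseudo-labeling step, a 3D joint can be recovered from calibrated 2D detections with standard linear (DLT) triangulation. This is a hypothetical sketch, not the paper's exact procedure; the function name and NumPy usage are assumptions.

```python
import numpy as np

def triangulate_joint(projections, points_2d):
    """Linear (DLT) triangulation of one joint from multiple views.

    projections : list of 3x4 camera projection matrices, one per view.
    points_2d   : list of (x, y) pixel detections of the same joint.
    Returns the 3D point minimizing the algebraic error of A @ X = 0.
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        rows.append(x * P[2] - P[0])  # each view contributes two linear
        rows.append(y * P[2] - P[1])  # constraints on the homogeneous X
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                        # null vector of the stacked system
    return X[:3] / X[3]               # dehomogenize to (x, y, z)
```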
arXiv Detail & Related papers (2021-10-14T17:40:45Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- SHARP: Shape-Aware Reconstruction of People In Loose Clothing [6.796748304066826]
3D human body reconstruction from monocular images is an interesting and ill-posed problem in computer vision.
We propose SHARP, a novel end-to-end trainable network that accurately recovers the detailed geometry and appearance of 3D people in loose clothing from a monocular image.
We evaluate SHARP on publicly available Cloth3D and THuman datasets and report superior performance to state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-09T02:54:53Z)
- ARCH: Animatable Reconstruction of Clothed Humans [27.849315613277724]
ARCH (Animatable Reconstruction of Clothed Humans) is an end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image.
ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image.
arXiv Detail & Related papers (2020-04-08T14:23:08Z)
- PeeledHuman: Robust Shape Representation for Textured 3D Human Body Reconstruction [7.582064461041252]
PeeledHuman encodes the human body as a set of Peeled Depth and RGB maps in 2D.
We train PeelGAN using a 3D Chamfer loss and other 2D losses to generate multiple depth values per pixel and a corresponding RGB field.
In our simple non-parametric solution, the generated Peeled Depth maps are back-projected to 3D space to obtain a complete textured 3D shape.
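The back-projection step can be sketched as a pinhole-camera unprojection applied to every peeled layer; the intrinsics convention, array layout, and function name below are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def backproject_peeled_depth(depth_maps, K):
    """Back-project a stack of peeled depth maps into one 3D point cloud.

    depth_maps : (L, H, W) array; layer 0 is the visible surface, deeper
                 layers capture self-occluded geometry; zero means no hit.
    K          : 3x3 pinhole camera intrinsics.
    """
    _, H, W = depth_maps.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix          # 3 x (H*W) viewing rays
    points = []
    for layer in depth_maps:
        z = layer.reshape(-1)
        hit = z > 0                        # keep only pixels the ray hit
        points.append((rays[:, hit] * z[hit]).T)
    return np.concatenate(points, axis=0)  # (N, 3) points of the shape
```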
arXiv Detail & Related papers (2020-02-16T20:03:24Z)