USR: Unsupervised Separated 3D Garment and Human Reconstruction via
Geometry and Semantic Consistency
- URL: http://arxiv.org/abs/2302.10518v2
- Date: Wed, 22 Feb 2023 07:07:16 GMT
- Title: USR: Unsupervised Separated 3D Garment and Human Reconstruction via
Geometry and Semantic Consistency
- Authors: Yue Shi, Yuxuan Xiong, Jingyi Chai, Bingbing Ni, Wenjun Zhang
- Abstract summary: We propose an unsupervised separated 3D garment and human reconstruction model (USR), which reconstructs the human body and authentically textured clothes in separate layers without 3D model supervision.
Our method introduces a generalized surface-aware neural radiance field to learn the mapping between sparse multi-view images and the geometry of dressed people.
- Score: 41.89803177312638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing dressed people from images is a popular task with
promising applications in the creative media and game industries. However, most
existing methods reconstruct the human body and garments as a whole under the
supervision of 3D models, which hinders downstream interaction tasks and
requires hard-to-obtain data. To address these issues, we propose an
unsupervised separated 3D garment and human reconstruction model (USR), which
reconstructs the human body and authentically textured clothes in separate
layers without 3D model supervision. More specifically, our method introduces a
generalized surface-aware neural radiance field to learn the mapping between
sparse multi-view images and the geometry of dressed people. On top of the full
geometry, we introduce a Semantic and Confidence Guided Separation strategy
(SCGS) to detect, segment, and reconstruct the clothes layer, leveraging the
consistency between 2D semantics and 3D geometry. Moreover, we propose a
Geometry Fine-tune Module to smooth edges. Extensive experiments on our dataset
show that, compared with state-of-the-art methods, USR improves both geometry
and appearance reconstruction while generalizing to unseen people in real time.
In addition, we introduce the SMPL-D model to demonstrate the benefit of
modeling clothes and the human body separately, which enables clothes swapping
and virtual try-on.
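The SCGS strategy above hinges on agreement between 2D semantics and 3D geometry. As a rough, hypothetical illustration of that idea only (the paper does not publish this interface; the pinhole camera convention, the mask encoding, and the majority-vote rule below are all assumptions), reconstructed surface points can be labeled by projecting them into per-view segmentation masks:

```python
# Illustrative sketch, not the authors' code: label 3D surface points as
# background / body / garment by majority vote over multi-view 2D semantic masks.
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole camera)."""
    cam = points @ R.T + t            # world frame -> camera frame
    uv = cam @ K.T                    # apply intrinsics (homogeneous pixels)
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

def label_surface_points(points, views):
    """Assign each 3D point a label by voting over per-view masks.

    Each view is a dict with intrinsics `K` (3x3), rotation `R` (3x3),
    translation `t` (3,), and `mask`, an HxW int array where
    0 = background, 1 = body, 2 = garment (encoding assumed here).
    """
    votes = np.zeros((len(points), 3), dtype=np.int64)   # N x num_classes
    for view in views:
        uv = np.round(project(points, view["K"], view["R"], view["t"])).astype(int)
        h, w = view["mask"].shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        labels = view["mask"][uv[ok, 1], uv[ok, 0]]      # sample mask at pixels
        votes[np.flatnonzero(ok), labels] += 1           # one vote per view
    return votes.argmax(axis=1)                          # consensus label per point
```

A faithful implementation would additionally weight each vote by a per-view confidence and test visibility, so that occluded projections do not contaminate the count.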
Related papers
- PGAHum: Prior-Guided Geometry and Appearance Learning for High-Fidelity Animatable Human Reconstruction [9.231326291897817]
We introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses.
arXiv Detail & Related papers (2024-04-22T04:22:30Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- GALA: Generating Animatable Layered Assets from a Single Scan [20.310367593475508]
We present GALA, a framework that takes as input a single-layer clothed 3D human mesh and decomposes it into complete multi-layered 3D assets.
The outputs can then be combined with other assets to create novel clothed human avatars with any pose.
arXiv Detail & Related papers (2024-01-23T18:59:59Z)
- 3D Reconstruction of Interacting Multi-Person in Clothing from a Single Image [8.900009931200955]
This paper introduces a novel pipeline to reconstruct the geometry of multiple interacting people in clothing, in a globally coherent scene space, from a single image.
We overcome this challenge by utilizing two human priors for complete 3D geometry and surface contacts.
The results demonstrate that our method is complete, globally coherent, and physically plausible compared to existing methods.
arXiv Detail & Related papers (2024-01-12T07:23:02Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles single-view 3D mesh reconstruction with a focus on model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- SHARP: Shape-Aware Reconstruction of People in Loose Clothing [6.469298908778292]
SHARP (SHape Aware Reconstruction of People in loose clothing) is a novel end-to-end trainable network.
It recovers the 3D geometry and appearance of humans in loose clothing from a monocular image.
We show superior qualitative and quantitative performance over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-05-24T10:26:42Z)
- 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective [96.65476492200648]
This research studies a self-supervised 3D clothing reconstruction method.
It recovers the geometric shape and texture of human clothing from a single 2D image.
arXiv Detail & Related papers (2022-04-27T17:46:55Z)
- Multi-person Implicit Reconstruction from a Single Image [37.6877421030774]
We present a new end-to-end learning framework to obtain detailed and spatially coherent reconstructions of multiple people from a single image.
Existing multi-person methods suffer from two main drawbacks: they are often model-based, and they cannot capture accurate 3D models of people with loose clothing and hair.
arXiv Detail & Related papers (2021-04-19T13:21:55Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces; such detail-rich representations are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions with parametric representations; a generic sketch of this pattern follows the list.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
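The "Combining Implicit Function Learning and Parametric Models" entry above names a pattern that is common enough to sketch generically: a parametric body template supplies a coarse signed distance field, and a network learns a residual on top of it. The sketch below is an illustration under stated assumptions, not that paper's architecture; `ResidualSDF` and its interface are hypothetical.

```python
# Generic illustration: implicit surface = parametric-template SDF + learned residual.
import torch
import torch.nn as nn

class ResidualSDF(nn.Module):
    """Predicts a residual signed distance on top of a coarse template SDF."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points: torch.Tensor, template_sdf: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) query locations; template_sdf: (N,) signed distances to
        # the parametric body mesh, computed by any point-to-surface routine.
        residual = self.mlp(points).squeeze(-1)   # detail the template cannot express
        return template_sdf + residual            # zero level set = final surface
```

The appeal of the combination is that the template keeps the reconstruction anatomically plausible where the input is ambiguous, while the residual captures clothing and hair detail that a fixed-topology mesh cannot.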