Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction
from Single Images
- URL: http://arxiv.org/abs/2003.12753v2
- Date: Sat, 4 Jul 2020 12:43:49 GMT
- Title: Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction
from Single Images
- Authors: Heming Zhu, Yu Cao, Hang Jin, Weikai Chen, Dong Du, Zhangye Wang,
Shuguang Cui, Xiaoguang Han
- Abstract summary: Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
- Score: 50.34202789543989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-fidelity clothing reconstruction is the key to achieving photorealism in
a wide range of applications including human digitization, virtual try-on, etc.
Recent advances in learning-based approaches have accomplished unprecedented
accuracy in recovering unclothed human shape and pose from single images,
thanks to the availability of powerful statistical models, e.g. SMPL, learned
from a large number of body scans. In contrast, modeling and recovering clothed
human and 3D garments remains notoriously difficult, mostly due to the lack of
large-scale clothing models available for the research community. We propose to
fill this gap by introducing Deep Fashion3D, the largest collection to date of
3D garment models, with the goal of establishing a novel benchmark and dataset
for the evaluation of image-based garment reconstruction systems. Deep
Fashion3D contains 2078 models reconstructed from real garments, covering
10 different categories and 563 garment instances. It provides rich annotations
including 3D feature lines, 3D body pose and the corresponding multi-view real
images. In addition, each garment is randomly posed to enhance the variety of
real clothing deformations. To demonstrate the advantage of Deep Fashion3D, we
propose a novel baseline approach for single-view garment reconstruction, which
leverages the merits of both mesh and implicit representations. A novel
adaptable template is proposed to enable the learning of all types of clothing
in a single network. Extensive experiments have been conducted on the proposed
dataset to verify its significance and usefulness. We will make Deep Fashion3D
publicly available upon publication.
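As background on how such a benchmark is typically used: reconstructions are commonly scored against ground-truth scans with point-set metrics such as Chamfer distance. The sketch below illustrates that standard metric only; it is not necessarily this paper's exact evaluation protocol, and the point counts and stand-in data are assumptions.

```python
# A minimal sketch of the symmetric Chamfer distance, a standard point-set
# metric for benchmarking single-view 3D reconstruction. Illustrative only:
# not Deep Fashion3D's exact protocol; point counts are arbitrary.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 3) point sets."""
    # Nearest-neighbour distances from prediction to ground truth...
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)
    # ...and from ground truth back to prediction, averaged both ways.
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)
    return float(d_pred_to_gt.mean() + d_gt_to_pred.mean())

# Usage: sample points from the reconstructed mesh and the reference scan.
pred = np.random.rand(2048, 3)  # stand-in for a reconstructed garment surface
gt = np.random.rand(2048, 3)    # stand-in for a ground-truth scan
print(f"Chamfer distance: {chamfer_distance(pred, gt):.4f}")
```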
Related papers
- Design2Cloth: 3D Cloth Generation from 2D Masks [34.80461276448817]
We propose Design2Cloth, a high-fidelity 3D generative model trained on a real-world dataset of more than 2000 subject scans.
Through a series of qualitative and quantitative experiments, we showcase that Design2Cloth outperforms current state-of-the-art cloth generative models by a large margin.
arXiv Detail & Related papers (2024-04-03T12:32:13Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining the ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching it with the prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- Multi-View Consistency Loss for Improved Single-Image 3D Reconstruction of Clothed People [36.30755368202957]
We present a novel method to improve the accuracy of the 3D reconstruction of clothed human shape from a single image.
The accuracy and completeness of clothed-people reconstruction are limited by the large variation in shape resulting from clothing, hair, body size, pose and camera viewpoint.
arXiv Detail & Related papers (2020-09-29T17:18:00Z)
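As background for the last entry above: a multi-view consistency loss generally penalizes a reconstruction whose projections disagree with silhouettes observed from other calibrated views. The sketch below shows one generic form of such a term; it is an assumption-laden illustration, not the cited paper's formulation, and the differentiable renderer that would produce pred_masks is assumed and omitted.

```python
# A hypothetical multi-view silhouette consistency term: the reconstruction
# should project to silhouettes that agree with the observed masks in every
# calibrated view. Generic soft-IoU form, not the cited paper's exact loss.
import torch

def multi_view_silhouette_loss(pred_masks: torch.Tensor,
                               gt_masks: torch.Tensor) -> torch.Tensor:
    """Mean (1 - soft IoU) over (V, H, W) silhouettes from V views."""
    inter = (pred_masks * gt_masks).sum(dim=(1, 2))
    union = (pred_masks + gt_masks - pred_masks * gt_masks).sum(dim=(1, 2))
    return (1.0 - inter / union.clamp(min=1e-6)).mean()

# Usage with stand-in data: 4 views of 128x128 soft masks.
pred = torch.rand(4, 128, 128)                # renderer output (assumed)
gt = (torch.rand(4, 128, 128) > 0.5).float()  # observed silhouettes
print(multi_view_silhouette_loss(pred, gt))
```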
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.