Robust 3D Garment Digitization from Monocular 2D Images for 3D Virtual
Try-On Systems
- URL: http://arxiv.org/abs/2111.15140v1
- Date: Tue, 30 Nov 2021 05:49:23 GMT
- Authors: Sahib Majithia, Sandeep N. Parameswaran, Sadbhavana Babar, Vikram
Garg, Astitva Srivastava and Avinash Sharma
- Abstract summary: We develop a robust 3D garment digitization solution that can generalize well on real-world fashion catalog images.
To train the supervised deep networks for landmark prediction & texture inpainting tasks, we generated a large set of synthetic data.
We manually annotated a small set of fashion catalog images crawled from online fashion e-commerce platforms to finetune these networks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we develop a robust 3D garment digitization solution that can
generalize well to real-world fashion catalog images with cloth texture
occlusions and large body pose variations. We assume fixed-topology parametric
template mesh models for known types of garments (e.g., T-shirts, trousers) and
map high-quality texture from an input catalog image to UV map
panels corresponding to the parametric mesh model of the garment. We achieve
this by first predicting a sparse set of 2D landmarks on the boundary of the
garments. We then use these landmarks to perform
Thin-Plate-Spline (TPS)-based texture transfer onto the UV map panels. Finally, we
employ a deep texture inpainting network to fill the large holes (due to view
variations & self-occlusions) in the TPS output and generate consistent UV maps.
Furthermore, to train the supervised deep networks for the landmark prediction &
texture inpainting tasks, we generated a large set of synthetic data with
varying texture and lighting, imaged from various views, with the human subject
in a wide variety of poses. Additionally, we manually annotated a small set of
fashion catalog images crawled from online fashion e-commerce platforms to
finetune these networks. We conduct thorough empirical evaluations and show
impressive qualitative results of our proposed 3D garment texture solution on
fashion catalog images. Such 3D garment digitization helps us solve the
challenging task of enabling 3D virtual try-on.
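The core texture-transfer step described above, mapping image pixels to UV panels via landmarks, can be sketched with a minimal thin-plate-spline warp. This is an illustrative NumPy implementation, not the authors' code; the landmark coordinates and the UV-panel layout below are hypothetical placeholders.

```python
import numpy as np

def tps_fit(ctrl_dst, ctrl_src):
    """Fit a thin-plate spline mapping UV-panel (destination) landmark
    coordinates to image (source) coordinates -- the inverse warp used
    when resampling texture into the UV panel."""
    n = len(ctrl_dst)
    d = np.linalg.norm(ctrl_dst[:, None] - ctrl_dst[None, :], axis=-1)
    K = d**2 * np.log(np.where(d > 0, d, 1.0))  # U(r) = r^2 log r, U(0) = 0
    P = np.hstack([np.ones((n, 1)), ctrl_dst])  # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = ctrl_src
    return np.linalg.solve(A, b)                # spline weights + affine coeffs

def tps_apply(params, ctrl_dst, pts):
    """Evaluate the fitted spline at query points in UV-panel space."""
    n = len(ctrl_dst)
    d = np.linalg.norm(pts[:, None] - ctrl_dst[None, :], axis=-1)
    U = d**2 * np.log(np.where(d > 0, d, 1.0))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:n] + P @ params[n:]

# Hypothetical garment-boundary landmarks: corners of a UV panel paired
# with where those boundary points were detected in the catalog image.
uv_pts  = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
img_pts = np.array([[10, 12], [98, 15], [95, 110], [8, 105], [55, 60]], float)
params = tps_fit(uv_pts, img_pts)
# Each UV pixel is mapped back into the image and sampled there; at the
# control points the warp reproduces the detected landmarks exactly.
mapped = tps_apply(params, uv_pts, uv_pts)
```

At the detected landmarks the warp is exact; the occluded regions the warp cannot fill from the image are precisely the holes that the deep inpainting network completes afterwards.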
Related papers
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with
2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization has posed a fundamental challenge to the multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z) - DeepIron: Predicting Unwarped Garment Texture from a Single Image [9.427635404752934]
This paper presents a novel framework that reconstructs the texture map for 3D garments from a single image of a posed person.
A key component of our framework, the Texture Unwarper, infers the original texture image from the input clothing image.
By inferring the unwarped original texture of the input garment, our method helps reconstruct 3D garment models that can show high-quality texture images realistically deformed for new poses.
arXiv Detail & Related papers (2023-10-24T01:44:11Z) - Structure-Preserving 3D Garment Modeling with Neural Sewing Machines [190.70647799442565]
We propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling.
NSM is capable of representing 3D garments under diverse garment shapes and topologies, realistically reconstructing 3D garments from 2D images with the preserved structure, and accurately manipulating the 3D garment categories, shapes, and topologies.
arXiv Detail & Related papers (2022-11-12T16:43:29Z) - xCloth: Extracting Template-free Textured 3D Clothes from a Monocular
Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend PeeledHuman representation to predict the pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization on internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z) - Deep Hybrid Self-Prior for Full 3D Mesh Generation [57.78562932397173]
We propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality.
In particular, we first generate an initial mesh using a 3D convolutional neural network with 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas.
Our method recovers the 3D textured mesh model of high quality from sparse input, and outperforms the state-of-the-art methods in terms of both the geometry and texture quality.
arXiv Detail & Related papers (2021-08-18T07:44:21Z) - Real Time Incremental Foveal Texture Mapping for Autonomous Vehicles [11.702817783491616]
The generated detailed map serves both as a virtual test bed and as a background map for various vision and planning algorithms.
arXiv Detail & Related papers (2021-01-16T07:41:24Z) - 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z) - Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction
from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponded multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z) - Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning-based methods.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.