Robust 3D Garment Digitization from Monocular 2D Images for 3D Virtual
Try-On Systems
- URL: http://arxiv.org/abs/2111.15140v1
- Date: Tue, 30 Nov 2021 05:49:23 GMT
- Title: Robust 3D Garment Digitization from Monocular 2D Images for 3D Virtual
Try-On Systems
- Authors: Sahib Majithia, Sandeep N. Parameswaran, Sadbhavana Babar, Vikram
Garg, Astitva Srivastava and Avinash Sharma
- Abstract summary: We develop a robust 3D garment digitization solution that generalizes well to real-world fashion catalog images.
To train the supervised deep networks for the landmark prediction and texture inpainting tasks, we generated a large set of synthetic data.
We manually annotated a small set of fashion catalog images crawled from online fashion e-commerce platforms for finetuning.
- Score: 1.7394606468019056
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we develop a robust 3D garment digitization solution that
generalizes well to real-world fashion catalog images with cloth texture
occlusions and large body pose variations. We assume fixed-topology parametric
template mesh models for known types of garments (e.g., T-shirts, trousers) and
map high-quality texture from an input catalog image to the UV map panels
corresponding to the parametric mesh model of the garment. We achieve this by
first predicting a sparse set of 2D landmarks on the boundary of the garment.
We then use these landmarks to perform Thin-Plate-Spline (TPS)-based texture
transfer onto the UV map panels. Subsequently, we employ a deep texture
inpainting network to fill the large holes (due to view variations and
self-occlusions) in the TPS output and generate consistent UV maps.
Furthermore, to train the supervised deep networks for the landmark prediction
and texture inpainting tasks, we generated a large set of synthetic data with
varying textures and lighting, imaged from various views with humans in a wide
variety of poses. Additionally, we manually annotated a small set of fashion
catalog images crawled from online fashion e-commerce platforms for finetuning.
We conduct thorough empirical evaluations and show impressive qualitative
results of our proposed 3D garment texturing solution on fashion catalog
images. Such 3D garment digitization helps us solve the challenging task of
enabling 3D virtual try-on.
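The TPS texture-transfer step lends itself to a short illustration. The sketch below is a minimal, hypothetical rendering of that step, assuming predicted garment-boundary landmarks in image space and their fixed counterparts on the template's UV panel. It uses OpenCV's thin-plate-spline shape transformer (from opencv-contrib-python) as a stand-in for the paper's TPS warp, and classical inpainting purely as a placeholder for the learned inpainting network; all names and thresholds are illustrative, not from the paper.

```python
import cv2
import numpy as np

def garment_to_uv(catalog_img, img_landmarks, uv_landmarks):
    """Warp garment texture from a catalog image onto a UV panel via TPS.

    img_landmarks: (N, 2) float32 landmark pixels on the garment boundary.
    uv_landmarks:  (N, 2) float32 corresponding points on the UV panel.
    """
    src = np.asarray(img_landmarks, np.float32).reshape(1, -1, 2)
    dst = np.asarray(uv_landmarks, np.float32).reshape(1, -1, 2)
    matches = [cv2.DMatch(i, i, 0.0) for i in range(src.shape[1])]

    tps = cv2.createThinPlateSplineShapeTransformer()
    # warpImage applies the backward mapping, so the point sets are passed
    # in (target, source) order -- a known quirk of this OpenCV API.
    tps.estimateTransformation(dst, src, matches)
    uv_texture = tps.warpImage(catalog_img)

    # Warp an all-white mask the same way: pixels that stay dark received
    # no source texture (self-occluded or out-of-view garment regions).
    coverage = tps.warpImage(np.full(catalog_img.shape[:2], 255, np.uint8))
    holes = np.where(coverage < 128, 255, 0).astype(np.uint8)

    # The paper fills these holes with a deep inpainting network; classical
    # diffusion-based inpainting is used here only as a placeholder.
    return cv2.inpaint(uv_texture, holes, 3, cv2.INPAINT_TELEA), holes
```

In the full pipeline, the learned landmark predictor would supply img_landmarks per UV panel, and the placeholder cv2.inpaint call would be replaced by the trained texture inpainting network.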
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z)
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models [65.22994156658918]
We present a method that learns to generate multi-view images in a single denoising process from real-world data.
We design an autoregressive generation scheme that renders more 3D-consistent images at any viewpoint.
arXiv Detail & Related papers (2024-03-04T07:57:05Z)
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization has posed a fundamental challenge to the multimedia and graphics communities.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z)
- DeepIron: Predicting Unwarped Garment Texture from a Single Image [9.427635404752934]
This paper presents a novel framework that reconstructs the texture map for 3D garments from a single image with pose.
A key component of our framework, the Texture Unwarper, infers the original texture image from the input clothing image.
By inferring the unwarped original texture of the input garment, our method helps reconstruct 3D garment models that can show high-quality texture images realistically deformed for new poses.
arXiv Detail & Related papers (2023-10-24T01:44:11Z)
- xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization to internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations, including 3D feature lines, 3D body pose, and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
- Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer the textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.