xCloth: Extracting Template-free Textured 3D Clothes from a Monocular
Image
- URL: http://arxiv.org/abs/2208.12934v1
- Date: Sat, 27 Aug 2022 05:57:00 GMT
- Authors: Astitva Srivastava, Chandradeep Pokhariya, Sai Sagar Jinka and Avinash
Sharma
- Abstract summary: We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend the PeeledHuman representation to predict the pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and demonstrate generalization to internet images.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Existing approaches for 3D garment reconstruction either assume a predefined
template for the garment geometry (restricting them to fixed clothing styles)
or yield vertex-colored meshes (lacking high-frequency textural details). Our
novel framework co-learns geometric and semantic information of garment surface
from the input monocular image for template-free textured 3D garment
digitization. More specifically, we propose to extend the PeeledHuman
representation to predict the pixel-aligned, layered depth and semantic maps to
extract 3D garments. The layered representation is further exploited to UV
parametrize the arbitrary surface of the extracted garment without any human
intervention to form a UV atlas. The texture is then imparted on the UV atlas
in a hybrid fashion by first projecting pixels from the input image to UV space
for the visible region, followed by inpainting the occluded regions. Thus, we
are able to digitize arbitrarily loose clothing styles while retaining
high-frequency textural details from a monocular image. We achieve
high-fidelity 3D garment reconstruction results on three publicly available
datasets and demonstrate generalization to internet images.
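
To make the pipeline concrete, here is a minimal sketch of two of the steps above: back-projecting the predicted peeled depth/semantic layers into a garment point cloud, and hybrid texturing of a UV atlas (visible pixels first, inpainting for the occluded texels). This is an illustrative reconstruction, not the authors' code: the pinhole intrinsics `K`, all function names, and the use of OpenCV's Telea inpainting are assumptions, and xCloth itself predicts these maps with a trained network.

```python
import cv2
import numpy as np

def backproject_peeled_layers(depth_layers, seg_layers, K, garment_label):
    """Lift every pixel carrying the target garment label, across all peeled
    layers, into a single 3D point cloud in camera coordinates."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    points = []
    for depth, seg in zip(depth_layers, seg_layers):
        v, u = np.nonzero((seg == garment_label) & (depth > 0))
        z = depth[v, u]
        x = (u - cx) * z / fx  # standard pinhole back-projection
        y = (v - cy) * z / fy
        points.append(np.stack([x, y, z], axis=-1))
    return np.concatenate(points, axis=0)

def texture_uv_atlas(image, uv_coords, pixel_coords, atlas_size=1024):
    """Hybrid texturing: splat visible input-image pixels into the UV atlas,
    then inpaint the texels that remain empty (occluded regions)."""
    atlas = np.zeros((atlas_size, atlas_size, 3), np.uint8)
    filled = np.zeros((atlas_size, atlas_size), np.uint8)
    # uv_coords: (N, 2) floats in [0, 1]; pixel_coords: (N, 2) ints giving the
    # (u, v) image locations of the corresponding *visible* surface points.
    au = (uv_coords[:, 0] * (atlas_size - 1)).round().astype(int)
    av = (uv_coords[:, 1] * (atlas_size - 1)).round().astype(int)
    atlas[av, au] = image[pixel_coords[:, 1], pixel_coords[:, 0]]
    filled[av, au] = 255
    # Everything still empty is treated as occluded and inpainted; a real
    # system would restrict the mask to the interior of the UV charts.
    return cv2.inpaint(atlas, cv2.bitwise_not(filled), 3, cv2.INPAINT_TELEA)
```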
Related papers
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, an emerging and groundbreaking approach, has drawn increasing attention for its high-fidelity reconstruction and real-time rendering capabilities.
We propose a novel approach, Texture-GS, which disentangles appearance from geometry by representing the appearance as a 2D texture mapped onto the 3D surface (a generic UV-lookup sketch follows this list).
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z)
- Nuvo: Neural UV Mapping for Unruly 3D Representations [61.87715912587394]
Existing UV mapping algorithms are designed to operate on well-behaved meshes rather than the geometry produced by state-of-the-art 3D reconstruction and generation techniques.
We present a UV mapping method designed to operate on such geometry.
arXiv Detail & Related papers (2023-12-11T18:58:38Z)
- A Generative Multi-Resolution Pyramid and Normal-Conditioning 3D Cloth Draping [37.77353302404437]
We build a conditional variational autoencoder for 3D garment generation and draping.
We propose a pyramid network to add garment details progressively in a canonical space.
Our results on two public datasets, CLOTH3D and CAPE, show that our model is robust and controllable in terms of detail generation.
arXiv Detail & Related papers (2023-11-05T16:12:48Z)
- Normal-guided Garment UV Prediction for Human Re-texturing [45.710312986737975]
We show that it is possible to edit dressed human images and videos without 3D reconstruction.
Our approach captures the underlying geometry of the garment in a self-supervised way.
We demonstrate that our method outperforms the state-of-the-art human UV map estimation approaches on both real and synthetic data.
arXiv Detail & Related papers (2023-03-11T22:18:18Z)
- Structure-Preserving 3D Garment Modeling with Neural Sewing Machines [190.70647799442565]
We propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling.
NSM is capable of representing 3D garments under diverse garment shapes and topologies, realistically reconstructing 3D garments from 2D images with the preserved structure, and accurately manipulating the 3D garment categories, shapes, and topologies.
arXiv Detail & Related papers (2022-11-12T16:43:29Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- Robust 3D Garment Digitization from Monocular 2D Images for 3D Virtual Try-On Systems [1.7394606468019056]
We develop a robust 3D garment digitization solution that can generalize well on real-world fashion catalog images.
To train the supervised deep networks for landmark prediction & texture inpainting tasks, we generated a large set of synthetic data.
We manually annotated a small set of fashion catalog images crawled from online fashion e-commerce platforms to fine-tune these networks.
arXiv Detail & Related papers (2021-11-30T05:49:23Z)
- Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer the textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
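
Several of the papers above (Texture-GS, Nuvo, AUV-Net) rely on the same primitive that xCloth's atlas does: once every surface point carries UV coordinates, appearance is just a 2D image sampled at those coordinates. Below is a paper-agnostic sketch of that lookup with bilinear filtering; the function name is illustrative, and `texture` is assumed to be a float array.

```python
import numpy as np

def sample_texture(texture, uv):
    """Bilinearly sample an (H, W, 3) float texture at N uv points in [0, 1]^2."""
    h, w = texture.shape[:2]
    x = np.clip(uv[:, 0] * (w - 1), 0, w - 1)
    y = np.clip(uv[:, 1] * (h - 1), 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = (x - x0)[:, None], (y - y0)[:, None]
    # Interpolate along x on the two bracketing rows, then along y.
    top = texture[y0, x0] * (1 - wx) + texture[y0, x1] * wx
    bot = texture[y1, x0] * (1 - wx) + texture[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Rendering a textured mesh, editing the 2D atlas, or transferring an atlas between aligned UV spaces all reduce to this lookup composed with a per-point UV assignment.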