Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual
Try-On
- URL: http://arxiv.org/abs/2308.04288v1
- Date: Tue, 8 Aug 2023 14:32:38 GMT
- Title: Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual
Try-On
- Authors: Daiheng Gao, Xu Chen, Xindi Zhang, Qi Wang, Ke Sun, Bang Zhang,
Liefeng Bo, Qixing Huang
- Abstract summary: Cloth2Tex is a self-supervised method that generates texture maps with reasonable layout and structural consistency.
It can be used to support high-fidelity texture inpainting.
We evaluate our approach both qualitatively and quantitatively and demonstrate that Cloth2Tex can generate high-quality texture maps.
- Score: 47.4550741942217
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fabricating and designing 3D garments has become extremely demanding with the
increasing need for synthesizing realistic dressed persons for a variety of
applications, e.g. 3D virtual try-on, digitalization of 2D clothes into 3D
apparel, and cloth animation. It thus necessitates a simple and straightforward
pipeline to obtain high-quality texture from simple input, such as 2D reference
images. Traditional warping-based texture generation methods require a
significant number of control points to be manually selected for each type of
garment, which is a time-consuming and tedious process. We propose a novel
method, called Cloth2Tex, which eliminates the human burden in this process.
Cloth2Tex is a self-supervised method that generates texture maps with
reasonable layout and structural consistency. Another key feature of Cloth2Tex
is that it can be used to support high-fidelity texture inpainting. This is
done by combining Cloth2Tex with a prevailing latent diffusion model. We
evaluate our approach both qualitatively and quantitatively and demonstrate
that Cloth2Tex can generate high-quality texture maps and achieve the best
visual effects in comparison to other methods. Project page:
tomguluson92.github.io/projects/cloth2tex/
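The texture-inpainting step mentioned in the abstract can be illustrated with an off-the-shelf latent-diffusion inpainting pipeline. The sketch below is not the authors' implementation; the checkpoint, prompt, and file names are illustrative assumptions standing in for a coarse UV texture map and a mask of unobserved texels.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumed stand-in checkpoint; the diffusion model used in the paper may differ.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Hypothetical inputs: a coarse UV texture recovered from the reference images
# and a mask marking texels that were never observed (white = fill in).
coarse_texture = Image.open("coarse_uv_texture.png").convert("RGB").resize((512, 512))
hole_mask = Image.open("uv_hole_mask.png").convert("L").resize((512, 512))

# Fill only the masked regions so the completed map stays consistent with the
# observed fabric; the prompt is a generic placeholder.
refined = pipe(
    prompt="seamless garment fabric texture, high resolution, photorealistic",
    image=coarse_texture,
    mask_image=hole_mask,
).images[0]
refined.save("refined_uv_texture.png")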
Related papers
- InsTex: Indoor Scenes Stylized Texture Synthesis [81.12010726769768]
High-quality textures are crucial for 3D scenes in augmented/virtual reality (AR/VR) applications.
Current methods suffer from lengthy processing times and visual artifacts.
We introduce a two-stage architecture designed to generate high-quality textures for 3D scenes.
arXiv Detail & Related papers (2025-01-22T08:37:59Z)
- EucliDreamer: Fast and High-Quality Texturing for 3D Models with Depth-Conditioned Stable Diffusion [5.158983929861116]
We present EucliDreamer, a simple and effective method to generate textures for 3D models given text prompts.
The texture is parameterized as an implicit function on the 3D surface, which is optimized with the Score Distillation Sampling (SDS) process and differentiable rendering (the standard SDS objective is written out after this list).
arXiv Detail & Related papers (2024-04-16T04:44:16Z)
- WordRobe: Text-Guided Generation of Textured 3D Garments [30.614451083408266]
"WordRobe" is a novel framework for the generation of unposed & textured 3D garment meshes from user-friendly text prompts.
We demonstrate superior performance over current SOTAs for learning 3D garment latent space, garment synthesis, and text-driven texture synthesis.
arXiv Detail & Related papers (2024-03-26T09:44:34Z)
- DI-Net: Decomposed Implicit Garment Transfer Network for Digital Clothed 3D Human [75.45488434002898]
Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the ability to perceive the depth of each pixel.
We propose a Decomposed Implicit garment transfer network (DI-Net), which can effortlessly reconstruct a 3D human mesh with the new try-on result.
arXiv Detail & Related papers (2023-11-28T14:28:41Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser to a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches a human-expert production level with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z)
- TeCH: Text-guided Reconstruction of Lifelike Clothed Humans [35.68114652041377]
Existing methods often generate overly smooth back-side surfaces with a blurry texture.
Motivated by the power of foundation models, TeCH reconstructs the 3D human by leveraging descriptive text prompts.
We propose a hybrid 3D representation based on DMTet, which consists of an explicit body shape grid and an implicit distance field.
arXiv Detail & Related papers (2023-08-16T17:59:13Z)
- Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
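For reference, the Score Distillation Sampling objective mentioned in the EucliDreamer entry above takes the standard form introduced with DreamFusion; this is the generic formulation, not that paper's exact implementation:

\[
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right],
\qquad x_t = \alpha_t x + \sigma_t \epsilon,\quad \epsilon \sim \mathcal{N}(0, I),
\]

where x = g(theta) is a differentiable rendering of the texture (and, optionally, shape) parameters theta, y is the text prompt, epsilon_phi is the frozen diffusion model's noise prediction, and w(t) is a timestep-dependent weight. Gradients flow to theta only through the renderer, not through the diffusion model.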
This list is automatically generated from the titles and abstracts of the papers on this site.