Learning to Transfer Texture from Clothing Images to 3D Humans
- URL: http://arxiv.org/abs/2003.02050v2
- Date: Mon, 30 Mar 2020 23:35:26 GMT
- Title: Learning to Transfer Texture from Clothing Images to 3D Humans
- Authors: Aymen Mir, Thiemo Alldieck, Gerard Pons-Moll
- Abstract summary: We present a method to automatically transfer textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
- Score: 50.838970996234465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a simple yet effective method to automatically transfer textures of clothing images (front and back) to 3D garments worn on top of SMPL, in real time. We first automatically compute training pairs of images with aligned 3D garments using a custom non-rigid 3D-to-2D registration method, which is accurate but slow. Using these pairs, we learn a mapping from pixels to the 3D garment surface. Our idea is to learn dense correspondences from garment image silhouettes to a 2D-UV map of a 3D garment surface using shape information alone, completely ignoring texture, which allows us to generalize to the wide range of web images. Several experiments demonstrate that our model is more accurate than widely used baselines such as thin-plate-spline warping and image-to-image translation networks, while being orders of magnitude faster. Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
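The pipeline the abstract describes boils down to two steps: predict, for every foreground pixel of the garment silhouette, a (u, v) coordinate on the garment's UV map, and then copy the clothing image's colors into the texture map at those coordinates. Below is a minimal sketch of that idea in PyTorch; it is not the authors' implementation, and the network `SilhouetteToUV`, its layer sizes, and the scatter-style texture transfer are illustrative assumptions only.

```python
# Minimal sketch (not the paper's code): regress per-pixel UV coordinates from a
# garment silhouette, then transfer image colors into a UV texture map.
import torch
import torch.nn as nn

class SilhouetteToUV(nn.Module):
    """Toy fully convolutional net: binary silhouette -> per-pixel (u, v) in [0, 1]."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2, 3, padding=1), nn.Sigmoid(),  # 2 channels = (u, v)
        )

    def forward(self, silhouette):           # silhouette: (B, 1, H, W)
        return self.net(silhouette)          # uv: (B, 2, H, W)

def transfer_texture(image, silhouette, uv, tex_size=256):
    """Scatter clothing-image colors into a UV texture map.
    image: (3, H, W); silhouette: (H, W) binary; uv: (2, H, W) in [0, 1]."""
    texture = torch.zeros(3, tex_size, tex_size)
    mask = silhouette > 0.5
    u = (uv[0][mask] * (tex_size - 1)).long()
    v = (uv[1][mask] * (tex_size - 1)).long()
    texture[:, v, u] = image[:, mask]         # each foreground pixel lands at its UV location
    return texture

# Usage with random tensors, just to show the shapes involved.
model = SilhouetteToUV()
sil = (torch.rand(1, 1, 128, 128) > 0.5).float()
img = torch.rand(3, 128, 128)
with torch.no_grad():
    uv = model(sil)[0]
tex = transfer_texture(img, sil[0, 0], uv)
print(tex.shape)  # torch.Size([3, 256, 256])
```

In the paper, the pixel-to-surface mapping is trained on pairs produced by the slow non-rigid registration step, so a fast network replaces registration at test time; the sketch above only illustrates the input/output shapes of such a correspondence predictor.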
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z) - DI-Net: Decomposed Implicit Garment Transfer Network for Digital Clothed 3D Human [75.45488434002898]
Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the ability to perceive the depth of each pixel.
We propose a Decomposed Implicit garment transfer network (DI-Net), which can effortlessly reconstruct a 3D human mesh with the new try-on result.
arXiv Detail & Related papers (2023-11-28T14:28:41Z) - SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes [62.82552328188602]
We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans.
We devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies.
arXiv Detail & Related papers (2023-08-21T11:23:25Z) - AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z) - xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization to internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single-view 3D reconstruction (see the sketch after this list).
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - Robust 3D Garment Digitization from Monocular 2D Images for 3D Virtual Try-On Systems [1.7394606468019056]
We develop a robust 3D garment digitization solution that can generalize well on real-world fashion catalog images.
To train the supervised deep networks for the landmark prediction and texture inpainting tasks, we generated a large set of synthetic data.
We manually annotated a small set of fashion catalog images crawled from online fashion e-commerce platforms to fine-tune the networks.
arXiv Detail & Related papers (2021-11-30T05:49:23Z)
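One takeaway from the AUV-Net entry above (referenced in that item): once two shapes share a single aligned UV parameterization, texture transfer reduces to looking up the source object's texture at the target's own UV coordinates. A minimal sketch follows, assuming per-vertex UVs in [0, 1] and a standard bilinear lookup; the function name and tensor shapes are illustrative, not AUV-Net's API.

```python
# Minimal sketch: texture transfer via a shared, aligned UV space (the AUV-Net idea).
# All names and shapes are assumptions for illustration, not the paper's code.
import torch
import torch.nn.functional as F

def transfer_via_shared_uv(source_texture, target_uv):
    """source_texture: (1, 3, H, W); target_uv: (1, V, 2) per-vertex UVs in [0, 1].
    Returns per-vertex colors (1, V, 3) for the target mesh."""
    grid = target_uv.view(1, -1, 1, 2) * 2.0 - 1.0                     # grid_sample expects [-1, 1]
    colors = F.grid_sample(source_texture, grid, align_corners=True)   # (1, 3, V, 1)
    return colors.squeeze(-1).permute(0, 2, 1)                         # (1, V, 3)

# Because the UV space is aligned across objects, the target mesh simply reads colors
# from the source object's texture at its own UV coordinates.
tex = torch.rand(1, 3, 256, 256)
uvs = torch.rand(1, 1000, 2)
print(transfer_via_shared_uv(tex, uvs).shape)  # torch.Size([1, 1000, 3])
```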