DI-Net: Decomposed Implicit Garment Transfer Network for Digital Clothed 3D Human
- URL: http://arxiv.org/abs/2311.16818v1
- Date: Tue, 28 Nov 2023 14:28:41 GMT
- Title: DI-Net: Decomposed Implicit Garment Transfer Network for Digital Clothed 3D Human
- Authors: Xiaojing Zhong, Yukun Su, Zhonghua Wu, Guosheng Lin, Qingyao Wu
- Abstract summary: Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the ability to perceive the depth of each pixel.
We propose a Decomposed Implicit garment transfer network (DI-Net), which can effortlessly reconstruct a 3D human mesh with the new try-on result.
- Score: 75.45488434002898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D virtual try-on has many potential applications and has therefore attracted
wide attention. However, it remains a challenging task that has not been adequately solved.
Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the
ability to perceive the depth of each pixel. Moreover, existing 3D virtual try-on approaches
are mostly built on fixed topological structures and require heavy computation. To deal with
these problems, we propose a Decomposed Implicit garment transfer network (DI-Net), which can
effortlessly reconstruct a 3D human mesh with the new try-on result and preserve the texture
from an arbitrary perspective. Specifically, DI-Net consists of two modules: 1) a
complementary warping module that warps the reference image to have the same pose as the
source image through dense correspondence learning and sparse flow learning; 2) a
geometry-aware decomposed transfer module that decomposes the garment transfer into
image-layout-based transfer and texture-based transfer, achieving surface and texture
reconstruction by constructing pixel-aligned implicit functions. Experimental results show
the effectiveness and superiority of our method in the 3D virtual try-on task, yielding
higher-quality results than existing methods.
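
The two modules named above map onto two well-known building blocks: flow-field image warping and PIFu-style pixel-aligned implicit functions. The PyTorch sketch below is a minimal illustration under those assumptions only; the class names, channel counts, network depths, and the `project` camera callback are hypothetical and are not taken from the DI-Net paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FlowWarp(nn.Module):
    """Warp a reference garment image toward the source pose with a predicted flow field."""

    def __init__(self, in_ch=6, hidden=64):
        super().__init__()
        # Tiny encoder mapping (reference image, source pose map) to a 2-channel flow offset.
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2, 3, padding=1),
        )

    def forward(self, reference, source_pose):
        b, _, h, w = reference.shape
        flow = self.net(torch.cat([reference, source_pose], dim=1))  # (B, 2, H, W)
        # Identity sampling grid in [-1, 1], offset by the predicted flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
        )
        base = torch.stack([xs, ys], dim=-1).to(reference).expand(b, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)
        return F.grid_sample(reference, grid, align_corners=True)


class PixelAlignedImplicit(nn.Module):
    """Classify 3D points as inside/outside the surface from pixel-aligned features (PIFu-style)."""

    def __init__(self, feat_ch=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_ch + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, feat_map, points, project):
        # points: (B, N, 3); `project` maps them to image-plane xy in [-1, 1] and depth z.
        xy, z = project(points)                               # (B, N, 2), (B, N, 1)
        feats = F.grid_sample(feat_map, xy.unsqueeze(2),      # sample per-point image features
                              align_corners=True)             # (B, C, N, 1)
        feats = feats.squeeze(-1).permute(0, 2, 1)            # (B, N, C)
        return torch.sigmoid(self.mlp(torch.cat([feats, z], dim=-1)))  # (B, N, 1) occupancy
```

A complete pipeline would additionally need the dense-correspondence branch, a texture field, and a marching-cubes step to extract the final mesh, all of which are omitted from this sketch.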
Related papers
- Unsupervised Style-based Explicit 3D Face Reconstruction from Single Image [10.1205208477163]
In this work, we propose a general adversarial learning framework for solving Unsupervised 2D to Explicit 3D Style Transfer.
Specifically, we merge two architectures: the unsupervised explicit 3D reconstruction network of Wu et al. and the Generative Adversarial Network (GAN) named StarGAN-v2.
We show that our solution is able to outperform well established solutions such as DepthNet in 3D reconstruction and Pix2NeRF in conditional style transfer.
arXiv Detail & Related papers (2023-04-24T21:25:06Z) - Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z) - 3D-Aware Indoor Scene Synthesis with Depth Priors [62.82867334012399]
Existing methods fail to model indoor scenes due to the large diversity of room layouts and the objects inside.
We argue that indoor scenes do not have a shared intrinsic structure, and hence only using 2D images cannot adequately guide the model with the 3D geometry.
arXiv Detail & Related papers (2022-02-17T09:54:29Z) - Style Agnostic 3D Reconstruction via Adversarial Style Transfer [23.304453155586312]
Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision.
We propose an approach that enables differentiable learning of 3D objects from images with backgrounds.
arXiv Detail & Related papers (2021-10-20T21:24:44Z) - Unsupervised High-Fidelity Facial Texture Generation and Reconstruction [20.447635896077454]
We propose a novel unified pipeline for both tasks: generation of both geometry and texture, and recovery of high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
arXiv Detail & Related papers (2021-10-10T10:59:04Z) - M3D-VTON: A Monocular-to-3D Virtual Try-On Network [62.77413639627565]
Existing 3D virtual try-on methods mainly rely on annotated 3D human shapes and garment templates.
We propose a novel Monocular-to-3D Virtual Try-On Network (M3D-VTON) that builds on the merits of both 2D and 3D approaches.
arXiv Detail & Related papers (2021-08-11T10:05:17Z) - D-OccNet: Detailed 3D Reconstruction Using Cross-Domain Learning [0.0]
We extend the work on Occupancy Networks by exploiting cross-domain learning of image and point cloud domains.
Our network, the Double Occupancy Network (D-OccNet), outperforms Occupancy Networks in terms of visual quality and details captured in the 3D reconstruction.
arXiv Detail & Related papers (2021-04-28T16:00:54Z) - Improved Modeling of 3D Shapes with Multi-view Depth Maps [48.8309897766904]
We present a general-purpose framework for modeling 3D shapes using CNNs.
Using just a single depth image of the object, we can output a dense multi-view depth map representation of 3D objects.
arXiv Detail & Related papers (2020-09-07T17:58:27Z) - Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)