A Generative Multi-Resolution Pyramid and Normal-Conditioning 3D Cloth Draping
- URL: http://arxiv.org/abs/2311.02700v2
- Date: Mon, 15 Jan 2024 11:41:11 GMT
- Title: A Generative Multi-Resolution Pyramid and Normal-Conditioning 3D Cloth Draping
- Authors: Hunor Laczkó, Meysam Madadi, Sergio Escalera, Jordi Gonzalez
- Abstract summary: We build a conditional variational autoencoder for 3D garment generation and draping.
We propose a pyramid network to add garment details progressively in a canonical space.
Our results on two public datasets, CLOTH3D and CAPE, show that our model is robust and controllable in terms of detail generation.
- Score: 37.77353302404437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: RGB cloth generation has been deeply studied in the related literature; however, 3D garment generation remains an open problem. In this paper, we build a conditional variational autoencoder for 3D garment generation and draping. We propose a pyramid network to add garment details progressively in a canonical space, i.e. unposing and unshaping the garments w.r.t. the body. We study conditioning the network on surface normal UV maps as an intermediate representation, which is easier to optimize than predicting 3D coordinates directly. Our results on two public datasets, CLOTH3D and CAPE, show that our model is robust, controllable in terms of detail generation through the multi-resolution pyramid, and achieves state-of-the-art results that generalize well to unseen garments, poses, and shapes, even when trained with small amounts of data.
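To make the architecture in the abstract concrete, below is a minimal sketch of a pyramid decoder that refines garment geometry coarse-to-fine in UV space while conditioned on a surface normal UV map. All layer sizes, module names, and the displacement-map output are illustrative assumptions, not the paper's actual implementation; a full conditional VAE would pair this decoder with an encoder and train with the standard objective (reconstruction loss plus a KL term on the latent code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidDecoder(nn.Module):
    """Hypothetical sketch: decode a latent code into garment UV-space
    displacement maps at increasing resolutions, fusing a surface normal
    UV map (the intermediate representation named in the abstract) at
    every pyramid level."""

    def __init__(self, latent_dim=64, base_res=16, levels=3, channels=32):
        super().__init__()
        self.base_res = base_res
        self.channels = channels
        self.fc = nn.Linear(latent_dim, channels * base_res * base_res)
        # One refinement block per pyramid level; each runs after a 2x
        # upsample and concatenates the normal map (3 extra channels)
        # resized to the current resolution.
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels + 3, channels, 3, padding=1),
                nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(),
            )
            for _ in range(levels)
        ])
        # Per-level heads predicting 3-channel (x, y, z) detail offsets
        # in the canonical (unposed, unshaped) space.
        self.heads = nn.ModuleList([nn.Conv2d(channels, 3, 1) for _ in range(levels)])

    def forward(self, z, normal_uv):
        """z: (B, latent_dim); normal_uv: (B, 3, H, W) normal UV map."""
        b = z.shape[0]
        feat = self.fc(z).view(b, self.channels, self.base_res, self.base_res)
        outputs = []
        for block, head in zip(self.blocks, self.heads):
            feat = F.interpolate(feat, scale_factor=2, mode="bilinear",
                                 align_corners=False)
            cond = F.interpolate(normal_uv, size=feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
            feat = block(torch.cat([feat, cond], dim=1))
            outputs.append(head(feat))  # coarse-to-fine detail maps
        return outputs

decoder = PyramidDecoder()
z = torch.randn(2, 64)
normals = torch.rand(2, 3, 128, 128) * 2 - 1  # dummy unit-range normals
maps = decoder(z, normals)
print([tuple(m.shape) for m in maps])
# [(2, 3, 32, 32), (2, 3, 64, 64), (2, 3, 128, 128)]
```

Returning one output per level is what makes the detail generation controllable: truncating the list at a coarser level yields a smoother garment, while keeping all levels adds fine wrinkles.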
Related papers
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes [62.82552328188602]
We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans.
We devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies.
arXiv Detail & Related papers (2023-08-21T11:23:25Z)
- xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization on internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z)
- 3D-Aware Indoor Scene Synthesis with Depth Priors [62.82867334012399]
Existing methods fail to model indoor scenes due to the large diversity of room layouts and the objects inside.
We argue that indoor scenes do not have a shared intrinsic structure, and hence using 2D images alone cannot adequately guide the model with 3D geometry.
arXiv Detail & Related papers (2022-02-17T09:54:29Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation.
Our approach achieves substantially higher image quality and better 3D control than previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- DECOR-GAN: 3D Shape Detailization by Conditional Refinement [50.8801457082181]
We introduce a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details.
We demonstrate that our method can refine a coarse shape into a variety of detailed shapes with different styles.
arXiv Detail & Related papers (2020-12-16T18:52:10Z)
- Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer textures from clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
- PeeledHuman: Robust Shape Representation for Textured 3D Human Body Reconstruction [7.582064461041252]
PeeledHuman encodes the human body as a set of Peeled Depth and RGB maps in 2D.
We train PeelGAN using a 3D Chamfer loss and other 2D losses to generate multiple depth values per pixel and a corresponding RGB field per vertex.
In our simple non-parametric solution, the generated Peeled Depth maps are back-projected to 3D space to obtain a complete textured 3D shape.
arXiv Detail & Related papers (2020-02-16T20:03:24Z)
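The back-projection step in the PeeledHuman summary is standard pinhole-camera geometry: a pixel (u, v) with depth Z maps to the camera-space point X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. Below is a minimal sketch of that step for a single depth layer; the intrinsic values and the zero-depth convention for empty pixels are illustrative assumptions, and the same routine would be applied to each peeled layer in turn.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map to camera-space 3D points
    using a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Drop empty pixels, assuming depth 0 marks background (a convention
    # chosen for this sketch, not taken from the paper).
    return points[points[:, 2] > 0]

# Dummy example: a flat surface 2 m from the camera in a 64x64 map.
depth = np.full((64, 64), 2.0)
pts = backproject_depth(depth, fx=60.0, fy=60.0, cx=32.0, cy=32.0)
print(pts.shape)  # (4096, 3)
```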