Structure-Preserving 3D Garment Modeling with Neural Sewing Machines
- URL: http://arxiv.org/abs/2211.06701v1
- Date: Sat, 12 Nov 2022 16:43:29 GMT
- Title: Structure-Preserving 3D Garment Modeling with Neural Sewing Machines
- Authors: Xipeng Chen, Guangrun Wang, Dizhong Zhu, Xiaodan Liang, Philip H. S. Torr and Liang Lin
- Abstract summary: We propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling.
NSM is capable of representing 3D garments under diverse garment shapes and topologies, realistically reconstructing 3D garments from 2D images with the preserved structure, and accurately manipulating the 3D garment categories, shapes, and topologies.
- Score: 190.70647799442565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D garment modeling is a critical and challenging topic in the area of
computer vision and graphics, with increasing attention focused on garment
representation learning, garment reconstruction, and controllable garment
manipulation, whereas existing methods are constrained to modeling garments of
specific categories or with relatively simple topologies. In this paper, we
propose a novel Neural Sewing Machine (NSM), a learning-based framework for
structure-preserving 3D garment modeling, which is capable of learning
representations for garments with diverse shapes and topologies and is
successfully applied to 3D garment reconstruction and controllable
manipulation. To model generic garments, we first obtain a sewing pattern
embedding via a unified sewing pattern encoding module, since a sewing pattern
accurately describes the intrinsic structure and topology of a 3D garment. A
3D garment decoder then decodes this embedding into a 3D garment represented
as UV-position maps with masks. To preserve
the intrinsic structure of the predicted 3D garment, we introduce an
inner-panel structure-preserving loss, an inter-panel structure-preserving
loss, and a surface-normal loss in the learning process of our framework. We
evaluate NSM on a public 3D garment dataset with sewing patterns covering diverse
garment shapes and categories. Extensive experiments demonstrate that the
proposed NSM is capable of representing 3D garments under diverse garment
shapes and topologies, realistically reconstructing 3D garments from 2D images
with the preserved structure, and accurately manipulating the 3D garment
categories, shapes, and topologies, outperforming the state-of-the-art methods
by a clear margin.
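The abstract outlines a concrete pipeline (a unified sewing-pattern encoder, a decoder that emits per-panel UV-position maps with masks, and inner-panel, inter-panel, and surface-normal losses) but no reference code. Below is a minimal PyTorch sketch of that pipeline as described; every module name, tensor shape, and hyperparameter (SewingPatternEncoder, GarmentDecoder, uv_res, the seam index format, etc.) is an illustrative assumption, not the authors' released implementation.

```python
# Illustrative sketch only: module names, shapes, and hyperparameters are
# assumptions, not the released NSM code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SewingPatternEncoder(nn.Module):
    """Encode a sewing pattern (panels rasterized to a fixed UV grid) into one embedding."""

    def __init__(self, n_panels: int = 8, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_panels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, panels: torch.Tensor) -> torch.Tensor:
        # panels: (B, n_panels, H, W) rasterized panel masks -> (B, embed_dim)
        return self.fc(self.conv(panels).flatten(1))


class GarmentDecoder(nn.Module):
    """Decode the embedding into per-panel UV-position maps plus occupancy masks."""

    def __init__(self, n_panels: int = 8, embed_dim: int = 256, uv_res: int = 32):
        super().__init__()
        self.n_panels, self.uv_res = n_panels, uv_res
        # 3 position channels (x, y, z) + 1 mask channel per panel
        self.fc = nn.Linear(embed_dim, n_panels * 4 * uv_res * uv_res)

    def forward(self, z: torch.Tensor):
        out = self.fc(z).view(-1, self.n_panels, 4, self.uv_res, self.uv_res)
        xyz = out[:, :, :3]                  # 3D point stored at every UV texel
        mask = torch.sigmoid(out[:, :, 3:])  # which UV texels belong to the panel
        return xyz, mask


def surface_normal_loss(pred_xyz, gt_xyz, mask):
    """Surface-normal term: normals from finite differences on the UV grid."""

    def normals(xyz):  # xyz: (B, P, 3, H, W)
        du = xyz[..., 1:, :-1] - xyz[..., :-1, :-1]  # derivative along u
        dv = xyz[..., :-1, 1:] - xyz[..., :-1, :-1]  # derivative along v
        return F.normalize(torch.cross(du, dv, dim=2), dim=2)

    m = mask[..., :-1, :-1]
    cos = (normals(pred_xyz) * normals(gt_xyz)).sum(dim=2, keepdim=True)
    return ((1.0 - cos) * m).sum() / m.sum().clamp(min=1.0)


def inter_panel_loss(xyz, seams):
    """Inter-panel term: UV texels stitched by a seam should coincide in 3D.
    `seams` is a list of ((panel_a, u_a, v_a), (panel_b, u_b, v_b)) index pairs."""
    terms = [F.mse_loss(xyz[:, pa, :, ua, va], xyz[:, pb, :, ub, vb])
             for (pa, ua, va), (pb, ub, vb) in seams]
    return torch.stack(terms).mean() if terms else xyz.new_zeros(())
```

An inner-panel structure-preserving term would analogously penalize distortion of each panel's 2D geometry when lifted to 3D, e.g., by matching 3D edge lengths between neighboring UV texels to their panel-space lengths; a training step would combine these terms with a standard reconstruction loss.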
Related papers
- Garment3DGen: 3D Garment Stylization and Texture Generation [11.836357439129301]
Garment3DGen is a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance.
We leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries.
We generate high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance.
arXiv Detail & Related papers (2024-03-27T17:59:33Z)
- WordRobe: Text-Guided Generation of Textured 3D Garments [30.614451083408266]
"WordRobe" is a novel framework for the generation of unposed & textured 3D garment meshes from user-friendly text prompts.
We demonstrate superior performance over current state-of-the-art methods for learning a 3D garment latent space, garment synthesis, and text-driven texture synthesis.
arXiv Detail & Related papers (2024-03-26T09:44:34Z)
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z)
- xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image [4.056667956036515]
We present a novel framework for template-free textured 3D garment digitization.
More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps.
We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and generalization on internet images.
arXiv Detail & Related papers (2022-08-27T05:57:00Z)
- NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point Clouds of Garments [7.331799534004012]
We propose to use a garment sewing pattern to facilitate intrinsic garment shape estimation.
We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality.
Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types with pattern topologies unseen during training.
arXiv Detail & Related papers (2022-01-31T08:33:49Z)
- Generating Datasets of 3D Garments with Sewing Patterns [10.729374293332281]
We create the first large-scale synthetic dataset of 3D garment models with their sewing patterns.
The dataset contains more than 20,000 garment design variations produced from 19 different base types.
arXiv Detail & Related papers (2021-09-12T23:03:48Z)
- DensePose 3D: Lifting Canonical Surface Maps of Articulated Objects to the Third Dimension [71.71234436165255]
We contribute DensePose 3D, a method that can learn such reconstructions in a weakly supervised fashion from 2D image annotations only.
Because it does not require 3D scans, DensePose 3D can be used for learning a wide range of articulated categories such as different animal species.
We show significant improvements compared to state-of-the-art non-rigid structure-from-motion baselines on both synthetic and real data on categories of humans and animals.
arXiv Detail & Related papers (2021-08-31T18:33:55Z)
- 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
- Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer the textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)