NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point
Clouds of Garments
- URL: http://arxiv.org/abs/2201.13063v1
- Date: Mon, 31 Jan 2022 08:33:49 GMT
- Title: NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point
Clouds of Garments
- Authors: Maria Korosteleva, Sung-Hee Lee
- Abstract summary: We propose to use a garment sewing pattern to facilitate the intrinsic garment shape estimation.
We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality.
Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types with pattern topologies unseen during training.
- Score: 7.331799534004012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fields of SocialVR, performance capture, and virtual try-on are often
faced with a need to faithfully reproduce real garments in the virtual world.
One critical task is the disentanglement of the intrinsic garment shape from
deformations due to fabric properties, physical forces, and contact with the
body. We propose to use a garment sewing pattern, a realistic and compact
garment descriptor, to facilitate the intrinsic garment shape estimation.
Another major challenge is a high diversity of shapes and designs in the
domain. The most common approach for Deep Learning on 3D garments is to build
specialized models for individual garments or garment types. We argue that
building a unified model for various garment designs has the benefit of
generalization to novel garment types, hence covering a larger design domain
than individual models would. We introduce NeuralTailor, a novel architecture
based on point-level attention for set regression with variable cardinality,
and apply it to the task of reconstructing 2D garment sewing patterns from the
3D point cloud garment models. Our experiments show that NeuralTailor
successfully reconstructs sewing patterns and generalizes to garment types with
pattern topologies unseen during training.
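The abstract only names the core idea, point-level attention for set regression with variable cardinality. As a rough illustration of that idea (not the authors' implementation), the sketch below pools per-point features into a fixed number of panel slots with attention and predicts a per-slot existence probability so the size of the output set can vary; all module names, dimensions, and the existence head are placeholder assumptions, written against PyTorch.

```python
# Minimal sketch of point-level attention for set regression with variable
# cardinality (illustrative placeholders, not the NeuralTailor architecture).
import torch
import torch.nn as nn

class PointAttentionSetRegressor(nn.Module):
    def __init__(self, point_dim=3, feat_dim=128, max_panels=23, panel_param_dim=64):
        super().__init__()
        # Per-point feature encoder (stand-in for the paper's point encoder).
        self.point_mlp = nn.Sequential(
            nn.Linear(point_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Each point scores its affinity to every panel slot.
        self.slot_scores = nn.Linear(feat_dim, max_panels)
        # Decode each aggregated slot feature into panel parameters plus a
        # probability that the slot is used, giving a variable-size output set.
        self.panel_head = nn.Linear(feat_dim, panel_param_dim)
        self.exists_head = nn.Linear(feat_dim, 1)

    def forward(self, points):
        # points: (B, N, 3) garment point cloud; N may differ between batches.
        feats = self.point_mlp(points)                         # (B, N, F)
        attn = torch.softmax(self.slot_scores(feats), dim=1)   # (B, N, K), normalized over points
        slots = torch.einsum('bnk,bnf->bkf', attn, feats)      # (B, K, F) attention-pooled slots
        panels = self.panel_head(slots)                        # (B, K, P) panel parameters
        exists = torch.sigmoid(self.exists_head(slots))        # (B, K, 1) keep-slot probability
        return panels, exists

# Usage: slots with exists > 0.5 form the predicted sewing pattern.
model = PointAttentionSetRegressor()
panels, exists = model(torch.randn(2, 2048, 3))
```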
Related papers
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using
Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- SPnet: Estimating Garment Sewing Patterns from a Single Image [10.604555099281173]
This paper presents a novel method for reconstructing 3D garment models from a single image of a posed user.
By inferring the fundamental shape of the garment through sewing patterns from a single image, we can generate 3D garments that can adaptively deform to arbitrary poses.
arXiv Detail & Related papers (2023-12-26T09:51:25Z)
- Towards Garment Sewing Pattern Reconstruction from a Single Image [76.97825595711444]
A garment sewing pattern represents the intrinsic rest shape of a garment and is central to many applications such as fashion design, virtual try-on, and digital avatars.
We first synthesize a versatile dataset, named SewFactory, which consists of around 1M images and ground-truth sewing patterns.
We then propose a two-level Transformer network called Sewformer, which significantly improves the sewing pattern prediction performance.
arXiv Detail & Related papers (2023-11-07T18:59:51Z)
- Structure-Preserving 3D Garment Modeling with Neural Sewing Machines [190.70647799442565]
We propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling.
NSM is capable of representing 3D garments with diverse shapes and topologies, realistically reconstructing 3D garments from 2D images with their structure preserved, and accurately manipulating 3D garment categories, shapes, and topologies.
arXiv Detail & Related papers (2022-11-12T16:43:29Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Generating Datasets of 3D Garments with Sewing Patterns [10.729374293332281]
We create the first large-scale synthetic dataset of 3D garment models with their sewing patterns.
The dataset contains more than 20000 garment design variations produced from 19 different base types.
arXiv Detail & Related papers (2021-09-12T23:03:48Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)