Towards Garment Sewing Pattern Reconstruction from a Single Image
- URL: http://arxiv.org/abs/2311.04218v1
- Date: Tue, 7 Nov 2023 18:59:51 GMT
- Title: Towards Garment Sewing Pattern Reconstruction from a Single Image
- Authors: Lijuan Liu, Xiangyu Xu, Zhijie Lin, Jiabin Liang, Shuicheng Yan
- Abstract summary: A garment sewing pattern represents the intrinsic rest shape of a garment and is central to many applications such as fashion design, virtual try-on, and digital avatars.
We first synthesize a versatile dataset, named SewFactory, which consists of around 1M images and ground-truth sewing patterns.
We then propose a two-level Transformer network called Sewformer, which significantly improves the sewing pattern prediction performance.
- Score: 76.97825595711444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A garment sewing pattern represents the intrinsic rest shape of a
garment and is central to many applications such as fashion design, virtual
try-on, and digital avatars. In this work, we explore the challenging problem of recovering
garment sewing patterns from daily photos for augmenting these applications. To
solve the problem, we first synthesize a versatile dataset, named SewFactory,
which consists of around 1M images and ground-truth sewing patterns for model
training and quantitative evaluation. SewFactory covers a wide range of human
poses, body shapes, and sewing patterns, and possesses realistic appearances
thanks to the proposed human texture synthesis network. Then, we propose a
two-level Transformer network called Sewformer, which significantly improves
the sewing pattern prediction performance. Extensive experiments demonstrate
that the proposed framework is effective in recovering sewing patterns and
generalizes well to casually taken human photos. Code, dataset, and pre-trained
models are available at: https://sewformer.github.io.
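To make the target of this task concrete, the following is a minimal, hypothetical sketch of the kind of sewing-pattern representation such methods predict: a garment as a set of 2D panels, each panel a closed loop of edges, with stitches pairing edges across panels. All class and field names here are illustrative assumptions, not the paper's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    start: tuple                     # 2D start vertex in panel coordinates
    end: tuple                       # 2D end vertex
    curvature: tuple = (0.0, 0.0)    # optional control point for curved edges

@dataclass
class Panel:
    name: str
    edges: list = field(default_factory=list)

    def is_closed(self, tol=1e-6):
        """Each edge must end where the next one starts, wrapping around."""
        for a, b in zip(self.edges, self.edges[1:] + self.edges[:1]):
            if abs(a.end[0] - b.start[0]) > tol or abs(a.end[1] - b.start[1]) > tol:
                return False
        return True

@dataclass
class SewingPattern:
    panels: dict = field(default_factory=dict)
    stitches: list = field(default_factory=list)  # pairs of (panel name, edge index)

    def add_panel(self, panel):
        self.panels[panel.name] = panel

    def add_stitch(self, a, b):
        self.stitches.append((a, b))

# A toy "pattern": two rectangular panels stitched along their side edges.
rect_edges = lambda: [
    Edge((0, 0), (4, 0)), Edge((4, 0), (4, 6)),
    Edge((4, 6), (0, 6)), Edge((0, 6), (0, 0)),
]
front, back = Panel("front", rect_edges()), Panel("back", rect_edges())

pattern = SewingPattern()
pattern.add_panel(front)
pattern.add_panel(back)
pattern.add_stitch(("front", 1), ("back", 3))  # right side seam
pattern.add_stitch(("front", 3), ("back", 1))  # left side seam

print(front.is_closed())      # True: the edge loop closes
print(len(pattern.stitches))  # 2
```

A network like the one described above would regress exactly this kind of structured output (panel shapes plus stitch connectivity) from image features, which is what makes the problem harder than predicting a single mesh.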
Related papers
- Dress-1-to-3: Single Image to Simulation-Ready 3D Outfit with Diffusion Prior and Differentiable Physics [27.697150953628572]
This paper focuses on 3D garment generation, a key area for applications like virtual try-on with dynamic garment animations.
We introduce Dress-1-to-3, a novel pipeline that reconstructs physics-plausible, simulation-ready separated garments with sewing patterns and humans from an in-the-wild image.
arXiv Detail & Related papers (2025-02-05T18:49:03Z)
- Multimodal Latent Diffusion Model for Complex Sewing Pattern Generation [52.13927859375693]
We propose SewingLDM, a multi-modal generative model that generates sewing patterns controlled by text prompts, body shapes, and garment sketches.
To learn the sewing pattern distribution in the latent space, we design a two-step training strategy.
Comprehensive qualitative and quantitative experiments show the effectiveness of our proposed method.
arXiv Detail & Related papers (2024-12-19T02:05:28Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- SPnet: Estimating Garment Sewing Patterns from a Single Image [10.604555099281173]
This paper presents a novel method for reconstructing 3D garment models from a single image of a posed user.
By inferring the fundamental shape of the garment through sewing patterns from a single image, we can generate 3D garments that can adaptively deform to arbitrary poses.
arXiv Detail & Related papers (2023-12-26T09:51:25Z)
- A Two-stage Personalized Virtual Try-on Framework with Shape Control and Texture Guidance [7.302929117437442]
This paper proposes a new personalized virtual try-on model (PE-VITON), which uses two stages (shape control and texture guidance) to decouple clothing attributes.
The proposed model effectively addresses the shortcomings of traditional try-on methods: poorly reproduced clothing folds, degraded results under complex human poses, blurred clothing edges, and unclear texture styles.
arXiv Detail & Related papers (2023-12-24T13:32:55Z)
- Garment Recovery with Shape and Deformation Priors [51.41962835642731]
We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation.
Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-17T07:06:21Z)
- NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point Clouds of Garments [7.331799534004012]
We propose to use a garment sewing pattern to facilitate the intrinsic garment shape estimation.
We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality.
Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types with pattern topologies unseen during training.
arXiv Detail & Related papers (2022-01-31T08:33:49Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications, such as in a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation in cases such as clothing changes or multiple persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.