Significance of Anatomical Constraints in Virtual Try-On
- URL: http://arxiv.org/abs/2401.02110v1
- Date: Thu, 4 Jan 2024 07:43:40 GMT
- Title: Significance of Anatomical Constraints in Virtual Try-On
- Authors: Debapriya Roy, Sanchayan Santra, Diganta Mukherjee, and Bhabatosh
Chanda
- Abstract summary: A VTON system takes a clothing source and a person's image to predict the try-on output of the person in the given clothing.
Existing methods fail by generating inaccurate clothing deformations.
We propose a part-based warping approach that divides the clothing into independently warpable parts to warp them separately and later combine them.
- Score: 3.5002397743250504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The system of Virtual Try-ON (VTON) allows a user to try a product virtually.
In general, a VTON system takes a clothing source and a person's image to
predict the try-on output of the person in the given clothing. Although
existing methods perform well for simple poses, in the case of bent or
crossed-arm postures, or when there is a significant difference between the alignment of the
source clothing and the pose of the target person, these methods fail by
generating inaccurate clothing deformations. In the VTON methods that employ
Thin Plate Spline (TPS) based clothing transformations, this mainly occurs for
two reasons: (1) the second-order smoothness constraint of TPS, which restricts
the bending of the object plane, and (2) overlaps among different clothing parts
(e.g., sleeves and torso), which cannot be modeled by a single TPS
transformation, as it treats the clothing as a single planar object and
therefore disregards the independent movement of different clothing parts. To
this end, we make two
major contributions. Concerning the bending limitations of TPS, we propose a
human AnaTomy-Aware Geometric (ATAG) transformation. Regarding the overlap
issue, we propose a part-based warping approach that divides the clothing into
independently warpable parts to warp them separately and later combine them.
Extensive analysis shows the efficacy of this approach.
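The two failure modes above can be illustrated with a small sketch. The snippet below fits thin-plate-spline warps with SciPy's `RBFInterpolator` (whose `thin_plate_spline` kernel carries exactly the second-order bending-energy penalty the abstract refers to) and contrasts a single global TPS with one TPS per clothing part. The landmark coordinates and the torso/sleeve split are illustrative assumptions, not the paper's actual ATAG transformation or landmarks.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical matched landmarks between the flat in-shop clothing and the
# target pose (illustrative values, not the paper's actual landmarks).
torso_src = np.array([[0.4, 0.0], [0.9, 0.0], [0.4, 1.0], [0.9, 1.0]])
torso_tgt = np.array([[0.42, 0.05], [0.88, 0.0], [0.4, 1.0], [0.9, 1.05]])
sleeve_src = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.5], [0.3, 0.5]])
# A bent arm moves the sleeve independently of the torso:
sleeve_tgt = np.array([[0.10, 0.40], [0.32, 0.20], [0.25, 0.80], [0.45, 0.55]])

def fit_tps(src, tgt):
    # Thin Plate Spline warp: the RBF interpolant whose bending-energy
    # (second-order smoothness) penalty restricts how sharply the
    # cloth plane can fold.
    return RBFInterpolator(src, tgt, kernel='thin_plate_spline')

# A single global TPS treats the garment as one planar object.
global_tps = fit_tps(np.vstack([torso_src, sleeve_src]),
                     np.vstack([torso_tgt, sleeve_tgt]))

# Part-based alternative: one TPS per independently movable part,
# warped separately and composited afterwards.
torso_tps = fit_tps(torso_src, torso_tgt)
sleeve_tps = fit_tps(sleeve_src, sleeve_tgt)

# A TPS interpolates its control points exactly.
print(np.allclose(sleeve_tps(sleeve_src), sleeve_tgt))  # True

# Away from the landmarks, the global warp couples sleeve pixels to the
# torso motion, while the per-part warp keeps the sleeve independent.
interior = np.array([[0.15, 0.25]])
print(global_tps(interior), sleeve_tps(interior))
```

In a full pipeline each part's pixels would be remapped through its own spline and the warped parts composited by their segmentation masks; this sketch only shows the point-level warps.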
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z)
- Better Fit: Accommodate Variations in Clothing Types for Virtual Try-on [25.550019373321653]
Image-based virtual try-on aims to transfer target in-shop clothing to a dressed model image.
We propose an adaptive mask training paradigm that dynamically adjusts training masks.
For unpaired try-on validation, we construct a comprehensive cross-try-on benchmark.
arXiv Detail & Related papers (2024-03-13T12:07:14Z)
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Significance of Skeleton-based Features in Virtual Try-On [3.7552180803118325]
The idea of Virtual Try-ON (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home.
Most existing VTON methods produce inconsistent results when a person poses with their arms folded.
We propose two learning-based modules: a synthesizer network and a mask prediction network.
arXiv Detail & Related papers (2022-08-17T05:24:03Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- SPG-VTON: Semantic Prediction Guidance for Multi-pose Virtual Try-on [27.870740623131816]
Image-based virtual try-on is challenging in fitting target in-shop clothes onto a reference person under diverse human poses.
We propose an end-to-end Semantic Prediction Guidance multi-pose Virtual Try-On Network (SPG-VTON).
We evaluate the proposed method on the most massive multi-pose dataset (MPV) and the DeepFashion dataset.
arXiv Detail & Related papers (2021-08-03T15:40:50Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
- LGVTON: A Landmark Guided Approach to Virtual Try-On [4.617329011921226]
Given the images of two people, a person and a model, it generates a rendition of the person wearing the clothes of the model.
This is useful because, on most e-commerce websites, images of the clothes alone are usually not available.
arXiv Detail & Related papers (2020-04-01T16:49:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.