Significance of Skeleton-based Features in Virtual Try-On
- URL: http://arxiv.org/abs/2208.08076v3
- Date: Sat, 6 Jan 2024 06:42:07 GMT
- Title: Significance of Skeleton-based Features in Virtual Try-On
- Authors: Debapriya Roy, Sanchayan Santra, Diganta Mukherjee, Bhabatosh Chanda
- Abstract summary: The idea of Virtual Try-ON (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home.
Most existing VTON methods produce inconsistent results when a person poses with their arms folded.
We propose two learning-based modules: a synthesizer network and a mask prediction network.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The idea of \textit{Virtual Try-ON} (VTON) benefits e-retailing by giving a
user the convenience of trying on clothing from the comfort of their home. In
general, most existing VTON methods produce inconsistent results when a person
posing with their arms folded, i.e., bent or crossed, wants to try on an
outfit. The problem becomes severe in the case of long-sleeved outfits, since
crossed-arm postures can cause different clothing parts to overlap. Existing
approaches, especially warping-based methods employing the \textit{Thin Plate
Spline (TPS)} transform, cannot tackle such cases. To this end, we propose an
approach in which the clothing from the source person is segmented into
semantically meaningful parts and each part is warped independently to the
shape of the person. To address the bending issue, we employ hand-crafted
geometric features consistent with human body geometry for warping the source
outfit. In addition, we propose two learning-based modules: a synthesizer
network and a mask prediction network. Together, these aim to produce a
photo-realistic, pose-robust VTON solution without requiring any paired
training data. Comparisons with benchmark methods clearly establish the
effectiveness of the approach.
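To make the part-based warping idea concrete, below is a minimal, hypothetical sketch. It is not the authors' actual pipeline (which uses hand-crafted geometric features plus the learned synthesizer and mask prediction networks); it substitutes a simple per-part affine fit between matched skeleton keypoints, and the part grouping in `PART_JOINTS`, the joint names, and the mask/keypoint inputs are all illustrative assumptions.

```python
# Sketch: warp each clothing part independently using skeleton keypoints,
# then composite. Assumes part masks and matching 2D joint locations are
# already available and that source and target images share one resolution.
import cv2
import numpy as np

# Hypothetical grouping of skeleton joints per clothing part.
PART_JOINTS = {
    "torso":        ["l_shoulder", "r_shoulder", "l_hip", "r_hip"],
    "left_sleeve":  ["l_shoulder", "l_elbow", "l_wrist"],
    "right_sleeve": ["r_shoulder", "r_elbow", "r_wrist"],
}

def warp_parts(cloth_img, part_masks, src_joints, dst_joints):
    """Warp each clothing part independently and composite the results.

    cloth_img:  HxWx3 source clothing image.
    part_masks: dict part -> HxW uint8 mask of that part in cloth_img.
    src_joints, dst_joints: dict joint name -> (x, y) in source / target image.
    """
    h, w = cloth_img.shape[:2]
    out = np.zeros_like(cloth_img)
    for part, joints in PART_JOINTS.items():
        src = np.float32([src_joints[j] for j in joints])
        dst = np.float32([dst_joints[j] for j in joints])
        # A similarity/affine fit per part stands in for the geometric,
        # per-part warp described in the abstract.
        M, _ = cv2.estimateAffinePartial2D(src, dst)
        if M is None:
            continue
        piece = cv2.bitwise_and(cloth_img, cloth_img, mask=part_masks[part])
        warped = cv2.warpAffine(piece, M, (w, h))
        warped_mask = cv2.warpAffine(part_masks[part], M, (w, h))
        out[warped_mask > 0] = warped[warped_mask > 0]
    return out
```

In crossed-arm poses the warped parts overlap; the sketch's last-write-wins compositing is exactly the step where the paper's mask prediction network would decide part visibility, and the synthesizer network would blend the composite into a photo-realistic output.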
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z)
- High-Fidelity Virtual Try-on with Large-Scale Unpaired Learning [36.7085107012134]
Virtual try-on (VTON) transfers a target clothing image to a reference person, where clothing fidelity is a key requirement for downstream e-commerce applications.
We propose a novel framework, Boosted Virtual Try-on (BVTON), that leverages large-scale unpaired learning for high-fidelity try-on.
arXiv Detail & Related papers (2024-11-03T15:00:26Z)
- Significance of Anatomical Constraints in Virtual Try-On [3.5002397743250504]
A VTON system takes a clothing source and a person's image to predict the try-on output of the person in the given clothing.
Existing methods fail by generating inaccurate clothing deformations.
We propose a part-based warping approach that divides the clothing into independently warpable parts to warp them separately and later combine them.
arXiv Detail & Related papers (2024-01-04T07:43:40Z)
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves performance equivalent to the state of the art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- ECON: Explicit Clothed humans Optimized via Normal integration [54.51948104460489]
We present ECON, a method for creating 3D humans in loose clothes.
It infers detailed 2D maps for the front and back sides of a clothed person.
It "inpaints" the missing geometry between d-BiNI surfaces.
arXiv Detail & Related papers (2022-12-14T18:59:19Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN includes an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- LGVTON: A Landmark Guided Approach to Virtual Try-On [4.617329011921226]
Given images of two people, a person and a model, LGVTON generates a rendition of the person wearing the model's clothes.
This is useful because images of clothes alone are not usually available on most e-commerce websites.
arXiv Detail & Related papers (2020-04-01T16:49:57Z)