LGVTON: A Landmark Guided Approach to Virtual Try-On
- URL: http://arxiv.org/abs/2004.00562v2
- Date: Wed, 29 Sep 2021 05:46:45 GMT
- Title: LGVTON: A Landmark Guided Approach to Virtual Try-On
- Authors: Debapriya Roy, Sanchayan Santra, and Bhabatosh Chanda
- Abstract summary: Given the images of two people: a person and a model, it generates a rendition of the person wearing the clothes of the model.
This is useful because, on most e-commerce websites, images of the clothes alone (without a wearer) are usually not available.
- Score: 4.617329011921226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a Landmark Guided Virtual Try-On (LGVTON) method
for clothes, which aims to solve the problem of clothing trials on e-commerce
websites. Given the images of two people, a person and a model, it generates a
rendition of the person wearing the clothes of the model. This is useful
because, on most e-commerce websites, images of the clothes alone (without a
wearer) are usually not available. We follow a three-stage approach to achieve our
objective. In the first stage, LGVTON warps the clothes of the model using a
Thin-Plate Spline (TPS) based transformation to fit the person. Unlike previous
TPS-based methods, we use the landmarks (of human and clothes) to compute the
TPS transformation. This enables the warping to work independently of the
complex patterns, such as stripes, florals, and textures, present on the
clothes. However, this computed warp may not always be precise. We therefore
refine it further in the subsequent stages with the help of a mask generator
(Stage 2) and an image synthesizer (Stage 3). The mask
generator improves the fit of the warped clothes, and the image synthesizer
ensures a realistic output. To tackle the lack of paired training data, we
resort to a self-supervised training strategy. Here, paired data refers to an
image pair of a model and a person wearing the same cloth. We compare LGVTON
with four existing methods on two popular fashion datasets, namely MPV and
DeepFashion, using two performance measures, FID (Fréchet Inception Distance)
and SSIM (Structural Similarity Index). In most cases, the proposed method
outperforms the state-of-the-art methods.
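
As a concrete illustration of Stage 1, the following is a minimal sketch of a landmark-guided TPS warp: the transformation is estimated only from corresponding landmarks on the clothes and on the person, so it is unaffected by stripes, florals, or other patterns on the garment. Using SciPy's RBF interpolator with a thin-plate-spline kernel and the toy landmark arrays below are assumptions made here for illustration; LGVTON's actual landmark detectors and warping details are in the paper.

```python
# Sketch of landmark-guided TPS warping (illustrative, not LGVTON's code).
import numpy as np
from scipy.interpolate import RBFInterpolator  # thin-plate-spline kernel
from scipy.ndimage import map_coordinates

def tps_warp(cloth, cloth_lm, person_lm, out_hw):
    """Backward-warp `cloth` so its landmarks land on the person's landmarks.

    cloth:     (H, W, 3) uint8 image of the model's clothing
    cloth_lm:  (N, 2) landmark coordinates (x, y) on the cloth image
    person_lm: (N, 2) corresponding landmark coordinates on the person image
    out_hw:    (H_out, W_out) size of the warped output
    """
    # Fit a TPS mapping from output (person) coordinates back to cloth
    # coordinates; backward mapping avoids holes in the warped image.
    tps = RBFInterpolator(person_lm, cloth_lm, kernel="thin_plate_spline")
    h, w = out_hw
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = tps(grid)  # source (x, y) in the cloth image for every output pixel
    warped = np.stack(
        [map_coordinates(cloth[..., c].astype(float),
                         [src[:, 1], src[:, 0]],  # (row, col) order
                         order=1).reshape(h, w)
         for c in range(3)], axis=-1)
    return np.clip(warped, 0, 255).astype(np.uint8)

# Hypothetical usage with 4 matched landmarks; real systems use many more.
cloth = np.zeros((256, 192, 3), dtype=np.uint8)
cloth_lm = np.array([[40.0, 30], [150, 30], [40, 200], [150, 200]])
person_lm = np.array([[50.0, 40], [140, 35], [45, 210], [145, 205]])
warped = tps_warp(cloth, cloth_lm, person_lm, (256, 192))
```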
Related papers
- High-Fidelity Virtual Try-on with Large-Scale Unpaired Learning [36.7085107012134]
Virtual try-on (VTON) transfers a target clothing image to a reference person, where clothing fidelity is a key requirement for downstream e-commerce applications.
We propose a novel framework, Boosted Virtual Try-on (BVTON), to leverage large-scale unpaired learning for high-fidelity try-on.
arXiv Detail & Related papers (2024-11-03T15:00:26Z)
- MV-VTON: Multi-View Virtual Try-On with Diffusion Models [91.71150387151042]
The goal of image-based virtual try-on is to generate an image of the target person naturally wearing the given clothing.
Existing methods focus solely on frontal try-on using frontal clothing.
We introduce Multi-View Virtual Try-ON (MV-VTON), which aims to reconstruct the dressing results from multiple views using the given clothes.
arXiv Detail & Related papers (2024-04-26T12:27:57Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- Significance of Anatomical Constraints in Virtual Try-On [3.5002397743250504]
A VTON system takes a clothing source and a person's image to predict the try-on output of the person in the given clothing.
Existing methods fail by generating inaccurate clothing deformations.
We propose a part-based warping approach that divides the clothing into independently warpable parts, warps them separately, and later combines them (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-01-04T07:43:40Z)
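
The following is a minimal, self-contained sketch of the part-based warping idea summarized above: the clothing is divided into parts, each part is warped independently, and the warped parts are then combined. Driving each part with a simple affine transform estimated from three landmark correspondences is a simplifying assumption made here for illustration; the cited paper's actual part definitions and warp family may differ.

```python
# Sketch of per-part warping and compositing (illustrative assumptions only).
import cv2
import numpy as np

def warp_parts(cloth, parts, out_wh):
    """cloth: (H, W, 3) uint8 clothing image.
    parts: list of (mask, src_tri, dst_tri), where mask is an (H, W) uint8
    part mask and src_tri/dst_tri are (3, 2) float32 landmark triplets on
    the cloth and person images, respectively."""
    w, h = out_wh
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for mask, src_tri, dst_tri in parts:
        A = cv2.getAffineTransform(src_tri, dst_tri)   # per-part transform
        piece = cv2.warpAffine(cv2.bitwise_and(cloth, cloth, mask=mask),
                               A, (w, h))
        part_mask = cv2.warpAffine(mask, A, (w, h))
        out[part_mask > 0] = piece[part_mask > 0]      # combine warped parts
    return out
```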
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves performance equivalent to the state of the art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- ECON: Explicit Clothed humans Optimized via Normal integration [54.51948104460489]
We present ECON, a method for creating 3D humans in loose clothes.
It infers detailed 2D maps for the front and back sides of a clothed person.
It "inpaints" the missing geometry between d-BiNI surfaces.
arXiv Detail & Related papers (2022-12-14T18:59:19Z)
- Significance of Skeleton-based Features in Virtual Try-On [3.7552180803118325]
The idea of Virtual Try-ON (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home.
Most existing VTON methods produce inconsistent results when a person poses with their arms folded.
We propose two learning-based modules: a synthesizer network and a mask prediction network.
arXiv Detail & Related papers (2022-08-17T05:24:03Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- SPG-VTON: Semantic Prediction Guidance for Multi-pose Virtual Try-on [27.870740623131816]
Image-based virtual try-on is challenging when fitting target in-shop clothes to a reference person under diverse human poses.
We propose an end-to-end Semantic Prediction Guidance multi-pose Virtual Try-On Network (SPG-VTON).
We evaluate the proposed method on the largest multi-pose dataset (MPV) and the DeepFashion dataset.
arXiv Detail & Related papers (2021-08-03T15:40:50Z)
- Cloth Interactive Transformer for Virtual Try-On [106.21605249649957]
We propose a novel two-stage cloth interactive transformer (CIT) method for the virtual try-on task.
In the first stage, we design a CIT matching block, aiming to precisely capture the long-range correlations between the cloth-agnostic person information and the in-shop cloth information.
In the second stage, we put forth a CIT reasoning block for establishing global mutual interactive dependencies among the person representation, the warped clothing item, and the corresponding warped cloth mask (a minimal sketch of this cross-attention pattern follows this entry).
arXiv Detail & Related papers (2021-04-12T14:45:32Z)
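
To make the matching-block idea above concrete, here is a minimal PyTorch sketch of cloth-person cross-attention: person tokens act as queries over cloth tokens, so long-range correlations between the cloth-agnostic person information and the in-shop cloth information can be captured. The feature dimension, head count, and residual-plus-norm layout are illustrative assumptions, not CIT's exact architecture.

```python
import torch
import torch.nn as nn

class ClothPersonAttention(nn.Module):
    """Hypothetical single cross-attention block in the spirit of CIT's
    matching stage; not the paper's exact design."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, person_feat, cloth_feat):
        # person_feat: (B, N_p, C) cloth-agnostic person tokens
        # cloth_feat:  (B, N_c, C) in-shop cloth tokens
        attended, _ = self.attn(query=person_feat, key=cloth_feat,
                                value=cloth_feat)
        return self.norm(person_feat + attended)  # residual connection + norm

# Toy usage: 16x12 feature maps flattened into 192 tokens of width 256.
block = ClothPersonAttention()
fused = block(torch.randn(1, 192, 256), torch.randn(1, 192, 256))
```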