Disentangled Cycle Consistency for Highly-realistic Virtual Try-On
- URL: http://arxiv.org/abs/2103.09479v2
- Date: Fri, 19 Mar 2021 08:08:17 GMT
- Title: Disentangled Cycle Consistency for Highly-realistic Virtual Try-On
- Authors: Chongjian Ge, Yibing Song, Yuying Ge, Han Yang, Wei Liu and Ping Luo
- Abstract summary: Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image.
Existing methods formulate virtual try-on as either in-painting or cycle consistency.
We propose a Disentangled Cycle-consistency Try-On Network (DCTON).
- Score: 34.97658860425598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image virtual try-on replaces the clothes on a person image with a desired
in-shop clothes image. It is challenging because the person and the in-shop
clothes are unpaired. Existing methods formulate virtual try-on as either
in-painting or cycle consistency. Both of these two formulations encourage the
generation networks to reconstruct the input image in a self-supervised manner.
However, existing methods do not differentiate clothing from non-clothing
regions, and generating the whole image in one pass degrades try-on quality
because the image contents are heavily coupled. In this paper, we propose a
Disentangled Cycle-consistency Try-On Network (DCTON). DCTON produces
highly-realistic try-on images by disentangling the key components of virtual
try-on: clothes warping, skin synthesis, and image composition. Moreover,
DCTON can be naturally trained in a self-supervised manner following cycle
consistency learning. Extensive experiments on challenging benchmarks show
that DCTON performs favorably against state-of-the-art approaches.
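The disentangled cycle-consistency idea above can be illustrated with a minimal sketch: dress the person in unpaired clothes, dress the result back in the original clothes, and require reconstruction of the input. The module names, the tiny placeholder networks, and the single L1 cycle loss below are illustrative assumptions, not the authors' DCTON implementation.

```python
# Hypothetical sketch of disentangled cycle-consistency training.
# warp_net / skin_net / fusion_net are placeholder names, not DCTON's modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Placeholder generator; DCTON uses task-specific sub-networks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Disentangled components: clothes warping, skin synthesis, composition.
warp_net = TinyNet(6, 3)    # person (3ch) + in-shop clothes (3ch) -> warped clothes
skin_net = TinyNet(3, 3)    # person -> synthesized skin for exposed regions
fusion_net = TinyNet(9, 3)  # person + warped clothes + skin -> try-on image

params = (list(warp_net.parameters()) + list(skin_net.parameters())
          + list(fusion_net.parameters()))
opt = torch.optim.Adam(params, lr=2e-4)

def try_on(person, clothes):
    warped = warp_net(torch.cat([person, clothes], dim=1))
    skin = skin_net(person)
    return fusion_net(torch.cat([person, warped, skin], dim=1))

# One self-supervised cycle step on unpaired data: dress the person in new
# clothes, then dress the result back in the original clothes and require
# reconstruction of the input (cycle consistency).
person = torch.rand(2, 3, 64, 64)        # person wearing original clothes
orig_clothes = torch.rand(2, 3, 64, 64)  # in-shop image of those clothes
new_clothes = torch.rand(2, 3, 64, 64)   # unpaired target clothes

fake = try_on(person, new_clothes)
cycle = try_on(fake, orig_clothes)
loss = F.l1_loss(cycle, person)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the cycle closes back on the input image, no paired (person, try-on result) ground truth is needed, which is what makes the unpaired setting trainable.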
Related papers
- OutfitAnyone: Ultra-high Quality Virtual Try-On for Any Clothing and Any Person [38.69239957207417]
OutfitAnyone generates high-fidelity and detail-consistent images for virtual clothing trials.
It distinguishes itself with scalability (modulating factors such as pose and body shape) and broad applicability.
OutfitAnyone's performance in diverse scenarios underscores its utility and readiness for real-world deployment.
arXiv Detail & Related papers (2024-07-23T07:04:42Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- DI-Net: Decomposed Implicit Garment Transfer Network for Digital Clothed 3D Human [75.45488434002898]
Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the ability to perceive the depth of each pixel.
We propose a Decomposed Implicit Garment Transfer Network (DI-Net), which can effortlessly reconstruct a 3D human mesh with the new try-on result.
arXiv Detail & Related papers (2023-11-28T14:28:41Z)
- Street TryOn: Learning In-the-Wild Virtual Try-On from Unpaired Person Images [14.616371216662227]
We introduce a StreetTryOn benchmark to support in-the-wild virtual try-on applications.
We also propose a novel method to learn virtual try-on from a set of in-the-wild person images directly without requiring paired data.
arXiv Detail & Related papers (2023-11-27T18:59:02Z)
- OccluMix: Towards De-Occlusion Virtual Try-on by Semantically-Guided Mixup [79.3118064406151]
Image virtual try-on aims at replacing the clothes on a person image with an in-shop garment image.
Prior methods successfully preserve the characteristics of clothing images.
Occlusion, however, remains a pernicious obstacle to realistic virtual try-on.
arXiv Detail & Related papers (2023-01-03T06:29:11Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Data Augmentation using Random Image Cropping for High-resolution Virtual Try-On (VITON-CROP) [18.347532903864597]
VITON-CROP synthesizes images more robustly than existing state-of-the-art virtual try-on models by integrating random crop augmentation (a minimal sketch follows this entry).
In the experiments, we demonstrate that VITON-CROP is superior to VITON-HD both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-11-16T07:40:16Z)
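A minimal sketch of the random-crop idea behind the entry above, assuming standard torchvision APIs; the paper's exact crop policy and resolutions are not specified here, so the output size below is an illustrative assumption.

```python
# Hypothetical illustration: random cropping for high-resolution try-on data.
# When inputs are spatially aligned (person image, human-parsing map, etc.),
# one crop window must be sampled once and applied to all of them.
import torch
import torchvision.transforms.functional as TF
from torchvision import transforms

person = torch.rand(3, 1024, 768)  # high-resolution person image (C, H, W)
parse = torch.rand(1, 1024, 768)   # matching parsing map (assumed input)

# Sample one crop window, then apply it to every aligned input.
i, j, h, w = transforms.RandomCrop.get_params(person, output_size=(768, 576))
person_crop = TF.crop(person, i, j, h, w)
parse_crop = TF.crop(parse, i, j, h, w)
print(person_crop.shape, parse_crop.shape)  # (3, 768, 576) and (1, 768, 576)
```

- Cloth Interactive Transformer for Virtual Try-On [106.21605249649957]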
We propose a novel two-stage cloth interactive transformer (CIT) method for the virtual try-on task.
In the first stage, we design a CIT matching block, aiming to precisely capture the long-range correlations between the cloth-agnostic person information and the in-shop cloth information.
In the second stage, we put forth a CIT reasoning block for establishing global mutual interactive dependencies among the person representation, the warped clothing item, and the corresponding warped cloth mask (a cross-attention sketch follows this entry).
arXiv Detail & Related papers (2021-04-12T14:45:32Z)
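A minimal sketch of the cross-attention pattern behind a matching block like CIT's first stage: person-feature queries attend to in-shop cloth features to capture long-range correlations. The class name, feature shapes, and residual fusion below are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical matching block: cloth-agnostic person tokens query cloth tokens.
import torch
import torch.nn as nn

class MatchingBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, person_feat, cloth_feat):
        # person_feat, cloth_feat: (batch, tokens, dim) flattened feature maps
        out, _ = self.attn(query=person_feat, key=cloth_feat, value=cloth_feat)
        return self.norm(person_feat + out)  # residual fusion of the two streams

block = MatchingBlock()
person_feat = torch.rand(2, 256, 64)  # e.g. a 16x16 feature map as 256 tokens
cloth_feat = torch.rand(2, 256, 64)
fused = block(person_feat, cloth_feat)
print(fused.shape)  # torch.Size([2, 256, 64])
```

The same attention pattern, applied across the person representation, warped clothing, and warped cloth mask streams, could serve as the building block for the second-stage reasoning step.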
This list is automatically generated from the titles and abstracts of the papers on this site.