WG-VITON: Wearing-Guide Virtual Try-On for Top and Bottom Clothes
- URL: http://arxiv.org/abs/2205.04759v1
- Date: Tue, 10 May 2022 09:09:02 GMT
- Title: WG-VITON: Wearing-Guide Virtual Try-On for Top and Bottom Clothes
- Authors: Soonchan Park, Jinah Park
- Abstract summary: We introduce Wearing-Guide VITON (i.e., WG-VITON), which utilizes an additional input binary mask to control the wearing styles of the generated image.
Our experiments show that WG-VITON effectively generates an image of the model wearing given top and bottom clothes, and creates complicated wearing styles such as partly tucking the top into the bottom.
- Score: 1.9290392443571387
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Studies of virtual try-on (VITON) have shown the effectiveness of
generative neural networks for virtually exploring fashion products, and some
recent VITON research has attempted to synthesize an image of a human wearing
multiple given types of garments (e.g., top and bottom clothes). However, when
replacing the top and bottom clothes of the target human, numerous wearing
styles are possible for a given combination of clothes. In this paper, we
address the problem of variation in wearing style when simultaneously
replacing the top and bottom clothes of the model. We introduce Wearing-Guide
VITON (i.e., WG-VITON), which utilizes an additional input binary mask to
control the wearing style of the generated image. Our experiments show that
WG-VITON effectively generates an image of the model wearing given top and
bottom clothes, and creates complicated wearing styles such as partly tucking
the top into the bottom.
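The paper's code is not reproduced here, but the core mechanism the abstract describes, conditioning the generator on an extra binary wearing-guide mask alongside the person representation and the two garments, can be sketched roughly. The following is a minimal illustrative sketch in PyTorch: the `WearingGuideGenerator` name, channel counts, and layer sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WearingGuideGenerator(nn.Module):
    """Minimal sketch of a try-on generator conditioned on a binary
    wearing-guide mask. Architecture and channel counts are assumptions,
    not the WG-VITON authors' implementation."""

    def __init__(self):
        super().__init__()
        # Inputs: person representation (3 ch), warped top (3 ch),
        # warped bottom (3 ch), wearing-guide mask (1 ch) -> 10 channels.
        self.net = nn.Sequential(
            nn.Conv2d(10, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB try-on result in [-1, 1]
        )

    def forward(self, person, top, bottom, wearing_mask):
        # wearing_mask is binary, e.g. 1 where the top should cover the
        # bottom and 0 where it is tucked in; varying only this mask
        # changes the wearing style for the same person and clothes.
        x = torch.cat([person, top, bottom, wearing_mask], dim=1)
        return self.net(x)

# Example: an all-ones mask requests the top worn over the bottom;
# zeroing pixels around the waist would request a partly tucked-in style.
g = WearingGuideGenerator()
person = torch.randn(1, 3, 256, 192)
top = torch.randn(1, 3, 256, 192)
bottom = torch.randn(1, 3, 256, 192)
mask = torch.ones(1, 1, 256, 192)
print(g(person, top, bottom, mask).shape)  # torch.Size([1, 3, 256, 192])
```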
Related papers
- M&M VTO: Multi-Garment Virtual Try-On and Editing [31.45715245587691]
M&M VTO is a mix-and-match virtual try-on method that takes as input multiple garment images, a text description of the garment layout, and an image of a person.
An example input includes an image of a shirt, an image of a pair of pants, the description "rolled sleeves, shirt tucked in", and an image of a person.
The output is a visualization of how those garments, in the desired layout, would look on the given person.
arXiv Detail & Related papers (2024-06-06T22:46:37Z)
- MV-VTON: Multi-View Virtual Try-On with Diffusion Models [91.71150387151042]
The goal of image-based virtual try-on is to generate an image of the target person naturally wearing the given clothing.
Existing methods focus solely on frontal try-on using frontal clothing.
We introduce Multi-View Virtual Try-On (MV-VTON), which aims to reconstruct dressing results from multiple views using the given clothes.
arXiv Detail & Related papers (2024-04-26T12:27:57Z)
- Better Fit: Accommodate Variations in Clothing Types for Virtual Try-on [25.550019373321653]
Image-based virtual try-on aims to transfer target in-shop clothing to a dressed model image.
We propose an adaptive mask training paradigm that dynamically adjusts training masks.
For unpaired try-on validation, we construct a comprehensive cross-try-on benchmark.
arXiv Detail & Related papers (2024-03-13T12:07:14Z)
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves performance equivalent to the state of the art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- Dressing in the Wild by Watching Dance Videos [69.7692630502019]
This paper attends to virtual try-on in real-world scenes and brings improvements in authenticity and naturalness.
We propose a novel generative network called wFlow that can effectively extend garment transfer to in-the-wild contexts.
arXiv Detail & Related papers (2022-03-29T08:05:45Z)
- Weakly Supervised High-Fidelity Clothing Model Generation [67.32235668920192]
We propose a cheap yet scalable weakly-supervised method called Deep Generative Projection (DGP) to address this specific scenario.
We show that projecting the rough alignment of clothing and body onto the StyleGAN space can yield photo-realistic wearing results.
arXiv Detail & Related papers (2021-12-14T07:15:15Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Data Augmentation using Random Image Cropping for High-resolution Virtual Try-On (VITON-CROP) [18.347532903864597]
VITON-CROP synthesizes images more robustly than existing state-of-the-art virtual try-on models when integrated with random crop augmentation (a generic sketch of this augmentation follows this entry).
In the experiments, we demonstrate that VITON-CROP is superior to VITON-HD both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-11-16T07:40:16Z)
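The random crop augmentation named above can be illustrated generically: the key point is applying the same crop to a person image and its paired inputs so they stay aligned. This is a hedged sketch using torchvision, not the VITON-CROP training pipeline; the `paired_random_crop` helper and the crop size are assumed for illustration.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.transforms import RandomCrop

def paired_random_crop(person_img, paired_img, size=(512, 384)):
    """Apply the same random crop to a person image and a paired input
    (e.g., a segmentation or clothing-agnostic map) so the pair stays
    aligned. Generic sketch, not the VITON-CROP implementation."""
    i, j, h, w = RandomCrop.get_params(person_img, output_size=size)
    return TF.crop(person_img, i, j, h, w), TF.crop(paired_img, i, j, h, w)

# Usage with dummy high-resolution tensors (C x H x W):
person = torch.randn(3, 1024, 768)
parse = torch.randn(3, 1024, 768)
p_crop, s_crop = paired_random_crop(person, parse)
print(p_crop.shape, s_crop.shape)  # both torch.Size([3, 512, 384])
```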
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected within a short time window in which a person's appearance rarely changes.
In real-world applications, such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation for cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.