GarmentGAN: Photo-realistic Adversarial Fashion Transfer
- URL: http://arxiv.org/abs/2003.01894v1
- Date: Wed, 4 Mar 2020 05:01:15 GMT
- Title: GarmentGAN: Photo-realistic Adversarial Fashion Transfer
- Authors: Amir Hossein Raffiee, Michael Sollami
- Abstract summary: GarmentGAN performs image-based garment transfer through generative adversarial methods.
The framework allows users to virtually try on items before purchase and generalizes to various apparel types.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The garment transfer problem comprises two tasks: learning to separate a
person's body (pose, shape, color) from their clothing (garment type, shape,
style) and then generating new images of the wearer dressed in arbitrary
garments. We present GarmentGAN, a new algorithm that performs image-based
garment transfer through generative adversarial methods. The GarmentGAN
framework allows users to virtually try on items before purchase and
generalizes to various apparel types. GarmentGAN requires as input only two
images, namely, a picture of the target fashion item and an image containing
the customer. The output is a synthetic image wherein the customer is wearing
the target apparel. To make the generated image look photo-realistic, we
employ novel generative adversarial techniques. GarmentGAN
improves on existing methods in the realism of generated imagery and solves
various problems related to self-occlusions. Our proposed model incorporates
additional information during training, utilizing both segmentation maps and
body key-point information. We show qualitative and quantitative comparisons to
several other networks to demonstrate the effectiveness of this technique.
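
To make the stated interface concrete, the following is a minimal PyTorch sketch of a generator with the inputs the abstract describes: a person image, a garment image, and (during training) a human-parsing map plus body key-point heatmaps. Every module, channel count, and name here is an illustrative assumption, not the authors' architecture.

```python
# Minimal, illustrative sketch of a GarmentGAN-style generator (not the
# authors' code): person image, target garment image, a human-parsing
# segmentation map, and body key-point heatmaps are fused and decoded
# into a synthetic image of the person wearing the garment.
import torch
import torch.nn as nn

class GarmentTransferGenerator(nn.Module):
    def __init__(self, seg_classes=20, num_keypoints=18):
        super().__init__()
        # 3 (person RGB) + 3 (garment RGB) + parsing channels + key-point heatmaps
        in_ch = 3 + 3 + seg_classes + num_keypoints
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, person, garment, parsing, keypoints):
        x = torch.cat([person, garment, parsing, keypoints], dim=1)
        return self.decoder(self.encoder(x))

# At inference only the two images are user-provided; parsing and key-points
# can come from off-the-shelf estimators, as is common in try-on pipelines.
gen = GarmentTransferGenerator()
person = torch.randn(1, 3, 256, 192)
garment = torch.randn(1, 3, 256, 192)
parsing = torch.randn(1, 20, 256, 192)
keypoints = torch.randn(1, 18, 256, 192)
fake = gen(person, garment, parsing, keypoints)  # (1, 3, 256, 192)
```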
Related papers
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 addresses the virtual dressing task of generating freely editable human images with a fixed garment and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from a VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
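
The hybrid attention module described above pairs a frozen self-attention with a trainable cross-attention over garment-UNet features. Below is a minimal sketch of that pattern, assuming token-shaped features and invented dimensions; it is not the IMAGDressing-v1 code.

```python
# Illustrative hybrid attention block in the spirit of IMAGDressing-v1:
# a frozen self-attention (from the pre-trained denoising UNet) plus a
# trainable cross-attention that injects garment-UNet features.
# Shapes and names are assumptions for illustration only.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Freeze the pre-trained self-attention; only the cross path trains.
        for p in self.self_attn.parameters():
            p.requires_grad = False

    def forward(self, hidden, garment_feats):
        # hidden: (B, N, dim) tokens of the denoising UNet
        # garment_feats: (B, M, dim) tokens from the garment UNet
        h, _ = self.self_attn(hidden, hidden, hidden)
        hidden = hidden + h
        g, _ = self.cross_attn(hidden, garment_feats, garment_feats)
        return hidden + g

block = HybridAttention()
out = block(torch.randn(2, 64, 320), torch.randn(2, 77, 320))  # (2, 64, 320)
```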
- Masked Extended Attention for Zero-Shot Virtual Try-On In The Wild [17.025262797698364]
Virtual Try-On aims to replace a garment in one image with a garment from another, while preserving person and garment characteristics as well as image fidelity.
Current literature takes a supervised approach for the task, impairing generalization and imposing heavy computation.
We present a novel zero-shot training-free method for inpainting a clothing garment by reference.
arXiv Detail & Related papers (2024-06-21T17:45:37Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On [35.227896906556026]
Given a clothing image and a person image, an image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image.
In this work, we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task.
Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by utilizing the inherent knowledge of the pre-trained model in the warping process.
arXiv Detail & Related papers (2023-12-04T08:27:59Z)
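
A "zero cross-attention" block is typically realized by gating the cross-attention output with a zero-initialized projection, so the block starts as an identity and the pre-trained diffusion model is initially undisturbed. Here is a sketch of that pattern under assumed shapes; it is not the StableVITON release.

```python
# Sketch of a zero-initialized cross-attention block (ControlNet-style
# "zero" gating), consistent with the idea described above; the details
# are assumptions, not StableVITON's actual implementation.
import torch
import torch.nn as nn

class ZeroCrossAttention(nn.Module):
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.zero_proj = nn.Linear(dim, dim)
        # Zero init: the block is an identity map at the start of training,
        # preserving the pre-trained diffusion model's behavior.
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, hidden, clothing_feats):
        # hidden attends to clothing features to learn semantic correspondence
        attn_out, _ = self.attn(hidden, clothing_feats, clothing_feats)
        return hidden + self.zero_proj(attn_out)

blk = ZeroCrossAttention()
y = blk(torch.randn(2, 64, 320), torch.randn(2, 96, 320))  # (2, 64, 320)
```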
- TryOnDiffusion: A Tale of Two UNets [46.54704157349114]
Given two images depicting a person and a garment worn by another person, our goal is to generate a visualization of how the garment might look on the input person.
A key challenge is to synthesize a detail-preserving visualization of the garment, while warping the garment to accommodate a significant body pose and shape change.
We propose a diffusion-based architecture that unifies two UNets (referred to as Parallel-UNet).
arXiv Detail & Related papers (2023-06-14T06:25:58Z)
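
One way to read the two-UNet design is a person branch that queries a garment branch via cross-attention, so "warping" happens implicitly inside attention rather than through an explicit flow field. A toy sketch of that intuition follows, with all sizes invented; it should not be mistaken for Parallel-UNet itself.

```python
# Toy sketch of the Parallel-UNet intuition: a garment branch and a person
# branch run side by side, and the person branch attends to garment tokens
# (implicit warping). Purely illustrative; not the TryOnDiffusion model.
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.person = nn.Linear(dim, dim)
        self.garment = nn.Linear(dim, dim)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, person_tok, garment_tok):
        p = torch.relu(self.person(person_tok))
        g = torch.relu(self.garment(garment_tok))
        # Cross-attention lets each person token pull detail from any
        # garment location, accommodating large pose and shape changes.
        warped, _ = self.cross(p, g, g)
        return p + warped, g

blk = ParallelBlock()
p, g = blk(torch.randn(1, 192, 128), torch.randn(1, 192, 128))
```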
- Per Garment Capture and Synthesis for Real-time Virtual Try-on [15.128477359632262]
Existing image-based works try to synthesize a try-on image from a single image of a target garment.
This makes it difficult to reproduce the change of wrinkles caused by pose and body-size changes, as well as the pulling and stretching of the garment by hand.
We propose an alternative per garment capture and synthesis workflow to handle such rich interactions by training the model with many systematically captured images.
arXiv Detail & Related papers (2021-09-10T03:49:37Z)
- Toward Accurate and Realistic Outfits Visualization with Attention to Details [10.655149697873716]
We propose Outfit Visualization Net (OVNet) to capture the important visual details necessary for commercial applications.
OVNet consists of 1) a semantic layout generator and 2) an image generation pipeline using multiple coordinated warps.
An interactive interface powered by this method has been deployed on fashion e-commerce websites and received overwhelmingly positive feedback.
arXiv Detail & Related papers (2021-06-11T19:53:34Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is critical to learn an apparel-invariant person representation for cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- XingGAN for Person Image Generation [149.54517767056382]
We propose a novel Generative Adversarial Network (XingGAN) for person image generation tasks.
XingGAN consists of two generation branches that model the person's appearance and shape information.
We show that the proposed XingGAN advances the state-of-the-art performance in terms of objective quantitative scores and subjective visual realness.
arXiv Detail & Related papers (2020-07-17T23:40:22Z)
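
As a rough picture of the two-branch idea, the sketch below lets an appearance stream and a shape stream update one another inside a crossing block; the layer choices and fusion rule are assumptions made for illustration, not XingGAN's blocks.

```python
# Illustrative two-branch crossing block in the spirit of XingGAN: an
# appearance stream and a shape stream repeatedly exchange information.
# All layer choices are assumptions made for this sketch.
import torch
import torch.nn as nn

class CrossingBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.app = nn.Conv2d(ch * 2, ch, 3, padding=1)  # appearance update
        self.shp = nn.Conv2d(ch * 2, ch, 3, padding=1)  # shape update

    def forward(self, appearance, shape):
        # Each branch is refined using the other branch's current features.
        a = torch.relu(self.app(torch.cat([appearance, shape], dim=1)))
        s = torch.relu(self.shp(torch.cat([shape, a], dim=1)))
        return a, s

blk = CrossingBlock()
a, s = blk(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```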
- Towards Photo-Realistic Virtual Try-On by Adaptively Generating$\leftrightarrow$Preserving Image Content [85.24260811659094]
We propose a novel visual try-on network, namely Adaptive Content Generating and Preserving Network (ACGPN).
ACGPN first predicts the semantic layout of the reference image that will be changed after try-on.
Second, a clothes warping module warps clothing images according to the generated semantic layout.
Third, an inpainting module for content fusion integrates all information (e.g., reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body.
arXiv Detail & Related papers (2020-03-12T15:55:39Z)
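
Read as a pipeline, the three ACGPN stages compose in sequence: predict the post-try-on semantic layout, warp the clothes to it, then fuse everything by inpainting. The skeleton below wires placeholder modules in that order; only the dataflow follows the abstract, and every module is a stub rather than the ACGPN code.

```python
# Skeleton of the three-stage dataflow described above (layout prediction ->
# clothes warping -> content fusion). The stage modules are placeholders;
# only the wiring follows the abstract.
import torch
import torch.nn as nn

class TryOnPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.layout_gen = nn.Conv2d(3 + 20, 20, 3, padding=1)  # stage 1 stub
        self.warp = nn.Conv2d(3 + 20, 3, 3, padding=1)         # stage 2 stub
        self.fuse = nn.Conv2d(3 + 3 + 20, 3, 3, padding=1)     # stage 3 stub

    def forward(self, reference, parsing, clothes):
        # 1) predict the semantic layout after try-on
        layout = torch.softmax(self.layout_gen(
            torch.cat([reference, parsing], dim=1)), dim=1)
        # 2) warp the clothing image according to the predicted layout
        warped = self.warp(torch.cat([clothes, layout], dim=1))
        # 3) inpaint/fuse reference, warped clothes, and layout into the output
        return self.fuse(torch.cat([reference, warped, layout], dim=1))

pipe = TryOnPipeline()
out = pipe(torch.randn(1, 3, 256, 192), torch.randn(1, 20, 256, 192),
           torch.randn(1, 3, 256, 192))  # (1, 3, 256, 192)
```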
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.