Towards Scalable Unpaired Virtual Try-On via Patch-Routed
Spatially-Adaptive GAN
- URL: http://arxiv.org/abs/2111.10544v1
- Date: Sat, 20 Nov 2021 08:36:12 GMT
- Title: Towards Scalable Unpaired Virtual Try-On via Patch-Routed
Spatially-Adaptive GAN
- Authors: Zhenyu Xie and Zaiyu Huang and Fuwei Zhao and Haoye Dong and Michael
Kampffmeyer and Xiaodan Liang
- Abstract summary: We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
- Score: 66.3650689395967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based virtual try-on is one of the most promising applications of
human-centric image generation due to its tremendous real-world potential. Yet,
as most try-on approaches fit in-shop garments onto a target person, they
require the laborious and restrictive construction of a paired training
dataset, severely limiting their scalability. While a few recent works attempt
to transfer garments directly from one person to another, alleviating the need
to collect paired datasets, their performance is impacted by the lack of paired
(supervised) information. In particular, disentangling style and spatial
information of the garment becomes a challenge, which existing methods either
address by requiring auxiliary data or extensive online optimization
procedures, thereby still inhibiting their scalability. To achieve a
\emph{scalable} virtual try-on system that can transfer arbitrary garments
between a source and a target person in an unsupervised manner, we thus propose
a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive
GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
Specifically, to disentangle the style and spatial information of each garment,
PASTA-GAN consists of an innovative patch-routed disentanglement module for
successfully retaining garment texture and shape characteristics. Guided by the
source person keypoints, the patch-routed disentanglement module first
decouples garments into normalized patches, thus eliminating the inherent
spatial information of the garment, and then reconstructs the normalized
patches to the warped garment complying with the target person pose. Given the
warped garment, PASTA-GAN further introduces novel spatially-adaptive residual
blocks that guide the generator to synthesize more realistic garment details.
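The pipeline sketched in the abstract (keypoint-guided patch normalization, routing of patches to the target pose, and spatially-adaptive conditioning on the warped garment) can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: patches are plain axis-aligned crops, the learned modulation convolutions of the spatially-adaptive residual blocks are replaced by fixed linear maps, and all function names are ours.

```python
import numpy as np

def normalize_patch(garment, center, size):
    """Crop a square patch centred on a source keypoint, discarding its
    original spatial context (a stand-in for a "normalized patch")."""
    y, x = center
    h = size // 2
    return garment[y - h:y + h, x - h:x + h].copy()

def route_patches(garment, src_kpts, tgt_kpts, patch_size, out_shape):
    """Crop keypoint-centred patches from the source garment and paste
    them at the corresponding target-pose keypoints, yielding a crude
    warped garment that follows the target pose."""
    warped = np.zeros(out_shape, dtype=garment.dtype)
    h = patch_size // 2
    for (sy, sx), (ty, tx) in zip(src_kpts, tgt_kpts):
        warped[ty - h:ty + h, tx - h:tx + h] = normalize_patch(
            garment, (sy, sx), patch_size)
    return warped

def spatially_adaptive_residual(features, warped_garment):
    """SPADE-style modulation: per-pixel scale and shift derived from the
    warped garment condition the generator features.  The fixed linear
    maps below stand in for learned convolutions."""
    gamma = 1.0 + 0.1 * warped_garment   # stand-in for a learned conv
    beta = 0.1 * warped_garment          # stand-in for a learned conv
    return features + gamma * features + beta  # residual connection
```

In the actual method the normalization also removes rotation and scale, and both the routing and the modulation parameters are learned; the sketch only shows how keypoint-anchored patches decouple garment texture from its original spatial layout.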
Related papers
- High-Fidelity Virtual Try-on with Large-Scale Unpaired Learning [36.7085107012134]
Virtual try-on (VTON) transfers a target clothing image to a reference person, where clothing fidelity is a key requirement for downstream e-commerce applications.
We propose a novel framework, Boosted Virtual Try-on (BVTON), that leverages large-scale unpaired learning for high-fidelity try-on.
arXiv Detail & Related papers (2024-11-03T15:00:26Z)
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 is a virtual dressing task that generates freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon [5.790630195329777]
We introduce a novel graph based warping technique which emphasizes the value of context in garment flow.
Our method, validated on VITON-HD and Dresscode datasets, showcases substantial improvement in garment warping, texture preservation, and overall realism.
arXiv Detail & Related papers (2024-06-04T10:29:18Z)
- GP-VTON: Towards General Purpose Virtual Try-on via Collaborative Local-Flow Global-Parsing Learning [63.8668179362151]
Virtual try-on aims to transfer an in-shop garment onto a specific person.
Existing methods employ a global warping module to model the anisotropic deformation for different garment parts.
We propose an innovative Local-Flow Global-Parsing (LFGP) warping module and a Dynamic Gradient Truncation (DGT) training strategy.
arXiv Detail & Related papers (2023-03-24T02:12:29Z)
- PASTA-GAN++: A Versatile Framework for High-Resolution Unpaired Virtual Try-on [70.12285433529998]
PASTA-GAN++ is a versatile system for high-resolution unpaired virtual try-on.
It supports unsupervised training, arbitrary garment categories, and controllable garment editing.
arXiv Detail & Related papers (2022-07-27T11:47:49Z)
- Style-Based Global Appearance Flow for Virtual Try-On [119.95115739956661]
A novel global appearance flow estimation model is proposed in this work.
Experiment results on a popular virtual try-on benchmark show that our method achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-04-03T10:58:04Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- SPG-VTON: Semantic Prediction Guidance for Multi-pose Virtual Try-on [27.870740623131816]
Image-based virtual try-on is challenging because it must fit target in-shop clothes onto a reference person under diverse human poses.
We propose an end-to-end Semantic Prediction Guidance multi-pose Virtual Try-On Network (SPG-VTON).
We evaluate the proposed method on the largest multi-pose dataset (MPV) and the DeepFashion dataset.
arXiv Detail & Related papers (2021-08-03T15:40:50Z)
- Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates information from the model and the clothing to generate a warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.