PASTA-GAN++: A Versatile Framework for High-Resolution Unpaired Virtual
Try-on
- URL: http://arxiv.org/abs/2207.13475v1
- Date: Wed, 27 Jul 2022 11:47:49 GMT
- Title: PASTA-GAN++: A Versatile Framework for High-Resolution Unpaired Virtual
Try-on
- Authors: Zhenyu Xie, Zaiyu Huang, Fuwei Zhao, Haoye Dong, Michael Kampffmeyer,
Xin Dong, Feida Zhu, Xiaodan Liang
- Abstract summary: PASTA-GAN++ is a versatile system for high-resolution unpaired virtual try-on.
It supports unsupervised training, arbitrary garment categories, and controllable garment editing.
- Score: 70.12285433529998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based virtual try-on is one of the most promising applications of
human-centric image generation due to its tremendous real-world potential. In
this work, we take a step forward to explore versatile virtual try-on
solutions, which we argue should possess three main properties, namely, they
should support unsupervised training, arbitrary garment categories, and
controllable garment editing. To this end, we propose a
characteristic-preserving end-to-end network, the PAtch-routed
SpaTially-Adaptive GAN++ (PASTA-GAN++), to achieve a versatile system for
high-resolution unpaired virtual try-on. Specifically, our PASTA-GAN++ consists
of an innovative patch-routed disentanglement module to decouple the intact
garment into normalized patches, which is capable of retaining garment style
information while eliminating the garment spatial information, thus alleviating
the overfitting issue during unsupervised training. Furthermore, PASTA-GAN++
introduces a patch-based garment representation and a patch-guided parsing
synthesis block, allowing it to handle arbitrary garment categories and support
local garment editing. Finally, to obtain try-on results with realistic texture
details, PASTA-GAN++ incorporates a novel spatially-adaptive residual module to
inject the coarse warped garment feature into the generator. Extensive
experiments on our newly collected UnPaired virtual Try-on (UPT) dataset
demonstrate the superiority of PASTA-GAN++ over existing SOTAs and its ability
for controllable garment editing.
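The spatially-adaptive residual module is described only at a high level in the abstract; its conditional-normalization idea can be illustrated with a short PyTorch sketch. Below, a residual block predicts per-location scale and shift maps from the coarse warped garment feature and uses them to modulate normalized generator activations. All names (SpatiallyAdaptiveNorm, SpatiallyAdaptiveResBlock, cond_channels) and shapes are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of a SPADE-style spatially-adaptive residual block
# (assumed design, not the authors' code): per-location scale/shift
# maps are predicted from the coarse warped garment feature and used
# to modulate the normalized generator activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyAdaptiveNorm(nn.Module):
    """Normalize x, then modulate it with scale/shift maps predicted
    from the warped garment feature (hypothetical module name)."""
    def __init__(self, channels, cond_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, garment_feat):
        # Match the conditioning feature to the resolution of x.
        cond = F.interpolate(garment_feat, size=x.shape[2:], mode='nearest')
        cond = self.shared(cond)
        return self.norm(x) * (1 + self.gamma(cond)) + self.beta(cond)

class SpatiallyAdaptiveResBlock(nn.Module):
    """Residual block whose two normalization layers are both
    conditioned on the coarse warped garment feature."""
    def __init__(self, channels, cond_channels):
        super().__init__()
        self.norm1 = SpatiallyAdaptiveNorm(channels, cond_channels)
        self.norm2 = SpatiallyAdaptiveNorm(channels, cond_channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, garment_feat):
        h = self.conv1(F.relu(self.norm1(x, garment_feat)))
        h = self.conv2(F.relu(self.norm2(h, garment_feat)))
        return x + h  # residual injection of garment texture cues

# Usage: 64-channel generator features modulated by a 16-channel
# coarse warped garment feature map.
block = SpatiallyAdaptiveResBlock(channels=64, cond_channels=16)
x = torch.randn(1, 64, 32, 32)
g = torch.randn(1, 16, 32, 32)
print(block(x, g).shape)  # torch.Size([1, 64, 32, 32])
```
Predicting modulation maps from the garment feature, rather than simply concatenating it, keeps the injected texture cues spatially aligned with the generator activations while leaving the backbone unchanged.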
Related papers
- DH-VTON: Deep Text-Driven Virtual Try-On via Hybrid Attention Learning [6.501730122478447]
DH-VTON is a deep text-driven virtual try-on model featuring a special hybrid attention learning strategy and deep garment semantic preservation module.
To extract the deep semantics of the garments, we first introduce InternViT-6B as a fine-grained feature learner, which can be trained to align with large-scale intrinsic knowledge.
To enhance the customized dressing abilities, we further introduce the Garment-Feature ControlNet Plus (GFC+) module.
arXiv Detail & Related papers (2024-10-16T12:27:10Z)
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 targets a customizable virtual dressing task: generating freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet (see the sketch after this list).
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario [50.62711489896909]
AnyFit surpasses all baselines on high-resolution benchmarks and real-world data by a large margin.
Its impressive performance on high-fidelity virtual try-on in any scenario, from any image, paves a new path for future research within the fashion community.
arXiv Detail & Related papers (2024-05-28T13:33:08Z)
- StableGarment: Garment-Centric Generation via Stable Diffusion [29.5112874761836]
We introduce StableGarment, a unified framework to tackle garment-centric (GC) generation tasks.
Our solution involves the development of a garment encoder, a trainable copy of the denoising UNet equipped with additive self-attention layers.
The incorporation of a dedicated try-on ControlNet enables StableGarment to execute virtual try-on tasks with precision.
arXiv Detail & Related papers (2024-03-16T03:05:07Z)
- GP-VTON: Towards General Purpose Virtual Try-on via Collaborative Local-Flow Global-Parsing Learning [63.8668179362151]
Virtual try-on aims to transfer an in-shop garment onto a specific person.
Existing methods employ a global warping module to model the anisotropic deformation for different garment parts.
We propose an innovative Local-Flow Global-Parsing (LFGP) warping module and a Dynamic Gradient Truncation (DGT) training strategy.
arXiv Detail & Related papers (2023-03-24T02:12:29Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
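The hybrid attention module mentioned in the IMAGDressing-v1 entry above (a frozen self-attention branch plus a trainable cross-attention branch over garment features) can be rendered as a minimal sketch. This is an assumed PyTorch illustration; the class name HybridAttention, the additive combination rule, and all dimensions are guesses rather than details taken from that paper.
```python
# Assumed sketch of a hybrid attention block: a frozen self-attention
# branch plus a trainable cross-attention branch over garment tokens.
# Names, shapes, and the additive combination are illustrative guesses.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Freeze the self-attention branch; only cross-attention trains.
        for p in self.self_attn.parameters():
            p.requires_grad = False

    def forward(self, x, garment_tokens):
        h, _ = self.self_attn(x, x, x)  # frozen branch
        g, _ = self.cross_attn(x, garment_tokens, garment_tokens)  # trainable branch
        return x + h + g

attn = HybridAttention(dim=320)
x = torch.randn(2, 64, 320)        # denoising-UNet tokens (assumed shape)
garment = torch.randn(2, 77, 320)  # garment-UNet features (assumed shape)
print(attn(x, garment).shape)      # torch.Size([2, 64, 320])
```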