PG-VTON: A Novel Image-Based Virtual Try-On Method via Progressive
Inference Paradigm
- URL: http://arxiv.org/abs/2304.08956v2
- Date: Tue, 5 Dec 2023 03:04:12 GMT
- Title: PG-VTON: A Novel Image-Based Virtual Try-On Method via Progressive
Inference Paradigm
- Authors: Naiyu Fang, Lemiao Qiu, Shuyou Zhang, Zili Wang, Kerui Hu
- Abstract summary: We propose a novel virtual try-on method via progressive inference paradigm (PGVTON)
We exploit the try-on parsing as the shape guidance and implement the garment try-on via warping-mapping-composition.
Experiments demonstrate that our method achieves state-of-the-art performance under two challenging scenarios.
- Score: 6.929743379017671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual try-on is a promising computer vision topic with a high commercial
value wherein a new garment is visually worn on a person with a photo-realistic
effect. Previous studies conduct shape and content inference in a single stage, employing a single-scale warping mechanism and a relatively unsophisticated content inference mechanism. These approaches lead to suboptimal garment warping and skin preservation under challenging try-on scenarios. To address these limitations, we propose a novel virtual try-on method via a progressive inference paradigm (PGVTON) that leverages a top-down inference pipeline and a general garment try-on strategy.
Specifically, we propose a robust try-on parsing inference method by
disentangling semantic categories and introducing consistency. Exploiting the
try-on parsing as the shape guidance, we implement the garment try-on via
warping-mapping-composition. To facilitate adaptation to a wide range of try-on
scenarios, we adopt a covering-more-and-selecting-one warping strategy and
explicitly distinguish tasks based on alignment. Additionally, we regulate
StyleGAN2 to implement re-naked skin inpainting, conditioned on the target skin
shape and spatial-agnostic skin features. Experiments demonstrate that our
method achieves state-of-the-art performance under two challenging scenarios. The
code will be available at https://github.com/NerdFNY/PGVTON.
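The abstract's most concrete algorithmic claims are the covering-more-and-selecting-one warping strategy and the warping-mapping-composition try-on step. The following PyTorch sketch illustrates one plausible reading of those ideas: warp the garment with several candidate fields, keep the warp whose mask best covers the parsing-derived target shape, then compose the result with the person image and an inpainted skin estimate. The candidate grids, the IoU-based selection score, and the mask-blending composition are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def warp(x, grid):
    # Apply one candidate warping field, given as an (N, H, W, 2) sampling grid.
    return F.grid_sample(x, grid, mode="bilinear", align_corners=False)

def mask_iou(a, b, eps=1e-6):
    # Alignment score between the warped garment mask and the target parsing mask.
    inter = (a * b).sum(dim=(1, 2, 3))
    union = (a + b).clamp(max=1.0).sum(dim=(1, 2, 3))
    return (inter / (union + eps)).mean()

def cover_more_select_one(garment, garment_mask, target_mask, grids):
    # "Covering more and selecting one" (assumed reading): try every candidate
    # warp, keep the single one whose mask best matches the target shape.
    best = None
    for grid in grids:
        g, m = warp(garment, grid), warp(garment_mask, grid)
        score = mask_iou(m, target_mask)
        if best is None or score > best[0]:
            best = (score, g, m)
    return best[1], best[2]

def compose(person, warped_garment, warped_mask, skin_fill, skin_mask):
    # Warping-mapping-composition reduced to mask-guided blending: paste the
    # selected warp onto the person, then fill re-naked skin regions with an
    # inpainted skin estimate (the paper conditions a StyleGAN2 for this step).
    out = person * (1 - warped_mask) + warped_garment * warped_mask
    return out * (1 - skin_mask) + skin_fill * skin_mask

if __name__ == "__main__":
    n, c, h, w = 1, 3, 64, 48
    person, garment = torch.rand(n, c, h, w), torch.rand(n, c, h, w)
    gmask, target = torch.ones(n, 1, h, w), torch.ones(n, 1, h, w)
    # Identity grid plus small perturbations as stand-in candidate warps.
    theta = torch.eye(2, 3).unsqueeze(0)
    base = F.affine_grid(theta, (n, c, h, w), align_corners=False)
    grids = [base + 0.02 * torch.randn_like(base) for _ in range(4)]
    wg, wm = cover_more_select_one(garment, gmask, target, grids)
    result = compose(person, wg, wm, torch.rand(n, c, h, w), torch.zeros(n, 1, h, w))
    print(result.shape)  # torch.Size([1, 3, 64, 48])
```

In practice the candidate warps would come from learned warping networks and the skin estimate from the conditioned StyleGAN2; the selection-by-coverage loop is the part the "covering more and selecting one" phrase most directly suggests.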
Related papers
- High-Fidelity Virtual Try-on with Large-Scale Unpaired Learning [36.7085107012134]
Virtual try-on (VTON) transfers a target clothing image to a reference person, where clothing fidelity is a key requirement for downstream e-commerce applications.
We propose a novel framework, Boosted Virtual Try-on (BVTON), that leverages large-scale unpaired learning for high-fidelity try-on.
arXiv Detail & Related papers (2024-11-03T15:00:26Z)
- GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon [5.790630195329777]
We introduce a novel graph-based warping technique that emphasizes the value of context in garment flow.
Our method, validated on VITON-HD and Dresscode datasets, showcases substantial improvement in garment warping, texture preservation, and overall realism.
arXiv Detail & Related papers (2024-06-04T10:29:18Z)
- Wear-Any-Way: Manipulable Virtual Try-on via Sparse Correspondence Alignment [8.335876030647118]
Wear-Any-Way is a customizable solution for virtual try-on.
We first construct a strong pipeline for standard virtual try-on, supporting single/multiple garment try-on and model-to-model settings.
We propose sparse correspondence alignment, which uses point-based control to guide generation at specific locations.
arXiv Detail & Related papers (2024-03-19T17:59:52Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- Single Stage Warped Cloth Learning and Semantic-Contextual Attention Feature Fusion for Virtual TryOn [5.790630195329777]
Image-based virtual try-on aims to fit an in-shop garment onto a clothed person image.
Garment warping, which aligns the target garment with the corresponding body parts in the person image, is a crucial step in achieving this goal.
We propose a novel single-stage framework that implicitly learns garment warping without explicit multi-stage learning.
arXiv Detail & Related papers (2023-10-08T06:05:01Z)
- Style-Based Global Appearance Flow for Virtual Try-On [119.95115739956661]
A novel global appearance flow estimation model is proposed in this work.
Experiment results on a popular virtual try-on benchmark show that our method achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-04-03T10:58:04Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- Cloth Interactive Transformer for Virtual Try-On [106.21605249649957]
We propose a novel two-stage cloth interactive transformer (CIT) method for the virtual try-on task.
In the first stage, we design a CIT matching block, aiming to precisely capture the long-range correlations between the cloth-agnostic person information and the in-shop cloth information (a generic cross-attention sketch of this kind of matching appears after this list).
In the second stage, we put forth a CIT reasoning block for establishing global mutual interactive dependencies among person representation, the warped clothing item, and the corresponding warped cloth mask.
arXiv Detail & Related papers (2021-04-12T14:45:32Z)
- CharacterGAN: Few-Shot Keypoint Character Animation and Reposing [64.19520387536741]
We introduce CharacterGAN, a generative model that can be trained on only a few samples of a given character.
Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback.
We show that our approach outperforms recent baselines and creates realistic animations for diverse characters.
arXiv Detail & Related papers (2021-02-05T12:38:15Z)
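The CIT matching block flagged above is described as capturing long-range correlations between cloth-agnostic person features and in-shop cloth features. A standard mechanism for that kind of cross-stream correlation is cross-attention; the sketch below is a generic cross-attention layer offered under that assumption, not the actual CIT block, and all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class CrossAttentionMatch(nn.Module):
    # Person tokens attend to cloth tokens: one plausible stand-in for a
    # matching block that models long-range person-cloth correlations.
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, person_feat, cloth_feat):
        # person_feat: (B, N_p, C) cloth-agnostic person tokens
        # cloth_feat:  (B, N_c, C) in-shop cloth tokens
        attended, _ = self.attn(person_feat, cloth_feat, cloth_feat)
        return self.norm(person_feat + attended)  # residual + norm

person = torch.rand(2, 192, 256)  # e.g. a 12x16 feature map, flattened
cloth = torch.rand(2, 192, 256)
print(CrossAttentionMatch()(person, cloth).shape)  # torch.Size([2, 192, 256])
```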
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.