PFDM: Parser-Free Virtual Try-on via Diffusion Model
- URL: http://arxiv.org/abs/2402.03047v1
- Date: Mon, 5 Feb 2024 14:32:57 GMT
- Title: PFDM: Parser-Free Virtual Try-on via Diffusion Model
- Authors: Yunfang Niu, Dong Yi, Lingxiang Wu, Zhiwei Liu, Pengxiang Cai, Jinqiao Wang
- Abstract summary: We propose a parser-free virtual try-on method based on the diffusion model (PFDM).
Given two images, PFDM can "wear" garments on the target person seamlessly by implicitly warping without any other information.
Experiments demonstrate that our proposed PFDM can successfully handle complex cases, synthesize high-fidelity images, and outperform both state-of-the-art parser-free and parser-based models.
- Score: 28.202996582963184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual try-on can significantly improve the garment shopping experience in
both online and in-store scenarios, attracting broad interest in computer
vision. However, to achieve high-fidelity try-on performance, most
state-of-the-art methods still rely on accurate segmentation masks, which are
often produced by near-perfect parsers or manual labeling. To overcome the
bottleneck, we propose a parser-free virtual try-on method based on the
diffusion model (PFDM). Given two images, PFDM can "wear" garments on the
target person seamlessly by implicitly warping without any other information.
To learn the model effectively, we synthesize many pseudo-images and construct
sample pairs by wearing various garments on persons. Supervised by the
large-scale expanded dataset, we fuse the person and garment features using a
proposed Garment Fusion Attention (GFA) mechanism. Experiments demonstrate that
our proposed PFDM can successfully handle complex cases, synthesize
high-fidelity images, and outperform both state-of-the-art parser-free and
parser-based models.
Related papers
- Improving Virtual Try-On with Garment-focused Diffusion Models [91.95830983115474]
Diffusion models have revolutionized generative modeling across numerous image synthesis tasks.
We design a new diffusion model, GarDiff, which drives a garment-focused diffusion process.
Experiments on VITON-HD and DressCode datasets demonstrate the superiority of our GarDiff when compared to state-of-the-art VTON approaches.
arXiv Detail & Related papers (2024-09-12T17:55:11Z) - IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 addresses a virtual dressing task: generating freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from a VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
arXiv Detail & Related papers (2024-07-17T16:26:30Z) - AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario [50.62711489896909]
AnyFit surpasses all baselines on high-resolution benchmarks and real-world data by a large margin.
AnyFit's impressive performance on high-fidelity virtual try-on in any scenario from any image paves a new path for future research within the fashion community.
arXiv Detail & Related papers (2024-05-28T13:33:08Z) - Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On [29.217423805933727]
Diffusion model-based approaches have recently become popular, as they are excellent at image synthesis tasks.
We propose a Texture-Preserving Diffusion (TPD) model for virtual try-on, which enhances the fidelity of the results.
We also propose a novel diffusion-based method that predicts a precise inpainting mask from the person and reference garment images.
arXiv Detail & Related papers (2024-04-01T12:43:22Z) - Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z) - OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on [7.46772222515689]
OOTDiffusion is a novel network architecture for realistic and controllable image-based virtual try-on.
We leverage the power of pretrained latent diffusion models, designing an outfitting UNet to learn the garment detail features.
Our experiments on the VITON-HD and Dress Code datasets demonstrate that OOTDiffusion efficiently generates high-quality try-on results.
arXiv Detail & Related papers (2024-03-04T07:17:44Z) - Single Stage Warped Cloth Learning and Semantic-Contextual Attention Feature Fusion for Virtual TryOn [5.790630195329777]
Image-based virtual try-on aims to fit an in-shop garment onto a clothed person image.
Garment warping, which aligns the target garment with the corresponding body parts in the person image, is a crucial step in achieving this goal.
We propose a novel single-stage framework that learns garment warping implicitly, without explicit multi-stage learning.
arXiv Detail & Related papers (2023-10-08T06:05:01Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotations.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - RMGN: A Regional Mask Guided Network for Parser-free Virtual Try-on [23.198926150193472]
Virtual try-on (VTON) aims to fit target clothes onto reference person images and is widely adopted in e-commerce.
Existing VTON approaches can be narrowly categorized into Parser-Based (PB) and Parser-Free (PF) methods.
We propose a novel PF method named Regional Mask Guided Network (RMGN).
arXiv Detail & Related papers (2022-04-24T12:30:13Z) - Cloth Interactive Transformer for Virtual Try-On [106.21605249649957]
We propose a novel two-stage cloth interactive transformer (CIT) method for the virtual try-on task.
In the first stage, we design a CIT matching block, aiming to precisely capture the long-range correlations between the cloth-agnostic person information and the in-shop cloth information.
In the second stage, we put forth a CIT reasoning block for establishing global mutual interactive dependencies among person representation, the warped clothing item, and the corresponding warped cloth mask.
arXiv Detail & Related papers (2021-04-12T14:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.