OccluMix: Towards De-Occlusion Virtual Try-on by Semantically-Guided
Mixup
- URL: http://arxiv.org/abs/2301.00965v1
- Date: Tue, 3 Jan 2023 06:29:11 GMT
- Title: OccluMix: Towards De-Occlusion Virtual Try-on by Semantically-Guided
Mixup
- Authors: Zhijing Yang, Junyang Chen, Yukai Shi, Hao Li, Tianshui Chen, Liang
Lin
- Abstract summary: Image virtual try-on aims to replace the clothes on a person
image with an in-shop garment image. Prior methods successfully preserve the
character of clothing images; however, occlusion remains a pernicious effect for
realistic virtual try-on.
- Score: 79.3118064406151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image Virtual try-on aims at replacing the cloth on a personal image with a
garment image (in-shop clothes), which has attracted increasing attention from
the multimedia and computer vision communities. Prior methods successfully
preserve the character of clothing images; however, occlusion remains a
pernicious effect for realistic virtual try-on. In this work, we first present
a comprehensive analysis of the occlusions and categorize them into two
aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in
the try-on image; ii) Acquired-Occlusion: the target cloth warps onto
unreasonable body parts. Based on the in-depth analysis, we find that the
occlusions can be simulated by a novel semantically-guided mixup module, which
can generate semantic-specific occluded images that work together with the
try-on images to facilitate training a de-occlusion try-on (DOC-VTON)
framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing
on the try-on person. Guided by semantics and pose priors, textures of various
complexity are selectively blended with human parts in a copy-and-paste manner.
Then, a Generative Module (GM) synthesizes the final try-on image and learns
de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves
better perceptual quality by reducing occlusion effects.
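The occlusion simulation described above can be sketched as a semantic-guided copy-and-paste mixup: given a try-on image and its parsing map, an occluding texture is pasted onto the pixels of a chosen body-part label. The following NumPy sketch is an illustrative assumption of that operation, not the paper's actual implementation; the function name, labels, and the `alpha` blend factor are hypothetical.

```python
import numpy as np

def semantic_mixup(tryon_img, parse_map, texture, part_id, alpha=1.0):
    """Blend `texture` onto the pixels of `tryon_img` whose semantic label
    in `parse_map` equals `part_id`, simulating an occlusion.

    tryon_img : (H, W, 3) float array, the synthesized try-on image
    parse_map : (H, W) int array, per-pixel semantic labels
    texture   : (H, W, 3) float array, occluding texture (e.g. ghost cloth)
    part_id   : label of the body part to occlude
    alpha     : blend strength (1.0 = hard copy-and-paste)
    """
    # binary mask of the target semantic region, broadcast over channels
    mask = (parse_map == part_id).astype(tryon_img.dtype)[..., None]
    return (1 - alpha * mask) * tryon_img + alpha * mask * texture

# toy usage: occlude a region labeled 3 (a hypothetical "arm" label)
H, W = 8, 8
img = np.zeros((H, W, 3))
parse = np.zeros((H, W), dtype=int)
parse[2:5, 2:5] = 3                 # pretend these pixels are the arm
tex = np.ones((H, W, 3))            # flat occluding texture
occluded = semantic_mixup(img, parse, tex, part_id=3)
```

With `alpha=1.0` this is a hard copy-and-paste: the masked pixels are replaced by the texture and all other pixels are untouched, yielding the semantic-specific occluded image used alongside the try-on image during training.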
Related papers
- Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On [29.217423805933727]
Diffusion model-based approaches have recently become popular, as they are excellent at image synthesis tasks.
We propose a Texture-Preserving Diffusion (TPD) model for virtual try-on, which enhances the fidelity of the results.
We further propose a diffusion-based method that predicts a precise inpainting mask from the person and reference garment images.
arXiv Detail & Related papers (2024-04-01T12:43:22Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z)
- Single Stage Warped Cloth Learning and Semantic-Contextual Attention Feature Fusion for Virtual TryOn [5.790630195329777]
Image-based virtual try-on aims to fit an in-shop garment onto a clothed person image.
Garment warping, which aligns the target garment with the corresponding body parts in the person image, is a crucial step in achieving this goal.
We propose a novel single-stage framework that implicitly learns the same without explicit multi-stage learning.
arXiv Detail & Related papers (2023-10-08T06:05:01Z)
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- AMICO: Amodal Instance Composition [40.03865667370814]
Image composition aims to blend multiple objects to form a harmonized image.
We present Amodal Instance Composition for blending imperfect objects onto a target image.
Our results show state-of-the-art performance on public COCOA and KINS benchmarks.
arXiv Detail & Related papers (2022-10-11T23:23:14Z)
- Disentangled Cycle Consistency for Highly-realistic Virtual Try-On [34.97658860425598]
Image virtual try-on replaces the clothes on a person image with a desired in-shop clothes image.
Existing methods formulate virtual try-on as either in-painting or cycle consistency.
We propose a Disentangled Cycle-consistency Try-On Network (DCTON).
arXiv Detail & Related papers (2021-03-17T07:18:55Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.