Arbitrary Virtual Try-On Network: Characteristics Preservation and
Trade-off between Body and Clothing
- URL: http://arxiv.org/abs/2111.12346v1
- Date: Wed, 24 Nov 2021 08:59:56 GMT
- Title: Arbitrary Virtual Try-On Network: Characteristics Preservation and
Trade-off between Body and Clothing
- Authors: Yu Liu and Mingbo Zhao and Zhao Zhang and Haijun Zhang and Shuicheng
Yan
- Abstract summary: We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
- Score: 85.74977256940855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based virtual try-on system has achieved some encouraging
progress recently, but there still remain several big challenges that need to
be solved, such as trying on arbitrary clothes of all types, trying on the
clothes from one category to another and generating image-realistic results
with few artifacts. To handle these issues, in this paper we first collect a new
dataset covering all types of clothes, i.e., tops, bottoms, and whole-body
outfits, each of which has multiple categories with rich clothing characteristics
such as patterns, logos, and other details. Based on this dataset, we then
propose the Arbitrary Virtual Try-On Network (AVTON) that is utilized for
all-type clothes, which can synthesize realistic try-on images by preserving
and trading off characteristics of the target clothes and the reference person.
Our approach includes three modules: 1) Limbs Prediction Module, which is
utilized for predicting the human body parts by preserving the characteristics
of the reference person. This is especially good for handling cross-category
try-on task (\eg long sleeves \(\leftrightarrow\) short sleeves or long pants
\(\leftrightarrow\) skirts, \etc), where the exposed arms or legs with the skin
colors and details can be reasonably predicted; 2) Improved Geometric Matching
Module, which is designed to warp clothes according to the geometry of the
target person. We improve the TPS-based warping method with a compactly
supported radial basis function (Wendland's \(\Psi\)-function); 3) Trade-Off Fusion
Module, which is to trade off the characteristics of the warped clothes and the
reference person. This module makes the generated try-on images look more
natural and realistic through a fine-tuned symmetric network structure.
Extensive experiments show that our approach achieves better performance
than state-of-the-art virtual try-on methods.
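The Improved Geometric Matching Module above replaces the usual thin-plate-spline kernel with Wendland's compactly supported \(\Psi\)-function, so each control point only influences a local neighborhood of the warp. A minimal sketch of that idea is shown below; it is not the paper's exact formulation, and the function names, the choice of the \(\psi_{3,1}\) Wendland kernel, the support radius, and the regularization constant are all illustrative assumptions.

```python
import numpy as np

def wendland_psi(r, support=1.0):
    # Wendland's C^2 compactly supported RBF psi_{3,1}(r) = (1 - r)^4 (4r + 1)
    # for r in [0, 1], and exactly 0 outside the support radius.
    r = np.asarray(r, dtype=float) / support
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def fit_rbf_warp(src, dst, support=0.5):
    """Fit a 2D warp from (N, 2) control points src to dst using the
    compactly supported kernel; returns a function that warps points."""
    # Kernel matrix over pairwise distances between control points.
    K = wendland_psi(np.linalg.norm(src[:, None] - src[None, :], axis=-1), support)
    # Solve for per-point displacement weights (small ridge term for stability,
    # an illustrative choice, not from the paper).
    W = np.linalg.solve(K + 1e-8 * np.eye(len(src)), dst - src)

    def warp(pts):
        Kp = wendland_psi(np.linalg.norm(pts[:, None] - src[None, :], axis=-1), support)
        return pts + Kp @ W

    return warp
```

Because the kernel vanishes beyond the support radius, moving one clothing landmark leaves distant regions of the garment untouched, which is the practical advantage over a globally supported TPS kernel.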
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z) - High-Fidelity Virtual Try-on with Large-Scale Unpaired Learning [36.7085107012134]
Virtual try-on (VTON) transfers a target clothing image to a reference person, where clothing fidelity is a key requirement for downstream e-commerce applications.
We propose a novel framework, Boosted Virtual Try-on (BVTON), to leverage large-scale unpaired learning for high-fidelity try-on.
arXiv Detail & Related papers (2024-11-03T15:00:26Z) - ClothCombo: Modeling Inter-Cloth Interaction for Draping Multi-Layered
Clothes [3.8079353598215757]
We present ClothCombo, a pipeline to drape arbitrary combinations of clothes on 3D human models.
Our method utilizes a GNN-based network to efficiently model the interaction between clothes in different layers.
arXiv Detail & Related papers (2023-04-07T06:23:54Z) - Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage, that replaces canonicalization with a learned pose-independent "coarse shape"
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z) - Significance of Skeleton-based Features in Virtual Try-On [3.7552180803118325]
The idea of Virtual Try-On (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home.
Most existing VTON methods produce inconsistent results when a person poses with their arms folded.
We propose two learning-based modules: a synthesizer network and a mask prediction network.
arXiv Detail & Related papers (2022-08-17T05:24:03Z) - Towards Scalable Unpaired Virtual Try-On via Patch-Routed
Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z) - SPG-VTON: Semantic Prediction Guidance for Multi-pose Virtual Try-on [27.870740623131816]
Image-based virtual try-on is challenging in fitting target in-shop clothes onto a reference person under diverse human poses.
We propose an end-to-end Semantic Prediction Guidance multi-pose Virtual Try-On Network (SPG-VTON)
We evaluate the proposed method on the largest multi-pose dataset (MPV) and the DeepFashion dataset.
arXiv Detail & Related papers (2021-08-03T15:40:50Z) - SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z) - LGVTON: A Landmark Guided Approach to Virtual Try-On [4.617329011921226]
Given the images of two people: a person and a model, it generates a rendition of the person wearing the clothes of the model.
This is useful because, on most e-commerce websites, images of clothes alone are not usually available.
arXiv Detail & Related papers (2020-04-01T16:49:57Z) - Towards Photo-Realistic Virtual Try-On by Adaptively
Generating \(\leftrightarrow\) Preserving Image Content [85.24260811659094]
We propose a novel visual try-on network, namely Adaptive Content Generating and Preserving Network (ACGPN)
ACGPN first predicts semantic layout of the reference image that will be changed after try-on.
Second, a clothes warping module warps clothing images according to the generated semantic layout.
Third, an inpainting module for content fusion integrates all information (e.g. reference image, semantic layout, warped clothes) to adaptively produce each semantic part of human body.
arXiv Detail & Related papers (2020-03-12T15:55:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.