Limb-Aware Virtual Try-On Network with Progressive Clothing Warping
- URL: http://arxiv.org/abs/2503.14074v1
- Date: Tue, 18 Mar 2025 09:52:41 GMT
- Title: Limb-Aware Virtual Try-On Network with Progressive Clothing Warping
- Authors: Shengping Zhang, Xiaoyu Han, Weigang Zhang, Xiangyuan Lan, Hongxun Yao, Qingming Huang
- Abstract summary: Image-based virtual try-on aims to transfer an in-shop clothing image to a person image. Most existing methods adopt a single global deformation to perform clothing warping directly. We propose a Limb-aware Virtual Try-on Network named PL-VTON, which performs fine-grained clothing warping progressively.
- Score: 64.84181064722084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based virtual try-on aims to transfer an in-shop clothing image to a person image. Most existing methods adopt a single global deformation to perform clothing warping directly, which lacks fine-grained modeling of in-shop clothing and leads to distorted clothing appearance. In addition, existing methods usually fail to generate limb details well because they are limited by the used clothing-agnostic person representation without referring to the limb textures of the person image. To address these problems, we propose Limb-aware Virtual Try-on Network named PL-VTON, which performs fine-grained clothing warping progressively and generates high-quality try-on results with realistic limb details. Specifically, we present Progressive Clothing Warping (PCW) that explicitly models the location and size of in-shop clothing and utilizes a two-stage alignment strategy to progressively align the in-shop clothing with the human body. Moreover, a novel gravity-aware loss that considers the fit of the person wearing clothing is adopted to better handle the clothing edges. Then, we design Person Parsing Estimator (PPE) with a non-limb target parsing map to semantically divide the person into various regions, which provides structural constraints on the human body and therefore alleviates texture bleeding between clothing and body regions. Finally, we introduce Limb-aware Texture Fusion (LTF) that focuses on generating realistic details in limb regions, where a coarse try-on result is first generated by fusing the warped clothing image with the person image, then limb textures are further fused with the coarse result under limb-aware guidance to refine limb details. Extensive experiments demonstrate that our PL-VTON outperforms the state-of-the-art methods both qualitatively and quantitatively.
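The Limb-aware Texture Fusion step described above amounts to mask-guided compositing: limb textures from the original person image are blended into the coarse try-on result under a limb-aware guidance map. A minimal sketch of that idea follows; the function name, array shapes, and the use of a simple per-pixel alpha blend are illustrative assumptions, not the paper's actual implementation (which learns the guidance map and operates on deep features).

```python
import numpy as np

def limb_aware_fusion(coarse_tryon, person_img, limb_mask, guidance):
    """Mask-guided compositing in the spirit of LTF: blend limb
    textures from the original person image into the coarse try-on
    result. `guidance` stands in for the paper's learned limb-aware
    guidance map; `limb_mask` restricts blending to limb regions.
    Images are HxWx3 float arrays; maps are HxW in [0, 1]."""
    alpha = (guidance * limb_mask)[..., None]  # per-pixel blend weight
    return alpha * person_img + (1.0 - alpha) * coarse_tryon

# Toy 2x2 RGB example: black coarse result, white person image.
coarse = np.zeros((2, 2, 3))
person = np.ones((2, 2, 3))
limb_mask = np.array([[1.0, 0.0],
                      [0.0, 1.0]])   # limb pixels on the diagonal
guidance = np.full((2, 2), 0.5)      # uniform 50% blend weight

out = limb_aware_fusion(coarse, person, limb_mask, guidance)
# Limb pixels move halfway toward the person texture; others are unchanged.
```

In the paper the blend happens under learned limb-aware guidance rather than a fixed alpha, but the compositing structure is the same: non-limb regions keep the coarse result, limb regions are refined with real textures.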
Related papers
- Shape-Guided Clothing Warping for Virtual Try-On [6.750870148213539]
Image-based virtual try-on aims to seamlessly fit in-shop clothing to a person image.
We propose a novel shape-guided clothing warping method for virtual try-on, dubbed SCW-VTON.
arXiv Detail & Related papers (2025-04-21T17:08:36Z)
- Progressive Limb-Aware Virtual Try-On [14.334222729238608]
Existing image-based virtual try-on methods directly transfer specific clothing to a human image. We present a progressive virtual try-on framework, named PL-VTON, which performs pixel-level clothing warping. We also propose a Limb-aware Texture Fusion module to estimate high-quality details in limb regions.
arXiv Detail & Related papers (2025-03-16T17:41:02Z)
- Significance of Anatomical Constraints in Virtual Try-On [3.5002397743250504]
A VTON system takes a clothing source and a person's image to predict the try-on output of the person in the given clothing.
Existing methods fail by generating inaccurate clothing deformations.
We propose a part-based warping approach that divides the clothing into independently warpable parts to warp them separately and later combine them.
arXiv Detail & Related papers (2024-01-04T07:43:40Z)
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves performance equivalent to the state-of-the-art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- Fill in Fabrics: Body-Aware Self-Supervised Inpainting for Image-Based Virtual Try-On [3.5698678013121334]
We propose a self-supervised conditional generative adversarial network based framework comprised of a Fabricator and a Segmenter, Warper and Fuser.
The Fabricator reconstructs the clothing image when provided with a masked clothing as input, and learns the overall structure of the clothing by filling in fabrics.
A virtual try-on pipeline is then trained by transferring the learned representations from the Fabricator to Warper in an effort to warp and refine the target clothing.
arXiv Detail & Related papers (2022-10-03T13:25:31Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Significance of Skeleton-based Features in Virtual Try-On [3.7552180803118325]
The idea of Virtual Try-On (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home.
Most of the existing VTON methods produce inconsistent results when a person poses with their arms folded.
We propose two learning-based modules: a synthesizer network and a mask prediction network.
arXiv Detail & Related papers (2022-08-17T05:24:03Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN includes an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- PISE: Person Image Synthesis and Editing with Decoupled GAN [64.70360318367943]
We propose PISE, a novel two-stage generative model for Person Image Synthesis and Editing.
For human pose transfer, we first synthesize a human parsing map aligned with the target pose to represent the shape of clothing.
To decouple the shape and style of clothing, we propose joint global and local per-region encoding and normalization.
arXiv Detail & Related papers (2021-03-06T04:32:06Z)
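The per-region encoding and normalization that PISE uses to decouple clothing shape from style can be sketched as region-wise feature normalization: standardize features inside each parsing region, then re-inject a per-region style as a scale and shift. The sketch below is an illustrative simplification under that assumption; the function name, the `styles` dictionary, and the use of scalar (gamma, beta) pairs are hypothetical, not PISE's actual API, which learns these parameters jointly with global codes.

```python
import numpy as np

def per_region_norm(feat, parsing, styles, eps=1e-5):
    """Region-wise normalization in the spirit of PISE's joint
    global/local per-region encoding: standardize features inside
    each parsing region, then apply a per-region style (gamma, beta).
    `feat` and `parsing` are HxW arrays; `styles` maps a region id
    to its (scale, shift) pair. Names here are illustrative only."""
    out = np.array(feat, dtype=float)
    for region_id, (gamma, beta) in styles.items():
        m = parsing == region_id
        if not m.any():
            continue  # region absent from this parsing map
        x = feat[m]
        out[m] = gamma * (x - x.mean()) / (x.std() + eps) + beta
    return out

# Toy example: two regions (0 = body, 1 = clothing) with distinct styles.
feat = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
parsing = np.array([[0, 0],
                    [1, 1]])
styles = {0: (1.0, 0.0),   # body: plain standardization
          1: (2.0, 5.0)}   # clothing: rescaled and shifted
out = per_region_norm(feat, parsing, styles)
```

Because the normalization statistics are computed per region rather than per image, editing one region's style leaves the other regions' features untouched, which is the decoupling the paper is after.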
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.