Shape-Guided Clothing Warping for Virtual Try-On
- URL: http://arxiv.org/abs/2504.15232v1
- Date: Mon, 21 Apr 2025 17:08:36 GMT
- Title: Shape-Guided Clothing Warping for Virtual Try-On
- Authors: Xiaoyu Han, Shunyuan Zheng, Zonglin Li, Chenyang Wang, Xin Sun, Quanling Meng, et al.
- Abstract summary: Image-based virtual try-on aims to seamlessly fit in-shop clothing to a person image. We propose a novel shape-guided clothing warping method for virtual try-on, dubbed SCW-VTON.
- Score: 6.750870148213539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-based virtual try-on aims to seamlessly fit in-shop clothing to a person image while maintaining pose consistency. Existing methods commonly employ the thin plate spline (TPS) transformation or appearance flow to deform in-shop clothing for aligning with the person's body. Despite their promising performance, these methods often lack precise control over fine details, leading to inconsistencies in shape between clothing and the person's body as well as distortions in exposed limb regions. To tackle these challenges, we propose a novel shape-guided clothing warping method for virtual try-on, dubbed SCW-VTON, which incorporates global shape constraints and additional limb textures to enhance the realism and consistency of the warped clothing and try-on results. To integrate global shape constraints for clothing warping, we devise a dual-path clothing warping module comprising a shape path and a flow path. The former path captures the clothing shape aligned with the person's body, while the latter path leverages the mapping between the pre- and post-deformation of the clothing shape to guide the estimation of appearance flow. Furthermore, to alleviate distortions in limb regions of try-on results, we integrate detailed limb guidance by developing a limb reconstruction network based on masked image modeling. Through the utilization of SCW-VTON, we are able to generate try-on results with enhanced clothing shape consistency and precise control over details. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods both qualitatively and quantitatively. The code is available at https://github.com/xyhanHIT/SCW-VTON.
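The abstract contrasts thin plate spline (TPS) warping with appearance flow, and SCW-VTON's flow path builds on the latter. As a concrete point of reference, below is a minimal PyTorch sketch of generic appearance-flow warping (dense per-pixel offsets sampled with grid_sample); shapes and names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(clothing: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a clothing image with a dense appearance flow.

    clothing: (B, 3, H, W) image tensor
    flow:     (B, 2, H, W) per-pixel offsets in normalized [-1, 1] coords
    """
    b, _, h, w = clothing.shape
    # Identity sampling grid in normalized coordinates, x before y.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Offset the grid by the predicted flow and bilinearly sample.
    grid = grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(clothing, grid, align_corners=True)

cloth = torch.randn(1, 3, 256, 192)
flow = torch.zeros(1, 2, 256, 192)   # zero flow = identity warp
warped = warp_with_flow(cloth, flow)
assert torch.allclose(warped, cloth, atol=1e-5)
```

In a full pipeline the flow would be predicted by a network conditioned on the person representation; here a zero flow simply verifies the sampler reproduces the input.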
Related papers
- Limb-Aware Virtual Try-On Network with Progressive Clothing Warping [64.84181064722084]
Image-based virtual try-on aims to transfer an in-shop clothing image to a person image. Most existing methods adopt a single global deformation to perform clothing warping directly. We propose the Limb-aware Virtual Try-on Network, named PL-VTON, which performs fine-grained clothing warping progressively.
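A hedged sketch of the progressive idea this summary describes: a coarse global flow followed by a residual refinement stage that also sees the coarsely warped clothing. The modules are simplified placeholders, not PL-VTON's actual networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(img, flow):
    # Grid-sample warp, same mechanism as the sketch above, inlined here.
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), -1).unsqueeze(0).expand(b, -1, -1, -1)
    return F.grid_sample(img, grid + flow.permute(0, 2, 3, 1),
                         align_corners=True)

def flow_head(in_ch):
    # Tiny stand-in for a flow-prediction network.
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 2, 3, padding=1))

coarse, refine = flow_head(6), flow_head(9)   # channels: cloth+person (+warp)
cloth = torch.randn(1, 3, 256, 192)
person = torch.randn(1, 3, 256, 192)

flow1 = coarse(torch.cat([cloth, person], dim=1))         # stage 1: global flow
warp1 = flow_warp(cloth, flow1)
delta = refine(torch.cat([cloth, person, warp1], dim=1))  # stage 2: residual
warp2 = flow_warp(cloth, flow1 + delta)                   # refined warp
```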
arXiv Detail & Related papers (2025-03-18T09:52:41Z)
- Progressive Limb-Aware Virtual Try-On [14.334222729238608]
Existing image-based virtual try-on methods directly transfer specific clothing to a human image. We present a progressive virtual try-on framework, named PL-VTON, which performs pixel-level clothing warping. We also propose a Limb-aware Texture Fusion module to estimate high-quality details in limb regions.
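The Limb-aware Texture Fusion step can be pictured as masked compositing; the toy snippet below assumes a soft limb mask and an estimated limb texture (dummy tensors here, predicted by the real module).

```python
import torch

try_on = torch.rand(1, 3, 256, 192)     # intermediate try-on result
limb_tex = torch.rand(1, 3, 256, 192)   # estimated limb texture (dummy here)
limb_mask = torch.rand(1, 1, 256, 192)  # soft mask of exposed limb regions

# Composite the limb texture into the result only where the mask is active.
fused = limb_mask * limb_tex + (1.0 - limb_mask) * try_on
```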
arXiv Detail & Related papers (2025-03-16T17:41:02Z)
- GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon [5.790630195329777]
We introduce a novel graph-based warping technique which emphasizes the value of context in garment flow.
Our method, validated on VITON-HD and Dresscode datasets, showcases substantial improvement in garment warping, texture preservation, and overall realism.
arXiv Detail & Related papers (2024-06-04T10:29:18Z)
- Fill in Fabrics: Body-Aware Self-Supervised Inpainting for Image-Based Virtual Try-On [3.5698678013121334]
We propose a self-supervised framework based on a conditional generative adversarial network, comprising a Fabricator and a Segmenter, Warper, and Fuser.
The Fabricator reconstructs the clothing image when provided with a masked clothing image as input, and learns the overall structure of the clothing by filling in fabrics.
A virtual try-on pipeline is then trained by transferring the learned representations from the Fabricator to the Warper in an effort to warp and refine the target clothing.
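The Fabricator's self-supervised objective amounts to masked reconstruction: hide random patches of the clothing image and train the network to fill them in. The sketch below uses a stand-in encoder-decoder and a simplified masking scheme, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

fabricator = nn.Sequential(             # stand-in encoder-decoder
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(fabricator.parameters(), lr=1e-4)

cloth = torch.rand(8, 3, 256, 192)                 # batch of clothing images
keep = (torch.rand(8, 1, 32, 24) > 0.5).float()    # coarse keep/hide pattern
keep = F.interpolate(keep, size=(256, 192), mode="nearest")

opt.zero_grad()
recon = fabricator(cloth * keep)                   # hidden patches zeroed out
loss = ((recon - cloth) ** 2 * (1 - keep)).mean()  # penalize hidden regions
loss.backward()
opt.step()
```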
arXiv Detail & Related papers (2022-10-03T13:25:31Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Style-Based Global Appearance Flow for Virtual Try-On [119.95115739956661]
A novel global appearance flow estimation model is proposed in this work.
Experiment results on a popular virtual try-on benchmark show that our method achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-04-03T10:58:04Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), which facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)
- Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates information from the model and the clothing to generate the warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.