Fill in Fabrics: Body-Aware Self-Supervised Inpainting for Image-Based
Virtual Try-On
- URL: http://arxiv.org/abs/2210.00918v1
- Date: Mon, 3 Oct 2022 13:25:31 GMT
- Title: Fill in Fabrics: Body-Aware Self-Supervised Inpainting for Image-Based
Virtual Try-On
- Authors: H. Zunair, Y. Gobeil, S. Mercier, and A. Ben Hamza
- Abstract summary: We propose a self-supervised conditional generative adversarial network-based framework comprising a Fabricator and a unified virtual try-on pipeline with a Segmenter, Warper and Fuser.
The Fabricator reconstructs the clothing image when provided with a masked clothing item as input, learning the overall structure of the clothing by filling in fabrics.
A virtual try-on pipeline is then trained by transferring the learned representations from the Fabricator to the Warper in order to warp and refine the target clothing.
- Score: 3.5698678013121334
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Previous virtual try-on methods usually focus on aligning a clothing item
with a person, limiting their ability to exploit the complex pose, shape and
skin color of the person, as well as the overall structure of the clothing,
which is vital to photo-realistic virtual try-on. To address this potential
weakness, we propose a fill in fabrics (FIFA) model, a self-supervised
conditional generative adversarial network-based framework comprising a
Fabricator and a unified virtual try-on pipeline with a Segmenter, Warper and
Fuser. The Fabricator aims to reconstruct the clothing image when provided with
a masked clothing item as input, and learns the overall structure of the
clothing by filling in fabrics. A virtual try-on pipeline is then trained by
transferring the learned representations from the Fabricator to the Warper to warp
and refine the target clothing. We also propose to use a multi-scale structural
constraint to enforce global context at multiple scales while warping the
target clothing to better fit the pose and shape of the person. Extensive
experiments demonstrate that our FIFA model achieves state-of-the-art results
on the standard VITON dataset for virtual try-on of clothing items, and is
shown to be effective at handling complex poses and retaining the texture and
embroidery of the clothing.
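To make the Fabricator's objective concrete, below is a minimal PyTorch sketch of one self-supervised training step: mask a patch of the clothing image, reconstruct it, and apply an L1 plus adversarial loss. The `fabricator` and `discriminator` modules, the square-patch masking, and the loss weighting are all assumptions for illustration; the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def random_mask(cloth, size=64):
    """Zero out a random square patch per image (hypothetical masking scheme)."""
    b, _, h, w = cloth.shape  # assumes h, w > size
    masked = cloth.clone()
    for i in range(b):
        y = torch.randint(0, h - size, (1,)).item()
        x = torch.randint(0, w - size, (1,)).item()
        masked[i, :, y:y + size, x:x + size] = 0.0
    return masked

def fabricator_step(fabricator, discriminator, opt_g, opt_d, cloth):
    """One self-supervised step: reconstruct masked clothing with a GAN objective."""
    masked = random_mask(cloth)
    fake = fabricator(masked)  # fill in the missing fabric

    # Discriminator: real garments vs. reconstructions.
    opt_d.zero_grad()
    d_real = discriminator(cloth)
    d_fake = discriminator(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the original garment.
    opt_g.zero_grad()
    d_out = discriminator(fake)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    rec_loss = F.l1_loss(fake, cloth)
    g_loss = adv_loss + 10.0 * rec_loss  # the 10x L1 weight is an assumption
    g_loss.backward()
    opt_g.step()
    return g_loss.item(), d_loss.item()
```

Per the abstract, the Warper would then be initialized from the Fabricator's learned representations before the try-on pipeline is trained.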
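Likewise, one plausible reading of the multi-scale structural constraint is a reconstruction loss applied over an image pyramid, so the warped garment must agree with the target at coarse scales (global structure) as well as at full resolution (texture and embroidery). A sketch under that assumption; the paper's exact formulation may differ:

```python
import torch.nn.functional as F

def multiscale_structural_loss(warped, target, scales=(1.0, 0.5, 0.25)):
    """L1 between warped and ground-truth clothing at several resolutions.

    Downsampled scales enforce global garment structure; the full scale
    preserves fine texture. The scale set and equal weighting are assumptions.
    """
    total = 0.0
    for s in scales:
        if s == 1.0:
            w, t = warped, target
        else:
            w = F.interpolate(warped, scale_factor=s, mode="bilinear", align_corners=False)
            t = F.interpolate(target, scale_factor=s, mode="bilinear", align_corners=False)
        total = total + F.l1_loss(w, t)
    return total / len(scales)
```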
Related papers
- Better Fit: Accommodate Variations in Clothing Types for Virtual Try-on [25.550019373321653]
Image-based virtual try-on aims to transfer target in-shop clothing to a dressed model image.
We propose an adaptive mask training paradigm that dynamically adjusts training masks.
For unpaired try-on validation, we construct a comprehensive cross-try-on benchmark.
arXiv Detail & Related papers (2024-03-13T12:07:14Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D Simulated Dataset [5.260305201345232]
We propose a novel virtual try-on method called ClothFit.
It can predict the draping shape of a garment on a target body based on the actual size of the garment and human attributes.
Our experimental results demonstrate that ClothFit significantly outperforms existing state-of-the-art methods in producing photo-realistic virtual try-on results.
arXiv Detail & Related papers (2023-06-24T08:57:36Z)
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves results on par with the state of the art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- Garment Avatars: Realistic Cloth Driving using Pattern Registration [39.936812232884954]
We propose an end-to-end pipeline for building drivable representations for clothing.
A Garment Avatar is an expressive and fully-drivable geometry model for a piece of clothing.
We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application.
arXiv Detail & Related papers (2022-06-07T15:06:55Z)
- Dressing in the Wild by Watching Dance Videos [69.7692630502019]
This paper attends to virtual try-on in real-world scenes and brings improvements in authenticity and naturalness.
We propose a novel generative network called wFlow that effectively extends garment transfer to in-the-wild contexts.
arXiv Detail & Related papers (2022-03-29T08:05:45Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Per Garment Capture and Synthesis for Real-time Virtual Try-on [15.128477359632262]
Existing image-based works try to synthesize a try-on image from a single image of a target garment.
It is difficult to reproduce changes in wrinkles caused by pose and body size, as well as pulling and stretching of the garment by hand.
We propose an alternative per garment capture and synthesis workflow to handle such rich interactions by training the model with many systematically captured images.
arXiv Detail & Related papers (2021-09-10T03:49:37Z)
- Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates information of model and clothing to generate warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z)
- Cloth Interactive Transformer for Virtual Try-On [106.21605249649957]
We propose a novel two-stage cloth interactive transformer (CIT) method for the virtual try-on task.
In the first stage, we design a CIT matching block, aiming to precisely capture the long-range correlations between the cloth-agnostic person information and the in-shop cloth information.
In the second stage, we introduce a CIT reasoning block for establishing global mutual interactive dependencies among the person representation, the warped clothing item, and the corresponding warped cloth mask (a generic attention sketch appears after this list).
arXiv Detail & Related papers (2021-04-12T14:45:32Z)
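The CIT matching block above is described as capturing long-range correlations between cloth-agnostic person features and in-shop cloth features; that is the kind of relationship a standard cross-attention layer models. The sketch below is a generic cross-attention analogue with hypothetical names and dimensions, not the paper's actual block:

```python
import torch
import torch.nn as nn

class PersonClothCrossAttention(nn.Module):
    """Generic cross-attention: person features attend to in-shop cloth features.

    Illustrative analogue of a block that captures long-range person-cloth
    correlations; not the CIT architecture itself.
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, person_feats, cloth_feats):
        # person_feats: (B, N_person, dim), cloth_feats: (B, N_cloth, dim)
        attended, _ = self.attn(query=person_feats, key=cloth_feats, value=cloth_feats)
        return self.norm(person_feats + attended)  # residual connection

# Usage on dummy feature sequences (flattened spatial maps):
person = torch.randn(2, 192, 256)
cloth = torch.randn(2, 192, 256)
out = PersonClothCrossAttention()(person, cloth)  # shape: (2, 192, 256)
```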
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.