Per Garment Capture and Synthesis for Real-time Virtual Try-on
- URL: http://arxiv.org/abs/2109.04654v1
- Date: Fri, 10 Sep 2021 03:49:37 GMT
- Title: Per Garment Capture and Synthesis for Real-time Virtual Try-on
- Authors: Toby Chong, I-Chao Shen, Nobuyuki Umetani, Takeo Igarashi
- Abstract summary: Existing image-based works try to synthesize a try-on image from a single image of a target garment.
It is difficult to reproduce the change of wrinkles caused by pose and body size change, as well as pulling and stretching of the garment by hand.
We propose an alternative per garment capture and synthesis workflow to handle such rich interactions by training the model with many systematically captured images.
- Score: 15.128477359632262
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Virtual try-on is a promising application of computer graphics and human
computer interaction that can have a profound real-world impact especially
during this pandemic. Existing image-based works try to synthesize a try-on
image from a single image of a target garment, but this inherently limits the
ability to react to possible interactions. It is difficult to reproduce the
changes in wrinkles caused by changes in pose and body size, as well as pulling and
stretching of the garment by hand. In this paper, we propose an alternative per
garment capture and synthesis workflow to handle such rich interactions by
training the model with many systematically captured images. Our workflow is
composed of two parts: garment capturing and clothed person image synthesis. We
designed an actuated mannequin and an efficient capturing process that collects
the detailed deformations of the target garments under diverse body sizes and
poses. Furthermore, we proposed to use a custom-designed measurement garment,
and we captured paired images of the measurement garment and the target
garments. We then learn a mapping between the measurement garment and the
target garments using deep image-to-image translation. The customer can then
try on the target garments interactively during online shopping.
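The abstract describes the synthesis stage only as a learned mapping from the measurement-garment image to the target-garment image via deep image-to-image translation. The sketch below illustrates one plausible reading of that setup as a paired, pix2pix-style translation step; the architecture, loss weights, and names (`TinyUNetGenerator`, `PatchDiscriminator`, `train_step`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: paired image-to-image translation from a measurement-garment
# image to a target-garment image. All module shapes and weights are toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNetGenerator(nn.Module):
    """Toy encoder-decoder standing in for the translation network."""
    def __init__(self, in_ch=3, out_ch=3, feat=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

class PatchDiscriminator(nn.Module):
    """Toy PatchGAN-style critic over (measurement, candidate) image pairs."""
    def __init__(self, in_ch=6, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, 1, 4, stride=2, padding=1),
        )

    def forward(self, measurement, candidate):
        return self.net(torch.cat([measurement, candidate], dim=1))

def train_step(G, D, opt_g, opt_d, measurement, target, lambda_l1=100.0):
    """One paired step: measurement-garment image -> target-garment image."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: real captured pairs vs. generated pairs.
    with torch.no_grad():
        fake = G(measurement)
    real_logits = D(measurement, target)
    fake_logits = D(measurement, fake)
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the critic and stay close to the captured target.
    fake = G(measurement)
    fake_logits = D(measurement, fake)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + \
             lambda_l1 * F.l1_loss(fake, target)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At inference time, the customer's live measurement-garment image would be fed through the generator to obtain the try-on rendering, consistent with the interactive online-shopping use the abstract describes.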
Related papers
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 addresses a virtual dressing task that generates freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet (a minimal sketch of this frozen/trainable split appears after this list).
arXiv Detail & Related papers (2024-07-17T16:26:30Z) - AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using
Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D Simulated Dataset [5.260305201345232]
We propose a novel virtual try-on method called ClothFit.
It can predict the draping shape of a garment on a target body based on the actual size of the garment and human attributes.
Our experimental results demonstrate that ClothFit significantly improves upon existing state-of-the-art methods in terms of photo-realistic virtual try-on results.
arXiv Detail & Related papers (2023-06-24T08:57:36Z)
- Fill in Fabrics: Body-Aware Self-Supervised Inpainting for Image-Based Virtual Try-On [3.5698678013121334]
We propose a self-supervised conditional generative adversarial network-based framework comprising a Fabricator and a Segmenter, Warper, and Fuser.
The Fabricator reconstructs the clothing image when provided with masked clothing as input, and learns the overall structure of the clothing by filling in fabrics.
A virtual try-on pipeline is then trained by transferring the learned representations from the Fabricator to the Warper in order to warp and refine the target clothing.
arXiv Detail & Related papers (2022-10-03T13:25:31Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- Dressing in the Wild by Watching Dance Videos [69.7692630502019]
This paper addresses virtual try-on in real-world scenes and brings improvements in authenticity and naturalness.
We propose a novel generative network called wFlow that effectively extends garment transfer to in-the-wild contexts.
arXiv Detail & Related papers (2022-03-29T08:05:45Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN re-designs a non-controllable generator to accept conditioning on pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- VOGUE: Try-On by StyleGAN Interpolation Optimization [14.327659393182204]
Given an image of a target person and an image of another person wearing a garment, we automatically generate the target person wearing that garment.
At the core of our method is a pose-conditioned StyleGAN2 latent space, which seamlessly combines the areas of interest from each image.
Our algorithm allows for garments to deform according to the given body shape, while preserving pattern and material details.
arXiv Detail & Related papers (2021-01-06T22:01:46Z)
- Pose-Guided Human Animation from a Single Image in the Wild [83.86903892201656]
We present a new pose transfer method for synthesizing a human animation from a single image of a person controlled by a sequence of body poses.
Existing pose transfer methods exhibit significant visual artifacts when applied to a novel scene.
We design a compositional neural network that predicts the silhouette, garment labels, and textures.
We are able to synthesize human animations that can preserve the identity and appearance of the person in a temporally coherent way without any fine-tuning of the network on the testing scene.
arXiv Detail & Related papers (2020-12-07T15:38:29Z)
- GarmentGAN: Photo-realistic Adversarial Fashion Transfer [0.0]
GarmentGAN performs image-based garment transfer through generative adversarial methods.
The framework allows users to virtually try on items before purchase and generalizes to various apparel types.
arXiv Detail & Related papers (2020-03-04T05:01:15Z)
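The IMAGDressing-v1 entry above mentions a hybrid attention module with a frozen self-attention and a trainable cross-attention that injects garment-UNet features into a frozen denoising UNet. The sketch below shows one way such a frozen/trainable split could be wired up; `HybridAttention`, the dimensions, and the residual merge are illustrative assumptions, not the paper's actual module.

```python
# Hedged sketch of a frozen self-attention + trainable cross-attention block.
# Names, dimensions, and the residual merge are hypothetical illustrations.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, dim=320, heads=8):
        super().__init__()
        # Self-attention stands in for weights inherited from the pretrained
        # denoising UNet; it is kept frozen during training.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        for p in self.self_attn.parameters():
            p.requires_grad = False
        # Cross-attention over garment features is the only trainable part.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hidden, garment_feats):
        # hidden:        (B, N, dim) tokens from the frozen denoising UNet
        # garment_feats: (B, M, dim) tokens from the garment UNet
        sa, _ = self.self_attn(hidden, hidden, hidden)
        ca, _ = self.cross_attn(hidden, garment_feats, garment_feats)
        return hidden + sa + ca  # residual merge of both attention paths

if __name__ == "__main__":
    block = HybridAttention()
    h = torch.randn(2, 64, 320)   # denoising-UNet tokens
    g = torch.randn(2, 77, 320)   # garment-UNet tokens
    print(block(h, g).shape)      # torch.Size([2, 64, 320])
```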