Toward Accurate and Realistic Outfits Visualization with Attention to Details
- URL: http://arxiv.org/abs/2106.06593v1
- Date: Fri, 11 Jun 2021 19:53:34 GMT
- Title: Toward Accurate and Realistic Outfits Visualization with Attention to Details
- Authors: Kedan Li, Min Jin Chong, Jeffrey Zhang, Jingen Liu
- Abstract summary: We propose Outfit Visualization Net to capture important visual details necessary for commercial applications.
OVNet consists of 1) a semantic layout generator and 2) an image generation pipeline using multiple coordinated warps.
An interactive interface powered by this method has been deployed on fashion e-commerce websites and received overwhelmingly positive feedback.
- Score: 10.655149697873716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual try-on methods aim to generate images of fashion models wearing
arbitrary combinations of garments. This is a challenging task because the
generated image must appear realistic and accurately display the interaction
between garments. Prior works produce images that are filled with artifacts and
fail to capture important visual details necessary for commercial applications.
We propose Outfit Visualization Net (OVNet) to capture these important details
(e.g. buttons, shading, textures, realistic hemlines, and interactions between
garments) and produce high-quality multiple-garment virtual try-on images.
OVNet consists of 1) a semantic layout generator and 2) an image generation
pipeline using multiple coordinated warps. We train the warper to output
multiple warps using a cascade loss, which refines each successive warp to
focus on poorly generated regions of a previous warp and yields consistent
improvements in detail. In addition, we introduce a method for matching outfits
with the most suitable model, which yields significant improvements both for our
method and for previous try-on methods. Through quantitative and qualitative
analysis, we demonstrate our method generates substantially higher-quality
studio images compared to prior works for multi-garment outfits. An interactive
interface powered by this method has been deployed on fashion e-commerce
websites and received overwhelmingly positive feedback.
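
As a concrete illustration of the cascade idea in the abstract, here is a minimal PyTorch sketch of a loss over multiple coordinated warps, where each successive warp is weighted toward the regions the previous warp got wrong. All names are illustrative assumptions; this is not the OVNet implementation.

```python
# Minimal sketch of a cascade loss over K coordinated warps (illustrative;
# not the OVNet code). Assumes each warper stage outputs a warped garment.
import torch


def cascade_loss(warps, target, eps=1e-6):
    """warps: list of K tensors (B, C, H, W), ordered first to last.
    target: ground-truth garment appearance, (B, C, H, W).
    Each successive warp is weighted toward pixels where the previous
    warp had high error, so later warps focus on poorly generated regions."""
    total = 0.0
    weight = torch.ones_like(target[:, :1])            # uniform for warp 0
    for warp in warps:
        err = (warp - target).abs().mean(dim=1, keepdim=True)  # per-pixel L1
        total = total + (weight * err).mean()
        # next stage emphasizes the regions this stage got wrong
        weight = err.detach() / (err.detach().mean() + eps)
    return total
```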
Related papers
- Improving Virtual Try-On with Garment-focused Diffusion Models (2024-09-12) [91.95830983115474]
Diffusion models have revolutionized generative modeling in numerous image synthesis tasks.
We introduce a new diffusion model, GarDiff, which drives a garment-focused diffusion process.
Experiments on VITON-HD and DressCode datasets demonstrate the superiority of our GarDiff when compared to state-of-the-art VTON approaches.
- IMAGDressing-v1: Customizable Virtual Dressing (2024-07-17) [58.44155202253754]
IMAGDressing-v1 targets the virtual dressing task: generating freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
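
The hybrid attention idea above lends itself to a short sketch: a frozen self-attention path plus a trainable cross-attention path that injects garment tokens into a frozen denoising UNet block. This is a hypothetical rendering under those assumptions, not IMAGDressing-v1's code.

```python
# Hypothetical hybrid attention block: frozen self-attention plus a
# trainable cross-attention injecting garment features (illustrative only).
import torch.nn as nn


class HybridAttention(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        for p in self.self_attn.parameters():          # freeze this branch
            p.requires_grad = False

    def forward(self, x, garment_feats):
        # x: (B, N, dim) tokens of the frozen denoising UNet
        # garment_feats: (B, M, dim) tokens from the garment UNet
        h, _ = self.self_attn(x, x, x)                 # frozen self-attention
        g, _ = self.cross_attn(x, garment_feats, garment_feats)  # trainable
        return x + h + g                               # residual fusion
```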
- GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon (2024-06-04) [5.790630195329777]
We introduce a novel graph-based warping technique that emphasizes the value of context in garment flow.
Our method, validated on VITON-HD and Dresscode datasets, showcases substantial improvement in garment warping, texture preservation, and overall realism.
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild (2024-03-08) [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
- StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On (2023-12-04) [35.227896906556026]
Given a clothing image and a person image, an image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image.
In this work, we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task.
Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by utilizing the inherent knowledge of the pre-trained model in the warping process.
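
One common way to realize a "zero cross-attention" block is to zero-initialize its output projection so that the frozen pre-trained UNet is unaffected at the start of training. The sketch below assumes that design; it is illustrative, not StableVITON's actual implementation.

```python
# Hypothetical "zero cross-attention" block: the output projection is
# zero-initialized so the block starts as an identity and gradually learns
# semantic correspondence to the clothing tokens (not StableVITON's code).
import torch.nn as nn


class ZeroCrossAttention(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)    # zero init: no-op at step 0
        nn.init.zeros_(self.out.bias)

    def forward(self, x, clothing_feats):
        # x: (B, N, dim) latent tokens; clothing_feats: (B, M, dim)
        h, _ = self.attn(x, clothing_feats, clothing_feats)
        return x + self.out(h)             # residual; identity at init
```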
- Single Stage Warped Cloth Learning and Semantic-Contextual Attention Feature Fusion for Virtual TryOn (2023-10-08) [5.790630195329777]
Image-based virtual try-on aims to fit an in-shop garment onto a clothed person image.
Garment warping, which aligns the target garment with the corresponding body parts in the person image, is a crucial step in achieving this goal.
We propose a novel single-stage framework that learns garment warping implicitly, without explicit multi-stage learning.
- Single Stage Virtual Try-on via Deformable Attention Flows (2022-07-19) [51.70606454288168]
Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image.
We develop a novel Deformable Attention Flow (DAFlow) which applies the deformable attention scheme to multi-flow estimation.
Our proposed method achieves state-of-the-art performance both qualitatively and quantitatively.
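
A simplified reading of multi-flow estimation with attention: predict K sampling flows, warp the source features with each, and fuse the results with per-pixel attention weights. The sketch below assumes flows given as offsets in normalized grid coordinates; it is not the DAFlow implementation.

```python
# Simplified multi-flow warping with attention fusion (illustrative; not
# the DAFlow implementation). Flows are offsets in normalized coordinates.
import torch
import torch.nn.functional as F


def multi_flow_warp(feat, flows, attn_logits):
    """feat: (B, C, H, W) source garment features.
    flows: (B, K, 2, H, W) K per-pixel sampling offsets in [-1, 1] space.
    attn_logits: (B, K, H, W) per-pixel scores over the K flows."""
    B, K, _, H, W = flows.shape
    theta = torch.eye(2, 3, device=feat.device).repeat(B, 1, 1)
    base = F.affine_grid(theta, (B, feat.size(1), H, W), align_corners=False)
    warped = []
    for k in range(K):
        grid = base + flows[:, k].permute(0, 2, 3, 1)  # offset identity grid
        warped.append(F.grid_sample(feat, grid, align_corners=False))
    warped = torch.stack(warped, dim=1)                # (B, K, C, H, W)
    attn = attn_logits.softmax(dim=1).unsqueeze(2)     # (B, K, 1, H, W)
    return (attn * warped).sum(dim=1)                  # weighted fusion
```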
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing (2021-11-24) [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
- Cloth Interactive Transformer for Virtual Try-On (2021-04-12) [106.21605249649957]
We propose a novel two-stage cloth interactive transformer (CIT) method for the virtual try-on task.
In the first stage, we design a CIT matching block, aiming to precisely capture the long-range correlations between the cloth-agnostic person information and the in-shop cloth information.
In the second stage, we put forth a CIT reasoning block for establishing global mutual interactive dependencies among person representation, the warped clothing item, and the corresponding warped cloth mask.
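
The matching block's long-range correlations can be pictured as a dense affinity matrix between cloth-agnostic person features and in-shop cloth features. The sketch below is a generic scaled dot-product read-out under that assumption, not the CIT code.

```python
# Generic affinity read-out between person and cloth features (illustrative;
# not the CIT implementation).
import torch


def interactive_matching(person_feats, cloth_feats):
    """person_feats: (B, N, D) cloth-agnostic person tokens.
    cloth_feats: (B, M, D) in-shop cloth tokens.
    Returns, for every person location, a correlation-weighted summary of
    the cloth features (a dense long-range matching read-out)."""
    scale = person_feats.size(-1) ** 0.5
    affinity = torch.einsum('bnd,bmd->bnm', person_feats, cloth_feats) / scale
    attn = affinity.softmax(dim=-1)                    # over cloth locations
    return torch.einsum('bnm,bmd->bnd', attn, cloth_feats)
```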
- Toward Accurate and Realistic Virtual Try-on Through Shape Matching and Multiple Warps (2020-03-22) [25.157142707318304]
A virtual try-on method takes a product image and an image of a model and produces an image of the model wearing the product.
Most methods essentially compute warps from the product image to the model image and combine them using image generation methods.
This paper uses quantitative evaluation on a challenging, novel dataset to demonstrate that (a) for any warping method, one can choose target models automatically to improve results, and (b) learning multiple coordinated specialized warpers offers further improvements on results.
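
Finding (a) suggests a simple retrieval step: embed the product and all candidate models, then pick the most compatible model before warping. A minimal sketch, assuming hypothetical embedding networks that produce comparable vectors (the paper summary above does not specify them):

```python
# Minimal sketch of automatic target-model selection by embedding
# similarity; the embedding networks are assumed, not specified by the
# paper summary above.
import torch
import torch.nn.functional as F


def choose_target_model(product_emb, model_embs):
    """product_emb: (D,) embedding of the product image.
    model_embs: (N, D) embeddings of candidate model images.
    Returns the index of the most compatible model (cosine similarity)."""
    sims = F.cosine_similarity(product_emb.unsqueeze(0), model_embs, dim=1)
    return int(sims.argmax())
```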
- SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On (2020-01-17) [14.198545992098309]
SieveNet is a framework for robust image-based virtual try-on.
We introduce a multi-stage coarse-to-fine warping network to better model fine-grained intricacies.
We also introduce a segmentation mask prior, conditioned on the try-on cloth, to improve the texture transfer network.
This list is automatically generated from the titles and abstracts of the papers on this site.