Street TryOn: Learning In-the-Wild Virtual Try-On from Unpaired Person Images
- URL: http://arxiv.org/abs/2311.16094v3
- Date: Tue, 16 Jul 2024 19:04:41 GMT
- Title: Street TryOn: Learning In-the-Wild Virtual Try-On from Unpaired Person Images
- Authors: Aiyu Cui, Jay Mahajan, Viraj Shah, Preeti Gomathinayagam, Chang Liu, Svetlana Lazebnik
- Abstract summary: We introduce a StreetTryOn benchmark to support in-the-wild virtual try-on applications.
We also propose a novel method to learn virtual try-on from a set of in-the-wild person images directly without requiring paired data.
- Score: 14.616371216662227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most virtual try-on research is motivated to serve the fashion business by generating images to demonstrate garments on studio models at a lower cost. However, virtual try-on should be a broader application that also allows customers to visualize garments on themselves using their own casual photos, known as in-the-wild try-on. Unfortunately, existing methods, which achieve plausible results for studio try-on settings, perform poorly in the in-the-wild context. This is because these methods often require paired images (garment images paired with images of people wearing the same garment) for training. While such paired data is easy to collect from shopping websites for studio settings, it is difficult to obtain for in-the-wild scenes. In this work, we fill the gap by (1) introducing a StreetTryOn benchmark to support in-the-wild virtual try-on applications and (2) proposing a novel method to learn virtual try-on from a set of in-the-wild person images directly without requiring paired data. We tackle the unique challenges, including warping garments to more diverse human poses and rendering more complex backgrounds faithfully, using a novel DensePose warping correction method combined with diffusion-based conditional inpainting. Our experiments show competitive performance for standard studio try-on tasks and SOTA performance for street try-on and cross-domain try-on tasks.
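At its core, DensePose-style garment warping amounts to resampling a garment texture through a dense per-pixel correspondence map: for each pixel on the target person, a (u, v) coordinate says where to sample in the flat garment texture. A minimal toy sketch of that resampling step (the function name, nearest-neighbour sampling, and the identity-map example are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def warp_by_correspondence(garment, uv_map):
    """Warp a garment texture onto a person via a dense UV correspondence map.

    garment: (H, W, 3) array, the flat garment texture.
    uv_map:  (H2, W2, 2) array of (u, v) coords in [0, 1] giving, for each
             target-person pixel, where to sample in the garment texture.
    Returns a (H2, W2, 3) warped image (nearest-neighbour sampling).
    """
    h, w = garment.shape[:2]
    u = np.clip((uv_map[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    v = np.clip((uv_map[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return garment[v, u]

# Toy example: a 4x4 "garment" warped with an identity correspondence map,
# which should reproduce the texture unchanged.
garment = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
vv, uu = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing="ij")
uv = np.stack([uu, vv], axis=-1)
warped = warp_by_correspondence(garment, uv)
assert np.array_equal(warped, garment)
```

In a real pipeline the UV map would come from a DensePose predictor on the target photo, the warp would leave holes and errors at occlusions and unseen garment regions, and those regions are exactly what the abstract's diffusion-based conditional inpainting is meant to fill.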
Related papers
- Try-On-Adapter: A Simple and Flexible Try-On Paradigm [42.2724473500475]
Image-based virtual try-on, widely used in online shopping, aims to generate images of a naturally dressed person conditioned on certain garments.
Previous methods focus on masking certain parts of the original model's standing image and then inpainting the masked areas to generate realistic images of the model wearing the corresponding reference garments.
We propose Try-On-Adapter (TOA), an outpainting paradigm that differs from the existing inpainting paradigm.
arXiv Detail & Related papers (2024-11-15T13:35:58Z)
- Better Fit: Accommodate Variations in Clothing Types for Virtual Try-on [25.550019373321653]
Image-based virtual try-on aims to transfer target in-shop clothing to a dressed model image.
We propose an adaptive mask training paradigm that dynamically adjusts training masks.
For unpaired try-on validation, we construct a comprehensive cross-try-on benchmark.
arXiv Detail & Related papers (2024-03-13T12:07:14Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves performance equivalent to the state of the art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Learning Fashion Compatibility from In-the-wild Images [6.591937706757015]
We propose to learn representations for compatibility prediction from in-the-wild street fashion images through self-supervised learning.
Our pretext task is formulated such that the representations of different items worn by the same person are closer compared to those worn by other people.
We conduct experiments on two popular fashion compatibility benchmarks - Polyvore and Polyvore-Disjoint outfits.
arXiv Detail & Related papers (2022-06-13T09:05:25Z)
- Dressing in the Wild by Watching Dance Videos [69.7692630502019]
This paper addresses virtual try-on in real-world scenes, improving authenticity and naturalness.
We propose a novel generative network called wFlow that effectively extends garment transfer to the in-the-wild context.
arXiv Detail & Related papers (2022-03-29T08:05:45Z)
- Weakly Supervised High-Fidelity Clothing Model Generation [67.32235668920192]
We propose a cheap yet scalable weakly-supervised method called Deep Generative Projection (DGP) to address this specific scenario.
We show that projecting the rough alignment of clothing and body onto the StyleGAN space can yield photo-realistic wearing results.
arXiv Detail & Related papers (2021-12-14T07:15:15Z)
- PhotoApp: Photorealistic Appearance Editing of Head Portraits [97.23638022484153]
We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination in a portrait image.
Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages.
We design a supervised learning problem in the latent space of StyleGAN, combining the best of supervised learning and generative adversarial modeling.
arXiv Detail & Related papers (2021-03-13T08:59:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.