Dressing in Order: Recurrent Person Image Generation for Pose Transfer,
Virtual Try-on and Outfit Editing
- URL: http://arxiv.org/abs/2104.07021v1
- Date: Wed, 14 Apr 2021 17:58:54 GMT
- Title: Dressing in Order: Recurrent Person Image Generation for Pose Transfer,
Virtual Try-on and Outfit Editing
- Authors: Aiyu Cui, Daniel McKee, Svetlana Lazebnik
- Abstract summary: This paper proposes a flexible person generation framework called Dressing in Order (DiOr).
It supports 2D pose transfer, virtual try-on, and several fashion editing tasks.
- Score: 15.764620091391603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a flexible person generation framework called Dressing in
Order (DiOr), which supports 2D pose transfer, virtual try-on, and several
fashion editing tasks. Key to DiOr is a novel recurrent generation pipeline to
sequentially put garments on a person, so that trying on the same garments in
different orders will result in different looks. Our system can produce
dressing effects not achievable by existing work, including different
interactions of garments (e.g., wearing a top tucked into the bottom or over
it), as well as layering of multiple garments of the same type (e.g., jacket
over shirt over t-shirt). DiOr explicitly encodes the shape and texture of each
garment, enabling these elements to be edited separately. Joint training on
pose transfer and inpainting helps with detail preservation and coherence of
generated garments. Extensive evaluations show that DiOr outperforms other
recent methods like ADGAN in terms of output quality, and handles a wide range
of editing functions for which there is no direct supervision.
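To make the recurrent idea concrete, here is a minimal, self-contained sketch of a dressing-in-order loop: each garment's shape (a soft mask) and texture are encoded separately and composited onto a running person state, so applying the same garments in a different order yields a different result. All module names and layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a recurrent "dressing in order" loop. Module names and layer
# sizes are hypothetical placeholders, not the authors' released code.
import torch
import torch.nn as nn

class RecurrentDresser(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.body_encoder = nn.Conv2d(3, feat_dim, 3, padding=1)     # encodes the pose/skin map
        self.shape_encoder = nn.Conv2d(3, 1, 3, padding=1)           # per-garment soft coverage mask
        self.texture_encoder = nn.Conv2d(3, feat_dim, 3, padding=1)  # per-garment texture features
        self.decoder = nn.Conv2d(feat_dim, 3, 3, padding=1)          # renders the final image

    def forward(self, body: torch.Tensor, garments: list) -> torch.Tensor:
        # The hidden state starts as the encoded, undressed body.
        state = self.body_encoder(body)
        for g in garments:  # dressing order matters: later garments overlay earlier ones
            mask = torch.sigmoid(self.shape_encoder(g))    # where the garment covers the body
            texture = self.texture_encoder(g)              # what the garment looks like
            state = mask * texture + (1.0 - mask) * state  # soft overlay onto the current state
        return self.decoder(state)

# The same garments in a different order yield a different composite:
model = RecurrentDresser()
body = torch.randn(1, 3, 256, 256)
shirt, jacket = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
out_a = model(body, [shirt, jacket])  # jacket layered over the shirt
out_b = model(body, [jacket, shirt])  # shirt layered over the jacket
```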
Related papers
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 addresses a virtual dressing task that generates freely editable human images with fixed garments and optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
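As a rough illustration of the hybrid attention idea described above (a frozen self-attention branch plus a trainable cross-attention branch that injects garment features into a frozen denoising UNet), consider the following sketch; the dimensions and wiring are assumptions, not the IMAGDressing-v1 code.

```python
# Rough sketch of a hybrid attention block: frozen self-attention plus trainable
# cross-attention that injects garment features into a denoising UNet's hidden
# states. Shapes and wiring are illustrative assumptions.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, dim: int = 320, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Keep the pretrained self-attention frozen; only the cross-attention trains.
        for p in self.self_attn.parameters():
            p.requires_grad = False

    def forward(self, hidden: torch.Tensor, garment_feats: torch.Tensor) -> torch.Tensor:
        # hidden:        (B, N, dim) tokens from the frozen denoising UNet
        # garment_feats: (B, M, dim) tokens from the garment UNet
        h, _ = self.self_attn(hidden, hidden, hidden)
        hidden = hidden + h
        g, _ = self.cross_attn(hidden, garment_feats, garment_feats)
        return hidden + g
```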
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- TryOnDiffusion: A Tale of Two UNets [46.54704157349114]
Given two images depicting a person and a garment worn by another person, our goal is to generate a visualization of how the garment might look on the input person.
A key challenge is to synthesize a detail-preserving visualization of the garment, while warping the garment to accommodate a significant body pose and shape change.
We propose a diffusion-based architecture that unifies two UNets (referred to as Parallel-UNet).
arXiv Detail & Related papers (2023-06-14T06:25:58Z)
- Transformer-based Graph Neural Networks for Outfit Generation [22.86041284499166]
We propose a transformer-based architecture, TGNN, which exploits multi-headed self-attention to capture relations between clothing items in a graph as a message-passing step in convolutional graph neural networks.
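The following sketch shows the generic pattern of using multi-headed self-attention as a message-passing step over a graph of clothing items, with attention restricted by the graph's adjacency; it is illustrative only and not the TGNN authors' implementation.

```python
# Illustrative sketch of multi-headed self-attention as message passing over a
# graph of clothing items (adjacency-masked attention). Generic pattern, not the
# TGNN implementation.
import torch
import torch.nn as nn

class GraphSelfAttentionLayer(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, items: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # items:     (B, N, dim) one embedding per clothing item (node)
        # adjacency: (B, N, N) boolean, True where two items are connected
        #            (assumed to include self-loops so every node can attend somewhere)
        mask = ~adjacency  # MultiheadAttention blocks positions where the mask is True
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)  # (B*heads, N, N)
        msg, _ = self.attn(items, items, items, attn_mask=mask)
        return items + self.ffn(items + msg)
```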
arXiv Detail & Related papers (2023-04-17T09:18:45Z)
- Wearing the Same Outfit in Different Ways -- A Controllable Virtual Try-on Method [9.176056742068813]
An outfit visualization method generates an image of a person wearing real garments from images of those garments.
Current methods can produce images that look realistic and preserve garment identity, captured in details such as collar, cuffs, texture, hem, and sleeve length.
We describe an outfit visualization method that controls drape while preserving garment identity.
arXiv Detail & Related papers (2022-11-29T01:01:01Z)
- DIG: Draping Implicit Garment over the Human Body [56.68349332089129]
We propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on shape and pose parameters of an articulated body model.
We show that our method, thanks to its end-to-end differentiability, allows body and garment parameters to be recovered jointly from image observations.
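A conceptual sketch of the draping step, assuming an SMPL-like articulated body model: a learned skinning field predicts per-point blend weights, and linear blend skinning moves canonical garment points with the posed body's bone transforms. All shapes and interfaces here are assumptions, not the DIG code.

```python
# Conceptual sketch of draping garment points with a learned skinning field and
# linear blend skinning (LBS). Joint count, network sizes and the body-model
# interface are assumptions, not the DIG implementation.
import torch
import torch.nn as nn

NUM_JOINTS = 24  # e.g. an SMPL-like articulated body model

class SkinningField(nn.Module):
    """Predicts per-point blend weights conditioned on body shape parameters."""
    def __init__(self, shape_dim: int = 10, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS),
        )

    def forward(self, points: torch.Tensor, shape: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) canonical-space garment points; shape: (shape_dim,)
        inp = torch.cat([points, shape.expand(points.shape[0], -1)], dim=-1)
        return torch.softmax(self.mlp(inp), dim=-1)  # (N, NUM_JOINTS) blend weights

def drape(points: torch.Tensor, weights: torch.Tensor, bone_transforms: torch.Tensor) -> torch.Tensor:
    # bone_transforms: (NUM_JOINTS, 4, 4) rigid transforms from the posed body model
    homo = torch.cat([points, torch.ones(points.shape[0], 1)], dim=-1)  # (N, 4)
    per_joint = torch.einsum('jab,nb->nja', bone_transforms, homo)      # (N, J, 4)
    blended = (weights.unsqueeze(-1) * per_joint).sum(dim=1)            # (N, 4)
    return blended[:, :3]
```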
arXiv Detail & Related papers (2022-09-22T08:13:59Z)
- Dressing in the Wild by Watching Dance Videos [69.7692630502019]
This paper addresses virtual try-on in real-world scenes and improves authenticity and naturalness.
We propose a novel generative network called wFlow that effectively extends garment transfer to in-the-wild contexts.
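Flow-based garment transfer of this kind typically predicts a dense 2D flow field and resamples the garment image onto the target body; the sketch below shows that generic warping step with illustrative layer sizes, and is not the wFlow model itself.

```python
# Generic sketch of flow-based garment warping: a small network predicts per-pixel
# offsets, and the garment image is resampled onto the target pose. Illustrative
# only; not the wFlow architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowWarper(nn.Module):
    def __init__(self):
        super().__init__()
        # Predicts a (dx, dy) offset per pixel from the garment + target-pose pair.
        self.flow_net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, garment: torch.Tensor, pose_map: torch.Tensor) -> torch.Tensor:
        # garment, pose_map: (B, 3, H, W); pose_map is assumed to be a 3-channel pose rendering
        b, _, h, w = garment.shape
        flow = self.flow_net(torch.cat([garment, pose_map], dim=1))  # (B, 2, H, W)
        # Build a normalized sampling grid in [-1, 1] and offset it by the flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)
        return F.grid_sample(garment, grid, align_corners=True)
```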
arXiv Detail & Related papers (2022-03-29T08:05:45Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Apparel-invariant Feature Learning for Apparel-changed Person Re-identification [70.16040194572406]
Most public ReID datasets are collected in a short time window in which persons' appearance rarely changes.
In real-world applications such as a shopping mall, the same person's clothing may change, and different persons may wear similar clothes.
It is therefore critical to learn an apparel-invariant person representation for cases such as clothing changes or several persons wearing similar clothes.
arXiv Detail & Related papers (2020-08-14T03:49:14Z)
- GarmentGAN: Photo-realistic Adversarial Fashion Transfer [0.0]
GarmentGAN performs image-based garment transfer through generative adversarial methods.
The framework allows users to virtually try on items before purchase and generalizes to various apparel types.
arXiv Detail & Related papers (2020-03-04T05:01:15Z)