Wearing the Same Outfit in Different Ways -- A Controllable Virtual
Try-on Method
- URL: http://arxiv.org/abs/2211.16989v1
- Date: Tue, 29 Nov 2022 01:01:01 GMT
- Title: Wearing the Same Outfit in Different Ways -- A Controllable Virtual
Try-on Method
- Authors: Kedan Li, Jeffrey Zhang, Shao-Yu Chang, David Forsyth
- Score: 9.176056742068813
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: An outfit visualization method generates an image of a person wearing real
garments from images of those garments. Current methods can produce images that
look realistic and preserve garment identity, captured in details such as
collar, cuffs, texture, hem, and sleeve length. However, no current method can
both control how the garment is worn -- tucked or untucked, open or closed,
high or low on the waist, etc. -- and generate realistic images that
accurately preserve the properties of the original garment. We describe an
outfit visualization method that controls drape while preserving garment
identity. Our system allows instance-independent editing of garment drape:
a user can construct an edit (e.g. tucking a shirt in a specific way) that
can be applied to all shirts in a garment collection. Garment detail
is preserved by relying on a warping procedure to place the garment on the body
and a generator then supplies fine shading detail. To achieve
instance-independent control, we use control points with garment category-level
semantics to guide the warp. The method produces state-of-the-art quality
images while allowing creative ways to style garments: tops can be tucked or
untucked; jackets worn open or closed; skirts worn
higher or lower on the waist; and so on. The method also allows interactive
control to correct errors in individual renderings. Because the edits are
instance-independent, they can be applied to large pools of garments
automatically and can be conditioned on garment metadata (e.g. all cropped
jackets are worn closed, or all bomber jackets are worn closed).
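The instance-independent edits described in the abstract can be pictured as offsets attached to named, category-level control points rather than to any single garment image. The sketch below is an illustrative assumption, not the authors' implementation: the control-point names (e.g. "hem_left") and coordinate conventions are invented for the example, and the downstream warp and generator are omitted.

```python
# Hedged sketch: an instance-independent drape edit expressed as offsets on
# category-level semantic control points. Point names are hypothetical.

def apply_drape_edit(control_points, edit):
    """Shift a garment's named control points by category-level offsets.

    control_points: {name: (x, y)} for one garment instance
    edit: {name: (dx, dy)} defined once per garment category
    Points not named in the edit are left unchanged.
    """
    out = {}
    for name, (x, y) in control_points.items():
        dx, dy = edit.get(name, (0.0, 0.0))
        out[name] = (x + dx, y + dy)
    return out

# A "tuck" edit raises the hem points; defined once, it can be reused
# for every shirt in a collection.
tuck_edit = {"hem_left": (0.0, -30.0), "hem_right": (0.0, -30.0)}

shirt_a = {"collar": (100, 40), "hem_left": (80, 300), "hem_right": (120, 300)}
shirt_b = {"collar": (98, 40), "hem_left": (75, 310), "hem_right": (125, 310)}

tucked_a = apply_drape_edit(shirt_a, tuck_edit)
tucked_b = apply_drape_edit(shirt_b, tuck_edit)
```

Because the edit is keyed by category-level names, the same dictionary applies to any shirt instance, which is what makes conditioning on garment metadata (e.g. "all bomber jackets") possible.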
Related papers
- IMAGDressing-v1: Customizable Virtual Dressing [58.44155202253754]
IMAGDressing-v1 addresses a virtual dressing task: generating freely editable human images with fixed garments under optional conditions.
IMAGDressing-v1 incorporates a garment UNet that captures semantic features from CLIP and texture features from VAE.
We present a hybrid attention module, including a frozen self-attention and a trainable cross-attention, to integrate garment features from the garment UNet into a frozen denoising UNet.
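The hybrid attention idea summarized above can be illustrated in greatly simplified form: self-attention over the denoising tokens plus cross-attention to garment features, summed per token. This is a hedged sketch, not IMAGDressing-v1's code; it omits the learned query/key/value projections and the frozen/trainable weight split entirely.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(queries, keys, values):
    """Single-head scaled dot-product attention over lists of vectors."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def hybrid_attention(image_tokens, garment_tokens):
    """Sum of self-attention over image tokens and cross-attention to
    garment tokens (the frozen/trainable distinction is not modeled)."""
    self_out = attend(image_tokens, image_tokens, image_tokens)
    cross_out = attend(image_tokens, garment_tokens, garment_tokens)
    return [[a + b for a, b in zip(s, c)]
            for s, c in zip(self_out, cross_out)]
```

With a single garment token, the cross-attention branch simply injects that token's features into every image token, which is the intuition behind conditioning a frozen denoising UNet on garment features.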
arXiv Detail & Related papers (2024-07-17T16:26:30Z)
- SPnet: Estimating Garment Sewing Patterns from a Single Image [10.604555099281173]
This paper presents a novel method for reconstructing 3D garment models from a single image of a posed user.
By inferring the fundamental shape of the garment through sewing patterns from a single image, we can generate 3D garments that can adaptively deform to arbitrary poses.
arXiv Detail & Related papers (2023-12-26T09:51:25Z)
- TryOnDiffusion: A Tale of Two UNets [46.54704157349114]
Given two images depicting a person and a garment worn by another person, our goal is to generate a visualization of how the garment might look on the input person.
A key challenge is to synthesize a detail-preserving visualization of the garment, while warping the garment to accommodate a significant body pose and shape change.
We propose a diffusion-based architecture that unifies two UNets (referred to as Parallel-UNet).
arXiv Detail & Related papers (2023-06-14T06:25:58Z)
- Learning Garment DensePose for Robust Warping in Virtual Try-On [72.13052519560462]
We propose a robust warping method for virtual try-on based on a learned garment DensePose.
Our method achieves performance equivalent to the state of the art on virtual try-on benchmarks.
arXiv Detail & Related papers (2023-03-30T20:02:29Z)
- DIG: Draping Implicit Garment over the Human Body [56.68349332089129]
We propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on shape and pose parameters of an articulated body model.
We show that our method, thanks to its end-to-end differentiability, allows body and garment parameters to be recovered jointly from image observations.
arXiv Detail & Related papers (2022-09-22T08:13:59Z)
- Controllable Garment Transfer [0.726437825413781]
Image-based garment transfer replaces the garment on the target human with the desired garment.
We add a customizable option of "garment tweaking" to the model to control garment attributes such as sleeve length, waist width, and garment texture.
arXiv Detail & Related papers (2022-04-05T03:43:21Z)
- Dressing in the Wild by Watching Dance Videos [69.7692630502019]
This paper addresses virtual try-on in real-world scenes, improving authenticity and naturalness.
We propose a novel generative network called wFlow that effectively extends garment transfer to in-the-wild contexts.
arXiv Detail & Related papers (2022-03-29T08:05:45Z)
- Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing [15.764620091391603]
This paper proposes a flexible person generation framework called Dressing in Order (DiOr).
It supports 2D pose transfer, virtual try-on, and several fashion editing tasks.
arXiv Detail & Related papers (2021-04-14T17:58:54Z)
- BCNet: Learning Body and Cloth Shape from A Single Image [56.486796244320125]
We propose a layered garment representation on top of SMPL and make the skinning weights of the garment independent of the body mesh.
Compared with existing methods, our method can support more garment categories and recover more accurate geometry.
arXiv Detail & Related papers (2020-04-01T03:41:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.