Single Stage Multi-Pose Virtual Try-On
- URL: http://arxiv.org/abs/2211.10715v1
- Date: Sat, 19 Nov 2022 15:02:11 GMT
- Title: Single Stage Multi-Pose Virtual Try-On
- Authors: Sen He, Yi-Zhe Song, Tao Xiang
- Abstract summary: Multi-pose virtual try-on (MPVTON) aims to fit a target garment onto a person at a target pose.
MPVTON provides a better try-on experience, but is also more challenging due to the dual garment and pose editing objectives.
Existing methods adopt a pipeline comprising three disjoint modules including a target semantic layout prediction module, a coarse try-on image generator and a refinement try-on image generator.
In this paper, we propose a novel single stage model for MPVTON. Key to our model is a parallel flow estimation module that predicts the flow fields for both person and garment images conditioned on the target pose.
- Score: 119.95115739956661
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-pose virtual try-on (MPVTON) aims to fit a target garment onto a person
at a target pose. Compared to traditional virtual try-on (VTON) that fits the
garment but keeps the pose unchanged, MPVTON provides a better try-on
experience, but is also more challenging due to the dual garment and pose
editing objectives. Existing MPVTON methods adopt a pipeline comprising three
disjoint modules including a target semantic layout prediction module, a coarse
try-on image generator and a refinement try-on image generator. These models
are trained separately, leading to sub-optimal model training and
unsatisfactory results. In this paper, we propose a novel single stage model
for MPVTON. Key to our model is a parallel flow estimation module that predicts
the flow fields for both person and garment images conditioned on the target
pose. The predicted flows are subsequently used to warp the appearance feature
maps of the person and the garment images to construct a style map. The map is
then used to modulate the target pose's feature map for target try-on image
generation. With the parallel flow estimation design, our model can be trained
end-to-end in a single stage and is more computationally efficient, resulting
in new SOTA performance on existing MPVTON benchmarks. We further introduce
multi-task training and demonstrate that our model can also be applied for
traditional VTON and pose transfer tasks and achieve comparable performance to
SOTA specialized models on both tasks.
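A minimal PyTorch sketch of the pipeline the abstract describes, to make the data flow concrete. Everything here (layer sizes, the assumed 18-channel pose-heatmap input, the SPADE-style modulation) is an illustrative assumption, not the paper's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(feat, flow):
    """Warp a feature map (N, C, H, W) with a normalized flow field (N, 2, H, W)."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=feat.device),
        torch.linspace(-1, 1, w, device=feat.device), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(feat, base + flow.permute(0, 2, 3, 1), align_corners=True)

class ParallelFlowTryOn(nn.Module):
    """Single-stage try-on sketch: flows for the person and the garment are
    predicted in parallel, both conditioned on the target pose."""
    def __init__(self, c=64, pose_ch=18):  # pose_ch: e.g. keypoint heatmaps (assumed)
        super().__init__()
        self.enc_person = nn.Conv2d(3, c, 3, padding=1)
        self.enc_garment = nn.Conv2d(3, c, 3, padding=1)
        self.enc_pose = nn.Conv2d(pose_ch, c, 3, padding=1)
        self.flow_head = nn.Conv2d(3 * c, 4, 3, padding=1)  # two flows, (dx, dy) each
        self.to_style = nn.Conv2d(2 * c, 2 * c, 1)          # style map -> (scale, shift)
        self.decoder = nn.Conv2d(c, 3, 3, padding=1)

    def forward(self, person, garment, pose):
        fp, fg, fo = self.enc_person(person), self.enc_garment(garment), self.enc_pose(pose)
        flows = self.flow_head(torch.cat([fp, fg, fo], dim=1))
        fp_w = backward_warp(fp, flows[:, :2])  # person appearance at target pose
        fg_w = backward_warp(fg, flows[:, 2:])  # garment appearance at target pose
        scale, shift = self.to_style(torch.cat([fp_w, fg_w], dim=1)).chunk(2, dim=1)
        return self.decoder(fo * (1 + scale) + shift)  # modulate pose features, decode
```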
Related papers
- Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution [82.38677987249348]
We present the Qwen2-VL Series, which redefines the conventional predetermined-resolution approach in visual processing.
Qwen2-VL introduces the Naive Dynamic Resolution mechanism, which enables the model to dynamically process images of varying resolutions into different numbers of visual tokens.
The model also integrates Multimodal Rotary Position Embedding (M-RoPE), facilitating the effective fusion of positional information across text, images, and videos.
arXiv Detail & Related papers (2024-09-18T17:59:32Z)
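A toy calculation of the dynamic-resolution idea described above, where the visual token count follows from the input resolution rather than a fixed canvas. The 14-pixel patch size and 2x2 token merging are assumptions for the sketch, not necessarily Qwen2-VL's configuration:

```python
def visual_token_count(height: int, width: int, patch: int = 14, merge: int = 2) -> int:
    """Tokens produced for one image under patch-based dynamic resolution.
    `patch` and `merge` are illustrative defaults, not the model's spec."""
    h_patches, w_patches = height // patch, width // patch
    return (h_patches // merge) * (w_patches // merge)  # merged into coarser tokens

# Different input resolutions map to different token budgets in one model:
for h, w in [(224, 224), (448, 672), (1344, 896)]:
    print(f"{h}x{w} -> {visual_token_count(h, w)} visual tokens")
```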
- MOWA: Multiple-in-One Image Warping Model [65.73060159073644]
We propose a Multiple-in-One image warping model (named MOWA) in this work.
We mitigate the difficulty of multi-task learning by disentangling the motion estimation at both the region level and pixel level.
To our knowledge, this is the first work that solves multiple practical warping tasks in one single model.
arXiv Detail & Related papers (2024-04-16T16:50:35Z)
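A sketch of the region-level / pixel-level disentangling mentioned above, under assumed layer choices: a coarse flow predicted on a low-resolution lattice is upsampled and then corrected by a dense residual flow:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelWarp(nn.Module):
    """Motion disentangling sketch: a coarse region-level flow plus a
    pixel-level residual. All layers and sizes are illustrative."""
    def __init__(self, c=32, grid=8):
        super().__init__()
        self.enc = nn.Conv2d(3, c, 3, padding=1)
        self.region_head = nn.Sequential(        # coarse flow on a grid x grid lattice
            nn.AdaptiveAvgPool2d(grid), nn.Conv2d(c, 2, 1))
        self.pixel_head = nn.Conv2d(c + 2, 2, 3, padding=1)  # dense residual flow

    def forward(self, img):
        f = self.enc(img)
        coarse = F.interpolate(self.region_head(f), size=f.shape[-2:],
                               mode="bilinear", align_corners=True)
        residual = self.pixel_head(torch.cat([f, coarse], dim=1))
        return coarse + residual                 # full pixel-level flow field
```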
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring the pretrained models to downstream tasks may encounter task discrepancy, because pretraining is formulated as an image classification or object discrimination task.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
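A minimal sketch of the multi-task pretraining setup summarized above: one shared backbone with a head per task. The toy backbone, head shapes, and loss handling are illustrative stand-ins, not the paper's models:

```python
import torch
import torch.nn as nn

class MultiTaskPretrainer(nn.Module):
    """Shared backbone with one head per pretraining task (semantic
    segmentation, instance masks, rotated boxes). Illustrative only."""
    def __init__(self, c=64, n_classes=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, c, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.semseg_head = nn.Conv2d(c, n_classes, 1)  # per-pixel class logits
        self.inst_head = nn.Conv2d(c, 1, 1)            # instance mask logits
        self.rbox_head = nn.Conv2d(c, 5, 1)            # (cx, cy, w, h, angle)

    def forward(self, x):
        f = self.backbone(x)
        return {"semseg": self.semseg_head(f),
                "instance": self.inst_head(f),
                "rbox": self.rbox_head(f)}

# Pretrain with a summed per-task loss; afterwards only `backbone` is kept
# and finetuned on downstream tasks such as scene classification.
```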
- Customize StyleGAN with One Hand Sketch [0.0]
We propose a framework to control StyleGAN imagery with a single user sketch.
We learn a conditional distribution in the latent space of a pre-trained StyleGAN model via energy-based learning.
Our model can generate multi-modal images semantically aligned with the input sketch.
arXiv Detail & Related papers (2023-10-29T09:32:33Z)
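A sketch of the energy-based latent-space idea described above, assuming a frozen pre-trained generator: a small network scores latent codes against sketch features, and samples are drawn with Langevin dynamics. The sampler, network, and sizes are illustrative, not the paper's method:

```python
import torch
import torch.nn as nn

class LatentEnergy(nn.Module):
    """Scores a latent code w conditioned on sketch features (illustrative sizes)."""
    def __init__(self, latent_dim=512, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, w, cond):
        return self.net(torch.cat([w, cond], dim=-1)).squeeze(-1)

def langevin_sample(energy, cond, steps=50, step_size=0.01, latent_dim=512):
    """Draw latents from exp(-E(w | cond)) with unadjusted Langevin dynamics."""
    w = torch.randn(cond.shape[0], latent_dim, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(energy(w, cond).sum(), w)
        noise = torch.randn_like(w) * (2 * step_size) ** 0.5
        w = (w - step_size * grad + noise).detach().requires_grad_(True)
    return w.detach()  # these latents then go through the frozen generator
```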
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
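A sketch of meta-learning over an image collection, in the spirit of the summary above: adapt a copy of the generator to each single image, then move the shared weights toward the adapted ones. The Reptile-style update, the 64-dim latent, and the reconstruction loss are illustrative stand-ins, not the paper's method:

```python
import copy
import torch

def reptile_meta_step(generator, images, inner_steps=5, inner_lr=1e-3, meta_lr=0.1):
    """One pass over the collection; each image is an internal-learning task."""
    meta_state = copy.deepcopy(generator.state_dict())
    for img in images:
        generator.load_state_dict(meta_state)
        opt = torch.optim.SGD(generator.parameters(), lr=inner_lr)
        for _ in range(inner_steps):          # adapt to this single image
            loss = ((generator(torch.randn(1, 64)) - img) ** 2).mean()  # assumed 64-d latent
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                 # meta update toward adapted weights
            for name, p in generator.named_parameters():
                meta_state[name] += meta_lr * (p - meta_state[name])
    generator.load_state_dict(meta_state)
```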
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
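The core of the projection consistency mentioned above is reprojecting pixels through predicted depth and a rigid motion; a minimal sketch, with assumed shapes and no claim to match the paper's implementation:

```python
import torch

def reproject(depth, K, K_inv, T):
    """Back-project pixels with predicted depth, move them by a 4x4 rigid
    transform T (ego- or object-motion), and project back to 2-D.
    Shapes: depth (N,1,H,W); K, K_inv (N,3,3); T (N,4,4). Illustrative only."""
    n, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32, device=depth.device),
                            torch.arange(w, dtype=torch.float32, device=depth.device),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).reshape(1, 3, -1).expand(n, -1, -1)
    cam = (K_inv @ pix) * depth.reshape(n, 1, -1)      # 3-D points, camera frame
    cam_h = torch.cat([cam, depth.new_ones(n, 1, h * w)], dim=1)
    moved = (T @ cam_h)[:, :3]                         # apply the rigid motion
    proj = K @ moved
    return (proj[:, :2] / proj[:, 2:].clamp(min=1e-6)).reshape(n, 2, h, w)

# Instance-aware idea (sketch): compose the ego-motion transform with a
# per-object 6-DoF transform inside each instance mask, then penalize the
# photometric difference between the warped source frame and the target frame.
```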
- SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On [14.198545992098309]
SieveNet is a framework for robust image-based virtual try-on.
We introduce a multi-stage coarse-to-fine warping network to better model fine-grained intricacies.
We also introduce a try-on cloth-conditioned segmentation mask prior that improves the texture transfer network.
arXiv Detail & Related papers (2020-01-17T12:33:54Z)
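A minimal sketch of the multi-stage coarse-to-fine warping mentioned above: a first stage predicts a coarse flow from the cloth and person images, and a second stage predicts a residual flow after seeing the coarsely warped cloth. The layers here are illustrative assumptions, not SieveNet's actual warping modules:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(x, flow):
    """Warp x (N,C,H,W) by a normalized flow field (N,2,H,W)."""
    n, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=x.device),
                            torch.linspace(-1, 1, w, device=x.device), indexing="ij")
    base = torch.stack((xs, ys), -1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(x, base + flow.permute(0, 2, 3, 1), align_corners=True)

class CoarseToFineWarp(nn.Module):
    """Two-stage warping sketch with illustrative toy layers."""
    def __init__(self, c=32):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(6, c, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(c, 2, 3, padding=1))
        self.stage2 = nn.Sequential(nn.Conv2d(9, c, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(c, 2, 3, padding=1))

    def forward(self, cloth, person):
        coarse = self.stage1(torch.cat([cloth, person], dim=1))
        warped = backward_warp(cloth, coarse)                  # coarse alignment
        fine = self.stage2(torch.cat([cloth, person, warped], dim=1))
        return backward_warp(cloth, coarse + fine)             # refined alignment
```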
This list is automatically generated from the titles and abstracts of the papers on this site.