CT-Net: Complementary Transfering Network for Garment Transfer with
Arbitrary Geometric Changes
- URL: http://arxiv.org/abs/2105.05497v1
- Date: Wed, 12 May 2021 08:07:07 GMT
- Title: CT-Net: Complementary Transfering Network for Garment Transfer with
Arbitrary Geometric Changes
- Authors: Fan Yang, Guosheng Lin
- Abstract summary: We propose Complementary Transfering Network (CT-Net) to adaptively model different levels of geometric changes and transfer outfits between different people.
Our network synthesizes high-quality garment transfer images and significantly outperforms state-of-the-art methods both qualitatively and quantitatively.
- Score: 49.06982066976623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Garment transfer shows great potential in realistic applications,
with the goal of transferring outfits across images of different people.
However, garment transfer between images with heavy misalignments or severe
occlusions remains a challenge. In this work, we propose Complementary
Transfering Network (CT-Net) to adaptively model different levels of geometric
changes and transfer outfits between different people. Specifically, CT-Net
consists of
three modules: 1) A complementary warping module first estimates two
complementary warpings to transfer the desired clothes in different
granularities. 2) A layout prediction module is proposed to predict the target
layout, which guides the preservation or generation of the body parts in the
synthesized images. 3) A dynamic fusion module adaptively combines the
advantages of the complementary warpings to render the garment transfer
results. Extensive experiments conducted on the DeepFashion dataset
demonstrate that our network synthesizes high-quality garment transfer images
and significantly outperforms state-of-the-art methods both qualitatively and
quantitatively.
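To make the three-module data flow concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes. Everything inside the modules (plain convolutions, a smoothed stand-in for the coarse warp, the channel counts) is a hypothetical placeholder of ours, not the paper's architecture; only the coarse/dense warping, layout prediction, and mask-based fusion structure follows the abstract.

```python
# Hypothetical sketch of the CT-Net data flow; module internals are toy
# stand-ins, not the authors' actual networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

class ComplementaryWarping(nn.Module):
    """Estimates two complementary warpings of the source garment: a coarse
    one (robust to large geometric changes) and a dense per-pixel one
    (better at preserving texture detail)."""
    def __init__(self):
        super().__init__()
        self.flow_net = nn.Conv2d(6, 2, 3, padding=1)  # garment + pose -> flow

    def forward(self, garment, target_pose):
        b, _, h, w = garment.shape
        flow = self.flow_net(torch.cat([garment, target_pose], 1))
        flow = flow.permute(0, 2, 3, 1)                       # (B, H, W, 2)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], -1).to(garment).expand(b, h, w, 2)
        dense = F.grid_sample(garment, grid + flow, align_corners=True)
        # Toy stand-in for a coarse (e.g. TPS-like) warp: a smoothed version.
        coarse = F.interpolate(F.avg_pool2d(dense, 4), size=(h, w),
                               mode="bilinear", align_corners=True)
        return coarse, dense

class LayoutPrediction(nn.Module):
    """Predicts the target body-part layout that decides which regions are
    preserved from the person image and which are re-synthesized."""
    def __init__(self, n_parts=8):
        super().__init__()
        self.net = conv_block(6, n_parts)

    def forward(self, person, target_pose):
        return self.net(torch.cat([person, target_pose], 1)).softmax(1)

class DynamicFusion(nn.Module):
    """Blends the two warpings with a predicted per-pixel mask, then renders
    the final transfer result guided by the layout."""
    def __init__(self, n_parts=8):
        super().__init__()
        self.mask_net = nn.Conv2d(6, 1, 3, padding=1)
        self.render = conv_block(3 + n_parts, 3)

    def forward(self, coarse, dense, layout):
        m = torch.sigmoid(self.mask_net(torch.cat([coarse, dense], 1)))
        fused = m * coarse + (1 - m) * dense
        return self.render(torch.cat([fused, layout], 1))

class CTNetSketch(nn.Module):
    def __init__(self, n_parts=8):
        super().__init__()
        self.warp = ComplementaryWarping()
        self.layout = LayoutPrediction(n_parts)
        self.fuse = DynamicFusion(n_parts)

    def forward(self, person, garment, target_pose):
        coarse, dense = self.warp(garment, target_pose)
        layout = self.layout(person, target_pose)
        return self.fuse(coarse, dense, layout)

# Person/garment images and a rendered pose map, all 3-channel:
out = CTNetSketch()(torch.randn(1, 3, 256, 192),
                    torch.randn(1, 3, 256, 192),
                    torch.randn(1, 3, 256, 192))   # -> (1, 3, 256, 192)
```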
Related papers
- Cross-domain and Cross-dimension Learning for Image-to-Graph
Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z) - Mutual Information-driven Triple Interaction Network for Efficient Image
Dehazing [54.168567276280505]
We propose a novel Mutual Information-driven Triple interaction Network (MITNet) for image dehazing.
The first stage, named amplitude-guided haze removal, recovers the amplitude spectrum of the hazy images.
The second stage, named phase-guided structure refinement, learns the transformation and refinement of the phase spectrum.
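The amplitude/phase split the two stages operate on is simply the polar decomposition of the image's 2D Fourier spectrum; a short illustration of that decomposition (generic FFT code, not MITNet's implementation):

```python
# Illustration of the amplitude/phase decomposition the two stages act on.
import torch

hazy = torch.rand(1, 3, 256, 256)          # a hazy image in [0, 1]
spec = torch.fft.fft2(hazy)
amplitude, phase = spec.abs(), spec.angle()

# Stage 1 would replace `amplitude` with a restored amplitude spectrum
# (the premise being that haze mostly perturbs amplitude); stage 2 then
# refines structure via the phase spectrum. Recombining is lossless:
recon = torch.fft.ifft2(torch.polar(amplitude, phase)).real
assert torch.allclose(recon, hazy, atol=1e-4)
```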
arXiv Detail & Related papers (2023-08-14T08:23:58Z) - Unsupervised 3D Pose Transfer with Cross Consistency and Dual
Reconstruction [50.94171353583328]
The goal of 3D pose transfer is to transfer the pose from the source mesh to the target mesh while preserving the identity information.
Deep learning-based methods have improved the efficiency and performance of 3D pose transfer.
We present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer.
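Reading the title's "cross consistency and dual reconstruction" as a cycle-style training signal, the unsupervised objective can be sketched as: repose identity A into B's pose, repose the result back, and require A to be recovered. The generator `G` and the loss below are our generic stand-ins, not the authors' code:

```python
# Generic dual-reconstruction / cross-consistency loss for unsupervised pose
# transfer. G(identity, pose) is any generator taking two (B, N, 3) vertex
# tensors and reposing the first mesh's identity into the second mesh's pose.
import torch
import torch.nn.functional as F

def dual_reconstruction_loss(G, verts_a, verts_b):
    a_in_pose_b = G(verts_a, verts_b)      # transfer A's identity to B's pose
    a_recovered = G(a_in_pose_b, verts_a)  # transfer back to A's own pose
    cycle = F.l1_loss(a_recovered, verts_a)
    # Self-reconstruction: reposing A into its own pose must be a no-op.
    self_rec = F.l1_loss(G(verts_a, verts_a), verts_a)
    return cycle + self_rec
```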
arXiv Detail & Related papers (2022-11-18T15:09:56Z) - Playing Lottery Tickets in Style Transfer Models [57.55795986289975]
Style transfer has achieved great success and attracted a wide range of attention from both academic and industrial communities.
However, the dependence on a large VGG-based autoencoder leaves existing style transfer models with high parameter complexity.
In this work, we perform the first empirical study to verify whether such trainable sparse subnetworks (lottery tickets) also exist in style transfer models.
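For context, the lottery-ticket procedure such studies test is iterative magnitude pruning with weight rewinding; a bare-bones generic sketch (not the paper's implementation):

```python
# Bare-bones iterative magnitude pruning (the standard lottery-ticket
# recipe), applicable to any model's weight matrices; illustrative only.
import copy
import torch

def find_ticket(model, train_fn, rounds=3, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())      # for rewinding
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}                            # skip biases/norms
    for _ in range(rounds):
        train_fn(model, masks)       # caller trains with masks applied
        for n, p in model.named_parameters():
            if n not in masks:
                continue
            alive = p.data[masks[n].bool()].abs()       # surviving weights
            thresh = alive.quantile(prune_frac)         # prune lowest 20%
            masks[n] *= (p.data.abs() > thresh).float()
        model.load_state_dict(init_state)               # rewind weights
    return masks                                        # the "ticket"
```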
arXiv Detail & Related papers (2022-03-25T17:43:18Z) - Class-Aware Generative Adversarial Transformers for Medical Image
Segmentation [39.14169989603906]
We present CA-GANformer, a novel type of generative adversarial transformer, for medical image segmentation.
First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations.
We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures.
arXiv Detail & Related papers (2022-01-26T03:50:02Z) - Progressive and Aligned Pose Attention Transfer for Person Image
Generation [59.87492938953545]
This paper proposes a new generative adversarial network for pose transfer, i.e., transferring the pose of a given person to a target pose.
We use two types of blocks, namely the Pose-Attentional Transfer Block (PATB) and the Aligned Pose-Attentional Transfer Block (APATB).
We verify the efficacy of the model on the Market-1501 and DeepFashion datasets, using quantitative and qualitative measures.
arXiv Detail & Related papers (2021-03-22T07:24:57Z) - Two-Stream Appearance Transfer Network for Person Image Generation [16.681839931864886]
Generative adversarial networks (GANs) widely used for image generation and translation rely on spatially local and translation-equivariant operators.
This paper introduces a novel two-stream appearance transfer network (2s-ATN) to address this challenge.
It is a multi-stage architecture consisting of a source stream and a target stream. Each stage features an appearance transfer module and several two-stream feature fusion modules.
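The appearance transfer module contrasts with those local operators by letting every target position gather appearance from all source positions; a plausible cross-attention reading of one stage (our illustration, not the authors' code):

```python
# One two-stream stage, sketched as non-local cross-attention: each target
# location attends over all source locations to pull in appearance.
import torch
import torch.nn as nn

class AppearanceTransfer(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch, 1)   # queries from the target stream
        self.k = nn.Conv2d(ch, ch, 1)   # keys from the source stream
        self.v = nn.Conv2d(ch, ch, 1)   # values (appearance) from source

    def forward(self, src_feat, tgt_feat):
        b, c, h, w = tgt_feat.shape
        q = self.q(tgt_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.k(src_feat).flatten(2)                   # (B, C, HW)
        v = self.v(src_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)    # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return tgt_feat + out     # fuse transferred appearance into target
```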
arXiv Detail & Related papers (2020-11-09T04:21:02Z) - SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On [14.198545992098309]
SieveNet is a framework for robust image-based virtual try-on.
We introduce a multi-stage coarse-to-fine warping network to better model fine-grained intricacies.
We also introduce a segmentation mask prior, conditioned on the try-on cloth, to improve the texture transfer network.
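Coarse-to-fine warping of this kind typically has each stage predict a residual correction on top of the previous stage's warp; the sketch below illustrates that general pattern with raw flow fields (a simplification: SieveNet itself regresses warp parameters rather than per-pixel flow):

```python
# Minimal coarse-to-fine warping loop: each stage predicts a residual flow
# that refines the previous warp. Illustrates the pattern only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineWarp(nn.Module):
    def __init__(self, stages=2):
        super().__init__()
        # each stage sees cloth + person representation + current warp result
        self.stages = nn.ModuleList(
            nn.Conv2d(9, 2, 3, padding=1) for _ in range(stages))

    def forward(self, cloth, person):
        b, _, h, w = cloth.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack([xs, ys], -1).to(cloth).expand(b, h, w, 2)
        flow = torch.zeros_like(grid)
        warped = cloth
        for stage in self.stages:
            residual = stage(torch.cat([cloth, person, warped], 1))
            flow = flow + residual.permute(0, 2, 3, 1)    # refine the warp
            warped = F.grid_sample(cloth, grid + flow, align_corners=True)
        return warped

warped = CoarseToFineWarp()(torch.randn(1, 3, 256, 192),
                            torch.randn(1, 3, 256, 192))
```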
arXiv Detail & Related papers (2020-01-17T12:33:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.