Unpaired Image-to-Image Translation using Adversarial Consistency Loss
- URL: http://arxiv.org/abs/2003.04858v7
- Date: Mon, 18 Jan 2021 12:13:57 GMT
- Title: Unpaired Image-to-Image Translation using Adversarial Consistency Loss
- Authors: Yihao Zhao, Ruihai Wu, Hao Dong
- Abstract summary: We propose a novel adversarial-consistency loss for image-to-image translation.
Our method achieves state-of-the-art results on three challenging tasks: glasses removal, male-to-female translation, and selfie-to-anime translation.
- Score: 6.900819011690599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unpaired image-to-image translation is a class of vision problems whose goal
is to find the mapping between different image domains using unpaired training
data. Cycle-consistency loss is a widely used constraint for such problems.
However, due to the strict pixel-level constraint, it cannot perform geometric
changes, remove large objects, or ignore irrelevant texture. In this paper, we
propose a novel adversarial-consistency loss for image-to-image translation.
This loss does not require the translated image to be translated back to be a
specific source image but can encourage the translated images to retain
important features of the source images and overcome the drawbacks of
cycle-consistency loss noted above. Our method achieves state-of-the-art
results on three challenging tasks: glasses removal, male-to-female
translation, and selfie-to-anime translation.
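The difference between the strict pixel-level cycle constraint the abstract criticizes and a looser adversarial-style consistency term can be sketched roughly as follows. This is a minimal illustration with hypothetical function names, not the paper's actual adversarial-consistency formulation:

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """Pixel-level L1 cycle-consistency (CycleGAN-style): forces
    G_BA(G_AB(x)) to match x exactly, which is what prevents geometric
    changes or removal of large objects."""
    return np.mean(np.abs(x - x_reconstructed))

def adversarial_consistency_loss(d_scores):
    """Illustrative adversarial-style consistency term: instead of
    pixel-level reconstruction, a discriminator scores whether the
    translated image retains important source features; d_scores are its
    probabilities that those features are preserved."""
    eps = 1e-8
    return -np.mean(np.log(d_scores + eps))

x = np.ones((4, 4))
print(cycle_consistency_loss(x, x))  # perfect reconstruction -> 0.0
```

The key design difference: the first loss penalizes any pixel that changes, while the second only asks a learned critic to judge feature preservation, leaving room for geometric change.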
Related papers
- High-Resolution Image Translation Model Based on Grayscale Redefinition [3.6996084306161277]
We propose an innovative method for image translation between different domains.
For high-resolution image translation tasks, we use a grayscale adjustment method to achieve pixel-level translation.
For other tasks, we utilize the Pix2PixHD model with a coarse-to-fine generator, multi-scale discriminator, and improved loss to enhance the image translation performance.
arXiv Detail & Related papers (2024-03-26T12:21:47Z)
- What can we learn about a generated image corrupting its latent representation? [57.1841740328509]
We investigate the hypothesis that we can predict image quality based on its latent representation in the GANs bottleneck.
We achieve this by corrupting the latent representation with noise and generating multiple outputs.
arXiv Detail & Related papers (2022-10-12T14:40:32Z)
- Semi-Supervised Image-to-Image Translation using Latent Space Mapping [37.232496213047845]
We introduce a general framework for semi-supervised image translation.
Our main idea is to learn the translation over the latent feature space instead of the image space.
Because the feature space is low-dimensional, it is easier to find the desired mapping function.
arXiv Detail & Related papers (2022-03-29T05:14:26Z)
- Contrastive Unpaired Translation using Focal Loss for Patch Classification [0.0]
Contrastive Unpaired Translation is a new method for image-to-image translation.
We show that using focal loss in place of cross-entropy loss within the PatchNCE loss can improve the model's performance.
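The focal-loss substitution described above can be illustrated in isolation (a hedged sketch; `gamma=2.0` is the commonly used default, and the integration into PatchNCE itself is not shown):

```python
import numpy as np

def binary_cross_entropy(p, y):
    """Standard binary cross-entropy for a predicted probability p and label y."""
    eps = 1e-8
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def focal_loss(p, y, gamma=2.0):
    """Focal loss: down-weights easy, well-classified examples by the
    modulating factor (1 - p_t)^gamma, focusing training on hard patches."""
    p_t = np.where(y == 1, p, 1 - p)  # probability assigned to the true class
    return -((1 - p_t) ** gamma) * np.log(p_t + 1e-8)

# An easy, well-classified patch (p_t = 0.9) is down-weighted far more
# by focal loss than a hard one (p_t = 0.6):
print(focal_loss(0.9, 1) / binary_cross_entropy(0.9, 1))  # ≈ 0.01
print(focal_loss(0.6, 1) / binary_cross_entropy(0.6, 1))  # ≈ 0.16
```

The ratio between the two losses is exactly the modulating factor `(1 - p_t)**gamma`, which is why confident patches contribute almost nothing to the gradient.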
arXiv Detail & Related papers (2021-09-24T04:08:22Z)
- Unaligned Image-to-Image Translation by Learning to Reweight [40.93678165567824]
Unsupervised image-to-image translation aims at learning the mapping from the source to target domain without using paired images for training.
An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned.
We propose to select images based on importance reweighting and develop a method to learn the weights and perform translation simultaneously and automatically.
arXiv Detail & Related papers (2021-09-25T20:22:33Z)
- The Spatially-Correlative Loss for Various Image Translation Tasks [69.62228639870114]
We propose a novel spatially-correlative loss that is simple, efficient and yet effective for preserving scene structure consistency.
Previous methods attempt this by using pixel-level cycle-consistency or feature-level matching losses.
We show distinct improvement over baseline models in all three modes of unpaired I2I translation: single-modal, multi-modal, and even single-image translation.
arXiv Detail & Related papers (2021-04-02T02:13:30Z)
- BalaGAN: Image Translation Between Imbalanced Domains via Cross-Modal Transfer [53.79505340315916]
We introduce BalaGAN, specifically designed to tackle the domain imbalance problem.
We leverage the latent modalities of the richer domain to turn the image-to-image translation problem into a balanced, multi-class, and conditional translation problem.
We show that BalaGAN outperforms strong baselines of both unconditioned and style-transfer-based image-to-image translation methods.
arXiv Detail & Related papers (2020-10-05T14:16:41Z)
- Contrastive Learning for Unpaired Image-to-Image Translation [64.47477071705866]
In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain.
We propose a framework based on contrastive learning to maximize mutual information between the two.
We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time.
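The patchwise mutual-information objective summarized above can be sketched as a standard InfoNCE loss over patch embeddings. This is an illustrative approximation of the CUT-style contrastive objective, not the paper's exact implementation:

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE over image patches: the output-patch embedding (query)
    should match the corresponding input patch (positive) against other
    input patches (negatives). Minimizing this maximizes a lower bound
    on the mutual information between corresponding patches."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0

q = np.array([1.0, 0.0, 0.0])
negs = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(info_nce(q, q, negs))  # matching patch -> loss near 0
```

The temperature `tau=0.07` is a typical choice for contrastive losses; the key property is that the loss is small only when the query is closer to its own input patch than to any other patch.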
arXiv Detail & Related papers (2020-07-15T02:01:14Z)
- COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder [70.23358875904891]
Unsupervised image-to-image translation aims to learn a mapping of an image in a given domain to an analogous image in a different domain.
We propose a new few-shot image translation model, COCO-FUNIT, which computes the style embedding of the example images conditioned on the input image.
Our model shows effectiveness in addressing the content loss problem.
arXiv Detail & Related papers (2020-07-15T02:01:14Z)
- Semi-supervised Learning for Few-shot Image-to-Image Translation [89.48165936436183]
We propose a semi-supervised method for few-shot image translation, called SEMIT.
Our method achieves excellent results on four different datasets using as little as 10% of the source labels.
arXiv Detail & Related papers (2020-03-30T22:46:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.