GAIT: Gradient Adjusted Unsupervised Image-to-Image Translation
- URL: http://arxiv.org/abs/2009.00878v1
- Date: Wed, 2 Sep 2020 08:04:00 GMT
- Title: GAIT: Gradient Adjusted Unsupervised Image-to-Image Translation
- Authors: Ibrahim Batuhan Akkaya and Ugur Halici
- Abstract summary: An adversarial loss is utilized to match the distributions of the translated and target image sets.
This may create artifacts if two domains have different marginal distributions, for example, in uniform areas.
We propose an unsupervised IIT method that preserves the uniform regions after the translation.
- Score: 5.076419064097734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-to-image translation (IIT) has made much progress recently with the
development of adversarial learning. In most of the recent work, an adversarial
loss is utilized to match the distributions of the translated and target image
sets. However, this may create artifacts if two domains have different marginal
distributions, for example, in uniform areas. In this work, we propose an
unsupervised IIT method that preserves the uniform regions after the
translation. The gradient adjustment loss, which is the L2 norm between the
Sobel response of the target image and the adjusted Sobel response of the
source images, is utilized. The proposed method is validated on the
jellyfish-to-Haeckel dataset, which was prepared to demonstrate this problem
and contains images with different background distributions. We demonstrate,
both qualitatively and quantitatively, that our method achieves a performance
gain over the baseline, showing the effectiveness of the proposed approach.
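As a rough illustration of the loss described in the abstract, here is a minimal sketch in Python. The abstract does not specify how the source Sobel response is "adjusted", so the `adjust` scaling factor and the function names below are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def sobel_response(img):
    # Gradient magnitude from horizontal and vertical Sobel filters.
    gx = ndimage.sobel(img, axis=0, mode="reflect")
    gy = ndimage.sobel(img, axis=1, mode="reflect")
    return np.hypot(gx, gy)

def gradient_adjustment_loss(translated, source, adjust=1.0):
    # L2 norm between the Sobel response of the translated image and a
    # (hypothetically scaled) Sobel response of the source image.
    diff = sobel_response(translated) - adjust * sobel_response(source)
    return np.sqrt(np.sum(diff ** 2))
```

In training, such a term would be added to the adversarial loss so that the edge structure of the source, and by contrast its flat regions, is carried into the translation.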
Related papers
- Masked Discriminators for Content-Consistent Unpaired Image-to-Image Translation [1.3654846342364308]
A common goal of unpaired image-to-image translation is to preserve content consistency between source images and translated images.
We show that masking the inputs of a global discriminator for both domains with a content-based mask is sufficient to reduce content inconsistencies significantly.
In our experiments, we show that our method achieves state-of-the-art performance in photorealistic sim-to-real translation and weather translation.
arXiv Detail & Related papers (2023-09-22T21:32:07Z)
- Conditional Score Guidance for Text-Driven Image-to-Image Translation [52.73564644268749]
We present a novel algorithm for text-driven image-to-image translation based on a pretrained text-to-image diffusion model.
Our method aims to generate a target image by selectively editing the regions of interest in a source image.
arXiv Detail & Related papers (2023-05-29T10:48:34Z)
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated on the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z)
- Dual Contrastive Learning for Unsupervised Image-to-Image Translation [16.759958400617947]
Unsupervised image-to-image translation tasks aim to find a mapping between a source domain X and a target domain Y from unpaired training data.
Contrastive learning for unpaired image-to-image translation yields state-of-the-art results.
We propose a novel method based on contrastive learning and a dual learning setting to infer an efficient mapping between unpaired data.
arXiv Detail & Related papers (2021-04-15T18:00:22Z)
- Multiple GAN Inversion for Exemplar-based Image-to-Image Translation [0.0]
We propose Multiple GAN Inversion for Exemplar-based Image-to-Image Translation.
Our novel Multiple GAN Inversion avoids human intervention by using a self-deciding algorithm to choose the number of layers.
Experimental results show the advantage of the proposed method over existing state-of-the-art exemplar-based image-to-image translation methods.
arXiv Detail & Related papers (2021-03-26T13:46:14Z)
- Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator [2.1377923666134118]
Unsupervised image-to-image translation is used to transform images from a source domain to generate images in a target domain without using source-target image pairs.
We propose a new method that uses a single shared discriminator between the two GANs, which improves the overall efficacy.
Our results indicate that even without adding attention mechanisms, our method performs at par with attention-based methods and generates images of comparable quality.
arXiv Detail & Related papers (2021-02-09T08:26:23Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- Contrastive Learning for Unpaired Image-to-Image Translation [64.47477071705866]
In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain.
We propose a framework based on contrastive learning to maximize mutual information between the two.
We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time (a minimal sketch of such a patch-wise contrastive objective follows this list).
arXiv Detail & Related papers (2020-07-30T17:59:58Z)
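To make the patch-wise contrastive idea above concrete, here is a minimal sketch of an InfoNCE-style loss over patch embeddings. The embeddings are assumed to come from some encoder; the function names, temperature value, and NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    # InfoNCE over patch embeddings: the output patch (query) should match
    # the input patch at the same location (positive) rather than patches
    # drawn from other locations (negatives).
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cosine(query, positive)] +
                      [cosine(query, n) for n in negatives]) / tau
    logits -= logits.max()  # stabilize the softmax numerically
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive at index 0
```

Minimizing this loss maximizes a lower bound on the mutual information between corresponding input and output patches, which is what ties the output content to the input without paired data or a second (cycle) generator.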
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.