Image-to-image Mapping with Many Domains by Sparse Attribute Transfer
- URL: http://arxiv.org/abs/2006.13291v1
- Date: Tue, 23 Jun 2020 19:52:23 GMT
- Title: Image-to-image Mapping with Many Domains by Sparse Attribute Transfer
- Authors: Matthew Amodio, Rim Assouel, Victor Schmidt, Tristan Sylvain, Smita
Krishnaswamy, Yoshua Bengio
- Abstract summary: Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
The current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
- Score: 71.28847881318013
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised image-to-image translation consists of learning a pair of
mappings between two domains without known pairwise correspondences between
points. The current convention is to approach this task with cycle-consistent
GANs: using a discriminator to encourage the generator to change the image to
match the target domain, while training the generator to be inverted with
another mapping. While ending up with paired inverse functions may be a good
end result, enforcing this restriction at all times during training can be a
hindrance to effective modeling. We propose an alternate approach that directly
restricts the generator to performing a simple sparse transformation in a
latent layer, motivated by recent work from cognitive neuroscience suggesting
an architectural prior on representations corresponding to consciousness. Our
biologically motivated approach leads to representations more amenable to
transformation by disentangling high-level abstract concepts in the latent
space. We demonstrate that image-to-image domain translation with many
different domains can be learned more effectively with our architecturally
constrained, simple transformation than with previous unconstrained
architectures that rely on a cycle-consistency loss.
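As a rough illustration of the contrast drawn above, the sketch below pairs a conventional cycle-consistency term with a generator that performs a simple sparse, additive edit in a latent layer. It is a minimal PyTorch sketch under assumptions of our own (the per-domain shift vector, the L1 sparsity penalty, and names such as SparseLatentTranslator), not the authors' implementation.

```python
import torch
import torch.nn as nn

class SparseLatentTranslator(nn.Module):
    """Encode an image, apply a sparse shift in a latent layer, decode (illustrative)."""
    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 latent_dim: int, num_domains: int):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        # One learnable latent shift per target domain; an L1 penalty keeps it sparse.
        self.domain_shifts = nn.Parameter(torch.zeros(num_domains, latent_dim))

    def forward(self, x: torch.Tensor, target_domain: int) -> torch.Tensor:
        z = self.encoder(x)                                # latent code of the input image
        z_edited = z + self.domain_shifts[target_domain]   # simple sparse transformation
        return self.decoder(z_edited)

def sparsity_penalty(model: SparseLatentTranslator, weight: float = 1e-3) -> torch.Tensor:
    # Encourages most latent coordinates to remain unchanged by the transfer.
    return weight * model.domain_shifts.abs().sum()

def cycle_consistency_loss(x: torch.Tensor, g_ab: nn.Module, g_ba: nn.Module) -> torch.Tensor:
    # The conventional CycleGAN-style constraint: mapping A -> B -> A should reconstruct x.
    return torch.mean(torch.abs(g_ba(g_ab(x)) - x))
```

The point of the contrast: the sparsity penalty constrains the generator architecturally at every step, whereas the cycle-consistency loss only encourages the two mappings to invert each other.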
Related papers
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to regularize the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z)
- Multi-cropping Contrastive Learning and Domain Consistency for Unsupervised Image-to-Image Translation [5.562419999563734]
We propose a novel unsupervised image-to-image translation framework based on multi-cropping contrastive learning and domain consistency, called MCDUT.
In many image-to-image translation tasks, our method achieves state-of-the-art results, and its advantages are demonstrated through comparison experiments and ablation studies.
arXiv Detail & Related papers (2023-04-24T16:20:28Z)
- A Domain Gap Aware Generative Adversarial Network for Multi-domain Image Translation [22.47113158859034]
The paper proposes a unified model to translate images across multiple domains with significant domain gaps.
With a single unified generator, the model can maintain consistency over the global shapes as well as the local texture information across multiple domains.
arXiv Detail & Related papers (2021-10-21T00:33:06Z)
- StEP: Style-based Encoder Pre-training for Multi-modal Image Synthesis [68.3787368024951]
We propose a novel approach for multi-modal Image-to-image (I2I) translation.
We learn a latent embedding, jointly with the generator, that models the variability of the output domain.
Specifically, we pre-train a generic style encoder using a novel proxy task to learn an embedding of images, from arbitrary domains, into a low-dimensional style latent space.
arXiv Detail & Related papers (2021-04-14T19:58:24Z)
- Flow-based Deformation Guidance for Unpaired Multi-Contrast MRI Image-to-Image Translation [7.8333615755210175]
In this paper, we introduce a novel approach to unpaired image-to-image translation based on the invertible architecture.
We utilize the temporal information between consecutive slices to provide more constraints to the optimization for transforming one domain to another in unpaired medical images.
arXiv Detail & Related papers (2020-12-03T09:10:22Z)
- In-Domain GAN Inversion for Real Image Editing [56.924323432048304]
A common practice for applying a trained GAN generator to a real image is to first invert the image back to a latent code.
Existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space.
We propose an in-domain GAN inversion approach, which faithfully reconstructs the input image and ensures the inverted code is semantically meaningful for editing (a generic inversion sketch appears after this list).
arXiv Detail & Related papers (2020-03-31T18:20:18Z)
- Fast Symmetric Diffeomorphic Image Registration with Convolutional Neural Networks [11.4219428942199]
We present a novel, efficient unsupervised symmetric image registration method.
We evaluate our method on 3D image registration with a large scale brain image dataset.
arXiv Detail & Related papers (2020-03-20T22:07:24Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields (a minimal sketch of this idea appears after this list).
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
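The two in-domain GAN inversion entries above build on the generic practice of inverting a real image into a latent code by optimization. The sketch below shows only that generic practice; the domain_regularizer placeholder stands in for the papers' encoder-based, in-domain regularization, and all names and hyperparameters are assumptions.

```python
import torch

def invert_image(generator, x_real, latent_dim=512, steps=500, lr=0.05,
                 domain_regularizer=None):
    """Optimize a latent code z so that generator(z) reconstructs x_real (illustrative)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_fake = generator(z)
        loss = ((x_fake - x_real) ** 2).mean()       # pixel-level reconstruction
        if domain_regularizer is not None:
            loss = loss + domain_regularizer(z)      # keep z in the generator's semantic domain
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```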
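For the Image Fine-grained Inpainting entry, the sketch below illustrates the general idea of combining dilated convolutions at several rates to enlarge the receptive field. Channel widths, dilation rates, and the residual fusion are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Parallel dilated convolutions fused together to widen the receptive field (illustrative)."""
    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        # A 1x1 convolution fuses the concatenated branches back to the input width.
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1)) + x  # residual connection preserves local detail
```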
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.