Image-to-image Translation as a Unique Source of Knowledge
- URL: http://arxiv.org/abs/2112.01873v1
- Date: Fri, 3 Dec 2021 12:12:04 GMT
- Title: Image-to-image Translation as a Unique Source of Knowledge
- Authors: Alejandro D. Mousist
- Abstract summary: This article performs translations of labelled datasets from the optical domain to the SAR domain with different state-of-the-art I2I algorithms. Stacking is proposed as a way of combining the knowledge learned from the different I2I translations and is evaluated against single models.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-to-image (I2I) translation is an established way of translating data
from one domain to another, but when the domains are as dissimilar as SAR and
optical satellite imagery, it is still not clear how usable the translated images
are in the target domain, nor how much of the origin domain is carried over into
the target domain. This article addresses this by translating labelled datasets
from the optical domain to the SAR domain with different state-of-the-art I2I
algorithms, learning from the transferred features in the destination domain, and
then evaluating how much of the original dataset was transferred. In addition,
stacking is proposed as a way of combining the knowledge learned from the
different I2I translations and is evaluated against single models.
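The stacking idea in the abstract can be sketched in a few lines. The paper does not publish code, so everything below is a synthetic stand-in under stated assumptions: three "base model" score columns imitate classifiers trained on three different I2I translations of the same labelled dataset, and a logistic meta-model is stacked on their outputs and compared against a single base model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 3
y = rng.integers(0, 2, n)  # synthetic labels

# Noisy per-model scores correlated with the true label: each column plays
# the role of one classifier trained on a different I2I-translated dataset.
base_scores = np.clip(0.6 * y[:, None] + rng.normal(0.2, 0.25, (n, k)), 0.0, 1.0)

# Stacking: fit a logistic-regression meta-model on the base models' scores
# with plain gradient descent.
w, b = np.zeros(k), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(base_scores @ w + b)))
    grad = p - y
    w -= 0.5 * base_scores.T @ grad / n
    b -= 0.5 * grad.mean()

stacked_acc = ((p > 0.5).astype(int) == y).mean()
single_acc = ((base_scores[:, 0] > 0.5).astype(int) == y).mean()
print(f"single: {single_acc:.2f}  stacked: {stacked_acc:.2f}")
```

On this toy data the stacked meta-model combines the partially independent errors of the base models, which is the same mechanism the article evaluates against single models.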
Related papers
- Domain-Scalable Unpaired Image Translation via Latent Space Anchoring [88.7642967393508]
Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data.
We propose a new domain-scalable UNIT method, termed as latent space anchoring.
Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models.
In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning.
arXiv Detail & Related papers (2023-06-26T17:50:02Z)
- Domain Agnostic Image-to-image Translation using Low-Resolution Conditioning [6.470760375991825]
We propose a domain-agnostic I2I method for fine-grained problems, where the domains are related.
We present a novel approach that trains the generative model to produce images sharing the distinctive information of the associated source image.
We validate our method on the CelebA-HQ and AFHQ datasets by demonstrating improvements in terms of visual quality.
arXiv Detail & Related papers (2023-05-08T19:58:49Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Disentangled Unsupervised Image Translation via Restricted Information Flow [61.44666983942965]
Many state-of-the-art methods hard-code the desired shared-vs-specific split into their architecture.
We propose a new method that does not rely on inductive architectural biases.
We show that the proposed method achieves consistently high manipulation accuracy across two synthetic and one natural dataset.
arXiv Detail & Related papers (2021-11-26T00:27:54Z)
- Semantic Consistency in Image-to-Image Translation for Unsupervised Domain Adaptation [22.269565708490465]
Unsupervised Domain Adaptation (UDA) aims to adapt models trained on a source domain to a new target domain where no labelled data is available.
We propose a semantically consistent image-to-image translation method in combination with a consistency regularisation method for UDA.
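The consistency-regularisation idea can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and the mean-squared penalty are illustrative assumptions: the model is encouraged to predict the same label distribution for a source image and its I2I translation into the target domain.

```python
import numpy as np

def consistency_loss(p_source, p_translated):
    """Mean squared difference between the label distributions the model
    predicts for a source image and for its I2I-translated counterpart."""
    return float(np.mean((p_source - p_translated) ** 2))

# Two images, two classes: predictions on originals vs. their translations.
p_src = np.array([[0.9, 0.1], [0.2, 0.8]])
p_trn = np.array([[0.8, 0.2], [0.3, 0.7]])
print(round(consistency_loss(p_src, p_trn), 4))  # → 0.01
```

Adding such a term to the task loss penalises the network when the translation changes its prediction, which is the regularisation effect the summary describes.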
arXiv Detail & Related papers (2021-11-05T14:22:20Z)
- Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator [2.1377923666134118]
Unsupervised image-to-image translation is used to transform images from a source domain to generate images in a target domain without using source-target image pairs.
We propose a new method that uses a single shared discriminator between the two GANs, which improves the overall efficacy.
Our results indicate that even without adding attention mechanisms, our method performs on par with attention-based methods and generates images of comparable quality.
arXiv Detail & Related papers (2021-02-09T08:26:23Z)
- Deep Symmetric Adaptation Network for Cross-modality Medical Image Segmentation [40.95845629932874]
Unsupervised domain adaptation (UDA) methods have shown promising performance on cross-modality medical image segmentation tasks.
We present a novel deep symmetric architecture of UDA for medical image segmentation, which consists of a segmentation sub-network and two symmetric source and target domain translation sub-networks.
Our method has remarkable advantages compared to the state-of-the-art methods in both cross-modality Cardiac and BraTS segmentation tasks.
arXiv Detail & Related papers (2021-01-18T02:54:30Z)
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896]
We present an effectively signed attribute vector, which enables continuous translation on diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
arXiv Detail & Related papers (2020-11-02T18:59:03Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.