TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation
- URL: http://arxiv.org/abs/2004.08769v1
- Date: Sun, 19 Apr 2020 05:07:22 GMT
- Title: TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation
- Authors: Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe, Elisa Ricci
- Abstract summary: We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
- Score: 82.52514546441247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most domain adaptation methods consider the problem of transferring knowledge
to the target domain from a single source dataset. However, in practical
applications, we typically have access to multiple sources. In this paper we
propose the first approach for Multi-Source Domain Adaptation (MSDA) based on
Generative Adversarial Networks. Our method is inspired by the observation that
the appearance of a given image depends on three factors: the domain, the style
(characterized in terms of low-level feature variations) and the content. For
this reason we propose to project the image features onto a space where only
the dependence on the content is kept, and then re-project this invariant
representation onto the pixel space using the target domain and style. In this
way, new labeled images can be generated which are used to train a final target
classifier. We test our approach using common MSDA benchmarks, showing that it
outperforms state-of-the-art methods.
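The pipeline described in the abstract can be pictured concretely: encode an image, project the features onto a content-only space by stripping domain- and style-specific statistics, then re-project that invariant representation back to pixel space conditioned on the target domain and a style code. The minimal PyTorch sketch below illustrates this flow; the module names, the use of instance normalization as the content projection, and all layer sizes are illustrative assumptions, not the authors' actual TriGAN architecture.

```python
# Minimal sketch of the content-projection / re-projection idea from the abstract.
# All design choices below (instance norm as "style stripping", embedding sizes,
# layer counts) are assumptions for illustration, not the paper's exact model.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to features, then removes per-sample style statistics
    (instance normalization) so only content-dependent information remains."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Instance norm without affine parameters acts as a crude "projection"
        # that discards low-level style statistics (per-channel mean/variance).
        self.strip_style = nn.InstanceNorm2d(channels * 2, affine=False)

    def forward(self, x):
        return self.strip_style(self.conv(x))

class DomainStyleDecoder(nn.Module):
    """Re-projects content features to pixel space, conditioned on a target
    domain label and a style code (broadcast as extra feature maps)."""
    def __init__(self, channels=64, num_domains=4, style_dim=8):
        super().__init__()
        self.domain_embed = nn.Embedding(num_domains, 16)
        in_ch = channels * 2 + 16 + style_dim
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(in_ch, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content, domain_id, style_code):
        b, _, h, w = content.shape
        dom = self.domain_embed(domain_id).view(b, -1, 1, 1).expand(-1, -1, h, w)
        sty = style_code.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.deconv(torch.cat([content, dom, sty], dim=1))

# Usage: translate a labeled source image into the target domain; the class label
# is preserved, so the synthesized image can help train the target classifier.
enc, dec = ContentEncoder(), DomainStyleDecoder()
src_img = torch.randn(2, 3, 32, 32)        # batch of source-domain images
tgt_domain = torch.tensor([3, 3])          # index of the target domain
style = torch.randn(2, 8)                  # sampled target style code
fake_tgt = dec(enc(src_img), tgt_domain, style)  # pseudo-target images, labels kept
```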
Related papers
- Domain Agnostic Image-to-image Translation using Low-Resolution Conditioning [6.470760375991825]
We propose a domain-agnostic i2i method for fine-grained problems, where the domains are related.
We present a novel approach that relies on training the generative model to produce images that share distinctive information with the associated source image.
We validate our method on the CelebA-HQ and AFHQ datasets by demonstrating improvements in terms of visual quality.
arXiv Detail & Related papers (2023-05-08T19:58:49Z)
- I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work.
The key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z)
- Instance-level Heterogeneous Domain Adaptation for Limited-labeled Sketch-to-Photo Retrieval [36.32367182571164]
We propose an Instance-level Heterogeneous Domain Adaptation (IHDA) framework.
We apply the fine-tuning strategy for identity label learning, aiming to transfer the instance-level knowledge in an inductive transfer manner.
Experiments show that our method has set a new state of the art on three sketch-to-photo image retrieval benchmarks without extra annotations.
arXiv Detail & Related papers (2022-11-26T08:50:08Z)
- Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only.
We present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z)
- Disentangled Unsupervised Image Translation via Restricted Information Flow [61.44666983942965]
Many state-of-the-art methods hard-code the desired shared-vs-specific split into their architecture.
We propose a new method that does not rely on inductive architectural biases.
We show that the proposed method achieves consistently high manipulation accuracy across two synthetic and one natural dataset.
arXiv Detail & Related papers (2021-11-26T00:27:54Z)
- Deep Symmetric Adaptation Network for Cross-modality Medical Image Segmentation [40.95845629932874]
Unsupervised domain adaptation (UDA) methods have shown their promising performance in the cross-modality medical image segmentation tasks.
We present a novel deep symmetric architecture of UDA for medical image segmentation, which consists of a segmentation sub-network and two symmetric source and target domain translation sub-networks.
Our method has remarkable advantages compared to the state-of-the-art methods in both cross-modality Cardiac and BraTS segmentation tasks.
arXiv Detail & Related papers (2021-01-18T02:54:30Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- Multi-source Domain Adaptation for Visual Sentiment Classification [92.53780541232773]
We propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN).
To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution.
Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
arXiv Detail & Related papers (2020-01-12T08:37:42Z)