Conditional Coupled Generative Adversarial Networks for Zero-Shot Domain
Adaptation
- URL: http://arxiv.org/abs/2009.05228v1
- Date: Fri, 11 Sep 2020 04:36:42 GMT
- Title: Conditional Coupled Generative Adversarial Networks for Zero-Shot Domain
Adaptation
- Authors: Jinghua Wang and Jianmin Jiang
- Abstract summary: Machine learning models trained in one domain perform poorly in other domains due to domain shift.
We propose conditional coupled generative adversarial networks (CoCoGAN) by extending coupled generative adversarial networks (CoGAN) into a conditional model.
The proposed CoCoGAN captures the joint distribution of dual-domain samples in two different tasks, i.e., the relevant task (RT) and an irrelevant task (IRT).
- Score: 31.334196673143257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models trained in one domain perform poorly in other
domains due to domain shift. Domain adaptation techniques address this problem
by training transferable models from the label-rich source domain to the
label-scarce target domain. Unfortunately, most existing domain adaptation
techniques rely on the availability of target-domain data, which limits their
applicability to a narrow range of computer vision problems. In this paper, we
tackle the challenging zero-shot domain adaptation (ZSDA) problem, where
target-domain data is unavailable at the training stage. For this purpose, we
propose conditional coupled generative adversarial networks (CoCoGAN) by
extending coupled generative adversarial networks (CoGAN) into a conditional
model. Compared with the existing state of the art, our proposed CoCoGAN is
able to capture the joint distribution of dual-domain samples in two different
tasks, i.e., the relevant task (RT) and an irrelevant task (IRT). We train
CoCoGAN with both source-domain samples in RT and dual-domain samples in IRT to
complete the domain adaptation. While the former provide high-level concepts of
the unavailable target-domain data, the latter carry the correlation shared
between the two domains in RT and IRT. To train CoCoGAN in the absence of
target-domain data for RT, we propose a new supervisory signal, namely the
alignment between representations across tasks. Extensive experiments
demonstrate that our proposed CoCoGAN outperforms the existing state of the art
in image classification.
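To make the described setup concrete, below is a minimal sketch (in PyTorch) of the CoCoGAN idea from the abstract: two domain-specific generators that share their early layers in CoGAN fashion, both conditioned on a task label (RT vs. IRT), per-domain discriminators, and a cross-task representation alignment term standing in for the proposed supervisory signal. All module names, layer sizes, and the concrete form of the alignment loss are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, TASK_DIM, IMG_DIM = 100, 2, 28 * 28  # assumed sizes


class SharedGeneratorTrunk(nn.Module):
    """Early generator layers shared across domains (CoGAN-style weight
    sharing), conditioned on a one-hot task label (RT vs. IRT)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + TASK_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )

    def forward(self, z, task_onehot):
        return self.net(torch.cat([z, task_onehot], dim=1))


class DomainGeneratorHead(nn.Module):
    """Domain-specific layers mapping the shared code to one domain's images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, IMG_DIM), nn.Tanh())

    def forward(self, h):
        return self.net(h)


class DomainDiscriminator(nn.Module):
    """One discriminator per domain, also conditioned on the task label;
    returns both the real/fake logit and an intermediate representation."""
    def __init__(self):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(IMG_DIM + TASK_DIM, 256), nn.ReLU())
        self.out = nn.Linear(256, 1)

    def forward(self, x, task_onehot):
        h = self.feat(torch.cat([x, task_onehot], dim=1))
        return self.out(h), h


def representation_alignment_loss(feat_rt, feat_irt):
    """Assumed form of the cross-task supervisory signal: pull the mean
    representations of RT and IRT samples together (a first-moment match)."""
    return F.mse_loss(feat_rt.mean(dim=0), feat_irt.mean(dim=0))


# Forward-pass sketch: one shared latent code yields a pair of dual-domain images.
trunk = SharedGeneratorTrunk()
gen_src, gen_tgt = DomainGeneratorHead(), DomainGeneratorHead()
z = torch.randn(16, LATENT_DIM)
task_rt = F.one_hot(torch.zeros(16, dtype=torch.long), TASK_DIM).float()
h = trunk(z, task_rt)
fake_src, fake_tgt = gen_src(h), gen_tgt(h)  # RT-conditioned dual-domain samples
```

A full training loop would combine per-domain adversarial losses over source-domain RT samples and dual-domain IRT samples with the alignment term; the weights and network depths shown here are placeholders.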
Related papers
- Controlled Generation of Unseen Faults for Partial and OpenSet&Partial
Domain Adaptation [0.0]
New operating conditions can result in a performance drop of fault diagnostics models due to the domain gap between the training and the testing data distributions.
We propose a new framework based on a Wasserstein GAN for Partial and OpenSet&Partial domain adaptation.
The main contribution is the controlled fault data generation, which enables generating unobserved fault types and severity levels in the target domain.
arXiv Detail & Related papers (2022-04-29T13:05:25Z)
- Co-Teaching for Unsupervised Domain Adaptation and Expansion [12.455364571022576]
Unsupervised Domain Adaptation (UDA) essentially trades a model's performance on a source domain for improving its performance on a target domain.
Unsupervised Domain Expansion (UDE) adapts the model to the target domain as UDA does, while in the meantime maintaining its source-domain performance.
In both UDA and UDE settings, a model tailored to a given domain, be it the source or the target domain, is assumed to handle samples from that domain well.
We exploit this finding and accordingly propose Co-Teaching (CT).
arXiv Detail & Related papers (2022-04-04T02:34:26Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art in various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Adversarial Learning for Zero-shot Domain Adaptation [31.334196673143257]
Zero-shot domain adaptation is a problem where neither data samples nor labels are available for parameter learning in the target domain.
We propose a new method for ZSDA by transferring domain shift from an irrelevant task to the task of interest.
We evaluate the proposed method on benchmark datasets and achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-09-11T03:41:32Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain
Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates learning on the unlabeled target domain by relying on well-established source-domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)