Semi-Supervised Adversarial Discriminative Domain Adaptation
- URL: http://arxiv.org/abs/2109.13016v1
- Date: Mon, 27 Sep 2021 12:52:50 GMT
- Title: Semi-Supervised Adversarial Discriminative Domain Adaptation
- Authors: Thai-Vu Nguyen, Anh Nguyen, Bac Le
- Abstract summary: Domain adaptation is a promising method for training a powerful deep neural network that can cope with the absence of labeled data.
In this paper, we propose an improved adversarial domain adaptation method called Semi-Supervised Adversarial Discriminative Domain Adaptation (SADDA).
- Score: 18.15464889789663
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Domain adaptation is a promising method for training a powerful deep neural network that can cope with the absence of labeled data. More precisely, domain adaptation addresses the limitation known as dataset bias or domain shift, which arises when the training and testing datasets differ substantially. Adversarial adaptation methods have become popular among domain adaptation methods. Building on the idea of GANs, adversarial domain adaptation tries to minimize the distribution discrepancy between the training and testing datasets through an adversarial objective. However, some conventional adversarial domain adaptation methods cannot handle large domain shifts between the two datasets, or their generalization ability is limited. In this paper, we propose an improved adversarial domain adaptation method called Semi-Supervised Adversarial Discriminative Domain Adaptation (SADDA), which overcomes these limitations. We also show that SADDA performs better than other adversarial adaptation methods and illustrate the promise of our method on digit classification and emotion recognition problems.
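The abstract describes the adversarial objective only at a high level. As a rough illustration, the sketch below implements an ADDA-style adaptation step in PyTorch: a discriminator learns to tell source features from target features, while the target encoder is trained to fool it, shrinking the gap between the two feature distributions. All names, network sizes, and optimizer settings are illustrative assumptions, not the paper's actual SADDA architecture.

```python
# Minimal, hypothetical ADDA-style adaptation step (assumed details,
# not the paper's exact SADDA formulation).
import torch
import torch.nn as nn

# Illustrative encoders for 28x28 digit images; real models would be CNNs.
encoder_src = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # pre-trained, kept fixed
encoder_tgt = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # adapted to the target domain
discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(encoder_tgt.parameters(), lr=1e-4)

def adaptation_step(x_src, x_tgt):
    # 1) Discriminator update: label source features 1, target features 0.
    f_src = encoder_src(x_src).detach()
    f_tgt = encoder_tgt(x_tgt).detach()
    d_loss = bce(discriminator(f_src), torch.ones(f_src.size(0), 1)) + \
             bce(discriminator(f_tgt), torch.zeros(f_tgt.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Target-encoder update: fool the discriminator (inverted labels),
    #    which pushes the target feature distribution toward the source.
    g_loss = bce(discriminator(encoder_tgt(x_tgt)), torch.ones(x_tgt.size(0), 1))
    opt_t.zero_grad(); g_loss.backward(); opt_t.step()
    return d_loss.item(), g_loss.item()
```

Per its title, SADDA additionally exploits labeled target data (the semi-supervised part), which this unsupervised sketch omits.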
Related papers
- Towards Subject Agnostic Affective Emotion Recognition [8.142798657174332]
EEG signals exhibit instability across subjects in subject-agnostic affective brain-computer interfaces (aBCIs).
We propose a novel framework, meta-learning based augmented domain adaptation, for subject-agnostic aBCIs.
Our proposed approach is shown to be effective in experiments on a public aBCI dataset.
arXiv Detail & Related papers (2023-10-20T23:44:34Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Semi-Supervised Hypothesis Transfer for Source-Free Domain Adaptation [38.982377864475374]
We propose a novel domain adaptation method via hypothesis transfer that does not access source data at the adaptation stage.
In order to fully use the limited target data, a semi-supervised mutual enhancement method is proposed.
Compared with state-of-the-art methods, our method achieves up to 19.9% improvement on semi-supervised adaptation tasks.
arXiv Detail & Related papers (2021-07-14T14:26:09Z)
- Self-Domain Adaptation for Face Anti-Spoofing [31.441928816043536]
We propose a self-domain adaptation framework to leverage the unlabeled test domain data at inference.
A meta-learning based adaptor learning algorithm is proposed that uses data from multiple source domains during training.
arXiv Detail & Related papers (2021-02-24T08:46:39Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
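The entry above only names the mechanism. As a loose, hypothetical sketch of what "selecting pseudo-labeled instances with a deep Q-learning model" could look like, a small Q-network can score each candidate with two action values (reject/select) and keep candidates greedily; the state, reward, and training details below are assumptions, not the paper's design.

```python
# Hypothetical greedy selection with a small Q-network; the paper's actual
# state representation, reward, and training scheme are more involved.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))  # 2 actions: reject, select

def select_pseudo_labeled(features, pseudo_labels):
    """Keep a candidate when Q(select) exceeds Q(reject)."""
    with torch.no_grad():
        q_values = q_net(features)            # (N, 2): [Q_reject, Q_select]
        keep = q_values[:, 1] > q_values[:, 0]
    return features[keep], pseudo_labels[keep]

# During training, q_net itself would be updated with a DQN-style
# temporal-difference loss, with a reward reflecting how much the selected
# instances improve the target classifier (an assumption about the design).
```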
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- Knowledge Distillation for BERT Unsupervised Domain Adaptation [2.969705152497174]
A pre-trained language model, BERT, has brought significant performance improvements across a range of natural language processing tasks.
We propose a simple but effective unsupervised domain adaptation method, adversarial adaptation with distillation (AAD).
We evaluate our approach in the task of cross-domain sentiment classification on 30 domain pairs.
arXiv Detail & Related papers (2020-10-22T06:51:24Z)
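As a hedged sketch of the distillation half of AAD: a standard temperature-scaled knowledge distillation loss (Hinton et al., 2015) keeps an adapted student model close to a source-trained teacher. The adversarial domain component and the exact BERT wiring are omitted, and the names here are assumptions.

```python
# Temperature-scaled distillation loss commonly paired with adversarial
# domain alignment; not necessarily AAD's exact formulation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 rescales gradients back to the usual magnitude.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```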
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A²KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA).
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance as compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.