Incremental Unsupervised Domain-Adversarial Training of Neural Networks
- URL: http://arxiv.org/abs/2001.04129v1
- Date: Mon, 13 Jan 2020 09:54:35 GMT
- Title: Incremental Unsupervised Domain-Adversarial Training of Neural Networks
- Authors: Antonio-Javier Gallego, Jorge Calvo-Zaragoza, Robert B. Fisher
- Abstract summary: In the context of supervised statistical learning, it is typically assumed that the training set is drawn from the same distribution as the test samples.
Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively.
- Our results show a clear improvement over the non-incremental case on several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
- Score: 17.91571291302582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of supervised statistical learning, it is typically assumed
that the training set is drawn from the same distribution as the test samples.
When this is not the case, the behavior of the learned model is unpredictable
and depends on the degree of similarity between the distribution of the
training set and that of the test set. One of the research topics that
investigates this scenario is referred to as domain adaptation. Deep neural
networks have brought dramatic advances in pattern recognition, which is why
there have been many attempts to provide good domain adaptation algorithms for
these models. Here we take a different avenue and approach the problem from an
incremental point of view, where the model is adapted to the new domain
iteratively. We make use of an existing unsupervised domain-adaptation
algorithm to identify the target samples for which the model is most confident
about the predicted label. The output of the model is analyzed in different
ways to determine the candidate samples. The selected set is then added to the
source training set, taking the labels provided by the network as ground
truth, and the process is repeated until all target samples are labelled. Our
results show a clear improvement over the non-incremental case on several
datasets, also outperforming other state-of-the-art domain adaptation
algorithms.
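A minimal sketch of the incremental self-labeling loop the abstract describes, under loud assumptions: a plain scikit-learn classifier stands in for the domain-adversarial network used in the paper, confidence is taken as the maximum predicted probability, and select_ratio is a hypothetical knob rather than a parameter from the paper.

```python
# Illustrative sketch, not the authors' code. A scikit-learn classifier
# stands in for the domain-adversarial network; 'select_ratio' is a
# hypothetical knob controlling how many samples are accepted per round.
import numpy as np
from sklearn.linear_model import LogisticRegression

def incremental_adaptation(X_src, y_src, X_tgt, select_ratio=0.1):
    """Iteratively pseudo-label the most confident target samples and
    fold them into the training set, as the abstract describes."""
    X_train, y_train = X_src.copy(), y_src.copy()
    remaining = X_tgt.copy()
    model = None
    while len(remaining) > 0:
        # Re-train on source data plus previously accepted pseudo-labels.
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        probs = model.predict_proba(remaining)
        confidence = probs.max(axis=1)
        # Keep the top fraction of the most confident target samples.
        k = max(1, int(select_ratio * len(remaining)))
        picked = np.argsort(confidence)[-k:]
        # Treat the network's predictions as ground-truth labels.
        pseudo = model.classes_[probs[picked].argmax(axis=1)]
        X_train = np.vstack([X_train, remaining[picked]])
        y_train = np.concatenate([y_train, pseudo])
        remaining = np.delete(remaining, picked, axis=0)
    return model
```

The top-fraction rule above is only one plausible instance of the candidate selection; the paper analyzes the network output in several different ways to pick the samples.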
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study the practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Cross-Inferential Networks for Source-free Unsupervised Domain Adaptation [17.718392065388503]
We propose to explore a new method called cross-inferential networks (CIN).
Our main idea is that, when we adapt the network model to predict the sample labels from encoded features, we use these prediction results to construct new training samples with derived labels.
Our experimental results on benchmark datasets demonstrate that our proposed CIN approach can significantly improve the performance of source-free UDA.
arXiv Detail & Related papers (2023-06-29T14:04:24Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes the decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Distributional Shift Adaptation using Domain-Specific Features [41.91388601229745]
In open-world scenarios, streaming big data can be Out-Of-Distribution (OOD).
We propose a simple yet effective approach that relies on feature correlations in general, regardless of whether the features are invariant or not.
Our approach uses the most confidently predicted samples identified by an OOD base model to train a new model that effectively adapts to the target domain.
arXiv Detail & Related papers (2022-11-09T04:16:21Z)
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
- Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent [32.906658998929394]
We focus on the problem of domain adaptation when the goal is shifting the model towards the target distribution.
We propose GIFT, a method that creates virtual samples from intermediate distributions by interpolating representations of examples from the source and target domains (a minimal sketch of this interpolation follows this entry).
arXiv Detail & Related papers (2021-06-10T22:47:06Z)
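A rough sketch of that interpolation idea, under stated assumptions: representations are plain NumPy arrays, source/target examples are paired at random (the paper's actual pairing strategy may differ), and the mixing coefficient is annealed linearly from source toward target. All names are illustrative.

```python
# Hypothetical sketch of GIFT-style virtual samples, not the authors' code.
import numpy as np

def gift_virtual_batch(z_src, z_tgt, step, total_steps, rng=None):
    """Interpolate source/target representations; as training progresses,
    lam moves from 0 to 1, so virtual samples drift toward the target."""
    rng = rng or np.random.default_rng()
    lam = step / total_steps                       # anneal source -> target
    idx = rng.choice(len(z_src), size=len(z_tgt))  # random pairing (assumption)
    return (1.0 - lam) * z_src[idx] + lam * z_tgt
```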
- A Brief Review of Domain Adaptation [1.2043574473965317]
This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain.
It presents some successful shallow and deep approaches that aim to deal with domain adaptation problems.
arXiv Detail & Related papers (2020-10-07T07:05:32Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Enlarging Discriminative Power by Adding an Extra Class in Unsupervised Domain Adaptation [5.377369521932011]
We propose an idea for enlarging discriminative power: adding a new, artificial class and training the model on the data together with GAN-generated samples of the new class (a minimal sketch follows this entry).
Our idea is highly generic so that it is compatible with many existing methods such as DANN, VADA, and DIRT-T.
arXiv Detail & Related papers (2020-02-19T07:58:24Z)
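A minimal sketch of that extra-class idea, with loud assumptions: random noise stands in for the GAN-generated samples, a small scikit-learn MLP stands in for the model, and all names are illustrative rather than from the paper.

```python
# Illustrative sketch only; the paper trains with GAN-generated samples,
# for which random noise is a crude stand-in here.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_with_extra_class(X, y, n_classes, n_fake=100, rng=None):
    """Append an artificial class (index n_classes) populated by generated
    samples, then train an ordinary classifier on the enlarged problem."""
    rng = rng or np.random.default_rng()
    X_fake = rng.normal(size=(n_fake, X.shape[1]))  # stand-in for GAN output
    y_fake = np.full(n_fake, n_classes)             # label of the new class
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    return clf.fit(np.vstack([X, X_fake]), np.concatenate([y, y_fake]))
```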
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.