Loss-based Sequential Learning for Active Domain Adaptation
- URL: http://arxiv.org/abs/2204.11665v1
- Date: Mon, 25 Apr 2022 14:00:04 GMT
- Title: Loss-based Sequential Learning for Active Domain Adaptation
- Authors: Kyeongtak Han, Youngeun Kim, Dongyoon Han, Sungeun Hong
- Abstract summary: This paper introduces sequential learning considering both domain type (source/target) and labelness (labeled/unlabeled).
Our model significantly outperforms previous methods as well as baseline models in various benchmark datasets.
- Score: 14.366263836801485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active domain adaptation (ADA) studies have mainly addressed query selection
while following existing domain adaptation strategies. However, we argue that
it is critical to consider not only query selection criteria but also domain
adaptation strategies designed for ADA scenarios. This paper introduces
sequential learning considering both domain type (source/target) and labelness
(labeled/unlabeled). We first train our model only on labeled target samples
obtained by loss-based query selection. When loss-based query selection is
applied under domain shift, uninformative high-loss samples gradually accumulate
and the diversity of labeled samples decreases. To address these issues, we fully utilize
pseudo labels of the unlabeled target domain by leveraging loss prediction. We
further encourage pseudo labels to have low self-entropy and diverse class
distributions. Our model significantly outperforms previous methods as well as
baseline models in various benchmark datasets.
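As a concrete illustration of the two ingredients described above, here is a minimal PyTorch sketch of loss-based query selection plus the two pseudo-label regularizers (low per-sample self-entropy, diverse class distribution). The loss-prediction module and all names are illustrative assumptions, not the authors' implementation.
```python
# A minimal sketch, assuming a learned loss-prediction module in the
# style of "Learning Loss for Active Learning". All names here are
# illustrative, not the paper's actual code.
import torch
import torch.nn.functional as F

def select_queries(loss_predictor, feats, budget):
    """Loss-based query selection: rank unlabeled target samples by
    predicted loss and pick the top-`budget` for annotation."""
    with torch.no_grad():
        pred_loss = loss_predictor(feats).squeeze(-1)    # (N,)
    return torch.topk(pred_loss, k=budget).indices

def pseudo_label_regularizer(logits):
    """Encourage pseudo labels that are confident (low self-entropy)
    yet cover diverse classes (high entropy of the mean prediction)."""
    p = F.softmax(logits, dim=1)                         # (N, C)
    self_ent = -(p * torch.log(p + 1e-8)).sum(1).mean()  # pushed down
    p_mean = p.mean(0)
    div_ent = -(p_mean * torch.log(p_mean + 1e-8)).sum() # pushed up
    return self_ent - div_ent

# Toy usage with random tensors.
loss_predictor = torch.nn.Linear(256, 1)
queried = select_queries(loss_predictor, torch.randn(128, 256), budget=10)
reg = pseudo_label_regularizer(torch.randn(128, 31))
```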
Related papers
- Local Context-Aware Active Domain Adaptation [61.59201475369795]
We propose a Local context-aware ADA framework, named LADA, to address this issue.
To select informative target samples, we devise a novel criterion based on the local inconsistency of model predictions.
Experiments validate that the proposed criterion chooses more informative target samples than existing active selection strategies.
arXiv Detail & Related papers (2022-08-26T20:08:40Z)
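One plausible reading of the local-inconsistency criterion, sketched in PyTorch: score each unlabeled target sample by the disagreement between its prediction and those of its nearest feature-space neighbors. The kNN construction and KL-based disagreement are assumptions, not LADA's exact formulation.
```python
# Hedged sketch of a local-inconsistency acquisition score.
import torch
import torch.nn.functional as F

@torch.no_grad()
def local_inconsistency(feats, logits, k=10):
    """Score each sample by how much its prediction disagrees with the
    predictions of its k nearest neighbors in feature space."""
    probs = F.softmax(logits, dim=1)                 # (N, C)
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()                                  # cosine similarity
    sim.fill_diagonal_(-1.0)                         # exclude self
    nn_idx = sim.topk(k, dim=1).indices              # (N, k)
    nbr = probs[nn_idx]                              # (N, k, C)
    # KL divergence from each neighbor's prediction to the sample's own.
    kl = (nbr * (torch.log(nbr + 1e-8)
                 - torch.log(probs.unsqueeze(1) + 1e-8))).sum(-1)
    return kl.mean(dim=1)                            # (N,)

scores = local_inconsistency(torch.randn(64, 256), torch.randn(64, 31))
query = scores.topk(5).indices  # most locally inconsistent samples
```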
- Combating Label Distribution Shift for Active Domain Adaptation [16.270897459117755]
We consider the problem of active domain adaptation (ADA) to unlabeled target data.
Inspired by recent analysis on a critical issue from label distribution mismatch between source and target in domain adaptation, we devise a method that addresses the issue for the first time in ADA.
arXiv Detail & Related papers (2022-08-13T09:06:45Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study what is arguably the least restrictive setting of domain adaptation from the standpoint of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples; a sketch of this regularizer follows.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
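A hedged sketch of self-training against a black-box source model's outputs, regularized so predictions agree within local neighborhoods. Names and the exact loss form are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def neighborhood_consistency_loss(feats, logits, k=5):
    """Pull each sample's prediction toward those of its k nearest
    neighbors in feature space."""
    p = F.softmax(logits, dim=1)
    with torch.no_grad():
        f = F.normalize(feats, dim=1)
        sim = f @ f.t()
        sim.fill_diagonal_(-1.0)
        nn_idx = sim.topk(k, dim=1).indices          # (N, k)
    nbr = p[nn_idx].detach()                         # (N, k, C)
    # Cross-entropy between each sample and its neighbors' predictions.
    return -(nbr * torch.log(p.unsqueeze(1) + 1e-8)).sum(-1).mean()

def black_box_self_training_loss(student_logits, teacher_probs, feats):
    """Distill the black-box source model's outputs into the target
    model, plus the neighborhood consistency regularizer."""
    kd = F.kl_div(F.log_softmax(student_logits, dim=1),
                  teacher_probs, reduction="batchmean")
    return kd + neighborhood_consistency_loss(feats, student_logits)
```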
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA and demonstrates superior performance to all comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
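A highly simplified sketch of a Q-learning instance selector: a small Q-network scores each candidate pseudo-labeled sample for a binary select/skip action. The state design, reward, and training loop of the actual paper are omitted; everything here is an assumption.
```python
import torch
import torch.nn as nn

class QSelector(nn.Module):
    """Tiny Q-network over instance features for [skip, select]."""
    def __init__(self, feat_dim):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                               nn.Linear(128, 2))

    def forward(self, feats):
        return self.q(feats)                          # (N, 2) Q-values

def pick(selector, feats, eps=0.1):
    """Epsilon-greedy selection over candidate pseudo-labeled samples."""
    q = selector(feats)
    greedy = q.argmax(dim=1).bool()                   # Q(select) > Q(skip)
    explore = torch.rand(len(feats)) < eps
    flip = torch.randint(0, 2, (len(feats),)).bool()  # random action
    return torch.where(explore, flip, greedy)

mask = pick(QSelector(256), torch.randn(32, 256))     # bool select mask
```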
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
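A minimal sketch of confidence-thresholded pseudo-labeling with an explicit "unknown" bucket, in the spirit of the open-set setting. The threshold value and the unknown-handling rule are assumptions.
```python
import torch
import torch.nn.functional as F

def open_set_pseudo_labels(logits, tau=0.9, unknown_label=-1):
    """Assign a class pseudo label when the max softmax score clears
    `tau`; otherwise mark the sample as unknown."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < tau] = unknown_label
    return labels

labels = open_set_pseudo_labels(torch.randn(32, 10))
known = labels[labels != -1]  # samples matched to source classes
```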
- Hard Class Rectification for Domain Adaptation [36.58361356407803]
Domain adaptation (DA) aims to transfer knowledge from a label-rich domain (source domain) to a label-scarce domain (target domain).
We propose a novel framework, called Hard Class Rectification Pseudo-labeling (HCRPL), to alleviate the hard class problem.
The proposed method is evaluated in both unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA).
arXiv Detail & Related papers (2020-08-08T06:21:58Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA³US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA³US surpasses the state of the art on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
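The abstract names the techniques but not their mechanics. The following sketch shows generic entropy-based uncertainty suppression (down-weighting uncertain target predictions), a common heuristic in this family, not necessarily BA³US's exact rule.
```python
import torch
import torch.nn.functional as F

def uncertainty_weights(logits):
    """Weight each sample by 1 + exp(-entropy) so that confident
    samples count more; a common entropy-weighting heuristic."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1)
    return 1.0 + torch.exp(-ent)

def weighted_entropy_loss(logits):
    """Entropy minimization with uncertain samples suppressed."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1)
    w = uncertainty_weights(logits).detach()
    return (w * ent).sum() / w.sum()
```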
- Enlarging Discriminative Power by Adding an Extra Class in Unsupervised Domain Adaptation [5.377369521932011]
We propose an idea for enlarging discriminative power: adding a new, artificial class and training the model on the original data together with GAN-generated samples of the new class.
Our idea is highly generic and thus compatible with many existing methods such as DANN, VADA, and DIRT-T.
arXiv Detail & Related papers (2020-02-19T07:58:24Z)
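An illustrative sketch of the extra-class idea: the classifier head is widened to C+1 outputs and GAN-generated samples are labeled with the artificial class. For brevity, random features stand in for images and for the generator's output; this is not the paper's training pipeline.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_dim = 31, 256
classifier = nn.Linear(feat_dim, num_classes + 1)  # +1 artificial class

real_feats = torch.randn(64, feat_dim)
real_labels = torch.randint(0, num_classes, (64,))
fake_feats = torch.randn(64, feat_dim)             # stand-in for GAN samples
fake_labels = torch.full((64,), num_classes)       # label = the extra class

feats = torch.cat([real_feats, fake_feats])
labels = torch.cat([real_labels, fake_labels])
loss = F.cross_entropy(classifier(feats), labels)
```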
- A Sample Selection Approach for Universal Domain Adaptation [94.80212602202518]
We study the problem of unsupervised domain adaptation in the universal scenario.
Only some of the classes are shared between the source and target domains.
We present a scoring scheme that is effective in identifying the samples of the shared classes.
arXiv Detail & Related papers (2020-01-14T22:28:43Z)
- MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation [29.43532067090422]
We propose an easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on adversarial learning.
Unlike most existing approaches which employ a generator to deal with domain difference, MMEN focuses on learning the categorical information from unlabeled target samples.
arXiv Detail & Related papers (2019-04-21T13:39:29Z)
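Minimax-entropy objectives of this kind are commonly realized with a gradient reversal layer. Below is a hedged sketch in that spirit; the architecture, the `lam` constant, and the loss placement are assumptions, not MMEN's exact design.
```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def minimax_entropy_loss(feat_extractor, classifier, target_x, lam=0.1):
    feats = feat_extractor(target_x)
    logits = classifier(GradReverse.apply(feats))
    p = F.softmax(logits, dim=1)
    # Negative entropy: minimizing it trains the classifier to maximize
    # entropy on unlabeled target data, while the reversed gradient
    # drives the feature extractor to minimize it.
    return lam * (p * torch.log(p + 1e-8)).sum(dim=1).mean()

# Toy usage: linear stand-ins for the feature extractor and classifier.
loss = minimax_entropy_loss(torch.nn.Linear(64, 32),
                            torch.nn.Linear(32, 10),
                            torch.randn(8, 64))
loss.backward()
```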