Discriminative Active Learning for Domain Adaptation
- URL: http://arxiv.org/abs/2005.11653v1
- Date: Sun, 24 May 2020 04:20:49 GMT
- Title: Discriminative Active Learning for Domain Adaptation
- Authors: Fan Zhou, Changjian Shui, Bincheng Huang, Boyu Wang and Brahim
Chaib-draa
- Abstract summary: We introduce a discriminative active learning approach for domain adaptation to reduce the efforts of data annotation.
Specifically, we propose a three-stage active adversarial training scheme for neural networks.
Empirical comparisons with existing domain adaptation methods using four benchmark datasets demonstrate the effectiveness of the proposed approach.
- Score: 16.004653151961303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation, which aims to learn transferable features between different but related domains, has been well investigated and has shown excellent empirical performance. Previous works mainly focused on matching the marginal feature distributions using adversarial training methods, while assuming that the conditional relations between the source and target domains remain unchanged, i.e., ignoring the conditional shift problem. However, recent works have shown that such a conditional shift exists and can hinder the adaptation process. To address this issue, we have to leverage labelled data from the target domain, but collecting such labels can be quite expensive and time-consuming. To this end, we introduce a discriminative active learning approach for domain adaptation that reduces the effort of data annotation. Specifically, we propose a three-stage active adversarial training of neural networks: invariant feature space learning (first stage), uncertainty and diversity criteria and their trade-off for the query strategy (second stage), and re-training with the queried target labels (third stage). Empirical comparisons with existing domain adaptation methods on four benchmark datasets demonstrate the effectiveness of the proposed approach.
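To make the second stage concrete, here is a minimal sketch of an uncertainty-diversity query strategy in the spirit the abstract describes. It is not the authors' implementation: the entropy-based uncertainty, the k-center-greedy diversity term, and the trade-off weight `lam` are all assumptions for illustration.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Predictive entropy of softmax outputs; higher means more uncertain."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def query_batch(features, probs, budget, lam=0.5):
    """Greedily pick `budget` target samples, trading off uncertainty
    (predictive entropy) against diversity (distance to the closest
    already-picked sample, as in k-center-greedy selection)."""
    n = features.shape[0]
    unc = entropy(probs)
    unc = unc / (unc.max() + 1e-12)            # normalize to [0, 1]
    picked = []
    min_dist = np.full(n, np.inf)              # distance to nearest picked sample
    for _ in range(budget):
        if picked:
            div = min_dist / (min_dist.max() + 1e-12)
        else:
            div = np.ones(n)                   # first pick: no diversity signal yet
        score = lam * unc + (1.0 - lam) * div
        score[picked] = -np.inf                # never re-pick a sample
        i = int(np.argmax(score))
        picked.append(i)
        d = np.linalg.norm(features - features[i], axis=1)
        min_dist = np.minimum(min_dist, d)
    return picked

# Toy usage: 200 target samples with 16-dim stage-1 features, 5 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))
logits = rng.normal(size=(200, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(query_batch(feats, probs, budget=10))    # indices to send for labelling
```

The queried indices would then be labelled and folded into the third-stage re-training set.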
Related papers
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via
Optimization Trajectory Distillation [73.83178465971552]
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has emerged as a promising approach to alleviate the burden of labelled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves better performance than the current state of the art, in both real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Adapting Segmentation Networks to New Domains by Disentangling Latent Representations [14.050836886292869]
Domain adaptation approaches have come into play to transfer knowledge acquired on a label-abundant source domain to a related label-scarce target domain.
We propose a novel performance metric to capture the relative efficacy of an adaptation strategy compared to supervised training.
arXiv Detail & Related papers (2021-08-06T09:43:07Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation from the standpoint of practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
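A rough sketch of the kind of neighborhood-consistency regularizer this describes follows; the k-NN construction in feature space and the KL penalty are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def neighborhood_consistency(features, logits, k=5):
    """Penalize disagreement between each target sample's prediction and
    the (detached) predictions of its k nearest neighbours in feature space."""
    probs = F.softmax(logits, dim=1)
    dist = torch.cdist(features, features)             # pairwise distances
    dist.fill_diagonal_(float("inf"))                  # exclude self
    knn = dist.topk(k, largest=False).indices          # (n, k) neighbour indices
    neigh = probs[knn].detach()                        # (n, k, c); no grad to neighbours
    log_p = F.log_softmax(logits, dim=1).unsqueeze(1)  # (n, 1, c)
    # KL(neighbour || own prediction), averaged over neighbours and samples
    return F.kl_div(log_p.expand_as(neigh), neigh, reduction="batchmean") / k
```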
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
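The multi-sample contrastive idea can be pictured with a small sketch: each target feature is pulled toward the source features deemed similar and pushed from the rest. The temperature `tau` and the binary `pos_mask` standing in for ILA-DA's affinity criterion are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def multi_sample_contrastive(tgt, src, pos_mask, tau=0.1):
    """Contrastive alignment with multiple positives. pos_mask is a float
    tensor of shape (n_tgt, n_src): 1 where a source sample is deemed
    similar to the target sample, 0 where it is dissimilar (a negative)."""
    tgt = F.normalize(tgt, dim=1)
    src = F.normalize(src, dim=1)
    sim = tgt @ src.t() / tau                      # (n_tgt, n_src) similarities
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_cnt = pos_mask.sum(dim=1).clamp(min=1)     # avoid divide-by-zero
    # negative average log-likelihood of the positives, per target sample
    return -((log_prob * pos_mask).sum(dim=1) / pos_cnt).mean()
```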
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA and outperforms all comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Knowledge Distillation for BERT Unsupervised Domain Adaptation [2.969705152497174]
A pre-trained language model, BERT, has brought significant performance improvements across a range of natural language processing tasks.
We propose a simple but effective unsupervised domain adaptation method, adversarial adaptation with distillation (AAD).
We evaluate our approach in the task of cross-domain sentiment classification on 30 domain pairs.
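In outline, the distillation component trains a target-domain student to match a source-trained teacher's softened predictions while an adversarial branch aligns features. A sketch of just the distillation term, as standard soft-label knowledge distillation; the temperature `T` and weighting are assumptions, not necessarily AAD's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KD: KL between the teacher's and student's tempered softmax.
    The T*T factor keeps gradient magnitudes comparable across temperatures."""
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1).detach()  # teacher is frozen
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```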
arXiv Detail & Related papers (2020-10-22T06:51:24Z)
- Missing-Class-Robust Domain Adaptation by Unilateral Alignment for Fault Diagnosis [3.786700931138978]
Domain adaptation aims at improving model performance by leveraging the learned knowledge in the source domain and transferring it to the target domain.
Recently, domain adversarial methods have been particularly successful in alleviating the distribution shift between the source and the target domains.
We demonstrate in this paper that the performance of domain adversarial methods can be vulnerable to an incomplete target label space during training.
arXiv Detail & Related papers (2020-01-07T13:19:04Z)
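The domain adversarial methods recurring throughout this list typically share one building block: a gradient reversal layer in the style of DANN, which trains the feature extractor to fool a domain discriminator. A compact sketch (module and parameter names are illustrative):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda on the
    way back, so the feature extractor learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def domain_adv_loss(features, domain_labels, discriminator, lam=1.0):
    """BCE loss of the domain discriminator on gradient-reversed features;
    domain_labels: 0 = source, 1 = target."""
    logits = discriminator(GradReverse.apply(features, lam)).squeeze(1)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, domain_labels.float())
```

An incomplete target label space, as the entry warns, can make this alignment misbehave: the discriminator pulls target features toward source classes that the target does not contain.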