Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings
- URL: http://arxiv.org/abs/2010.08666v3
- Date: Sun, 10 Oct 2021 02:26:11 GMT
- Title: Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings
- Authors: Viraj Prabhu, Arjun Chandrasekaran, Kate Saenko, Judy Hoffman
- Abstract summary: Generalizing deep neural networks to new target domains is critical to their real-world utility.
We study the problem of active learning (AL) under a domain shift, called Active Domain Adaptation (Active DA).
We propose Clustering Uncertainty-weighted Embeddings (CLUE), a novel label acquisition strategy for Active DA that performs uncertainty-weighted clustering to identify target instances for labeling that are both uncertain under the model and diverse in feature space.
- Score: 57.24806659432956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalizing deep neural networks to new target domains is critical to their real-world utility. In practice, it may be feasible to get some target data labeled, but to be cost-effective it is desirable to select a maximally-informative subset via active learning (AL). We study the problem of AL under a domain shift, called Active Domain Adaptation (Active DA). We demonstrate how existing AL approaches based solely on model uncertainty or diversity sampling are less effective for Active DA. We propose Clustering Uncertainty-weighted Embeddings (CLUE), a novel label acquisition strategy for Active DA that performs uncertainty-weighted clustering to identify target instances for labeling that are both uncertain under the model and diverse in feature space. CLUE consistently outperforms competing label acquisition strategies for Active DA and AL across learning settings on 6 diverse domain shifts for image classification.
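To make the selection step concrete, here is a minimal sketch of CLUE-style acquisition under stated assumptions: predictive entropy acts as the per-instance uncertainty weight, weighted K-means over model embeddings provides diversity, and the instance nearest each centroid is queried. The `clue_select` helper and its inputs are illustrative, not the authors' released implementation.

```python
# Minimal sketch of CLUE-style acquisition: weight each unlabeled target
# instance by predictive entropy, run weighted K-means over the embeddings,
# and query the instance closest to each cluster centroid.
import numpy as np
from sklearn.cluster import KMeans

def clue_select(embeddings, probs, budget, seed=0):
    """Pick up to `budget` diverse, uncertain target instances to label.

    embeddings: (N, D) target features from the model (assumed given).
    probs:      (N, C) softmax predictions on the target set.
    """
    # Predictive entropy as the uncertainty weight for each instance.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Weighted K-means: uncertain points pull centroids toward themselves,
    # while clustering itself enforces diversity in feature space.
    km = KMeans(n_clusters=budget, random_state=seed, n_init=10)
    km.fit(embeddings, sample_weight=entropy)

    # Query the instance nearest each centroid (one per cluster).
    dists = np.linalg.norm(
        embeddings[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    return np.unique(np.argmin(dists, axis=0))
```

Two centroids can occasionally share a nearest instance, so `np.unique` may return slightly fewer than `budget` indices; a caller wanting an exact batch size would top up from the next-nearest candidates.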
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation [3.367755441623275]
Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from related source domains to an unlabeled target domain.
We propose a novel approach called Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation (D3AAMDA).
This mechanism controls the level of feature alignment between each source domain and the target domain, effectively leveraging the locally advantageous features of each source domain.
arXiv Detail & Related papers (2023-07-26T09:40:19Z)
- Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations of domain gap.
arXiv Detail & Related papers (2023-07-21T14:37:17Z)
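As an illustration only, the sketch below shows one way an uncertainty-by-domainness subdivision into four groups could be implemented. The fixed thresholds, the discriminator-style domainness score, and the `subdivide` helper are all assumptions; DiaNA's actual subdivision protocol may differ in its details.

```python
# Hypothetical four-way split of target instances by uncertainty and
# domainness, in the spirit of DiaNA (not the paper's exact criterion).
import numpy as np

def subdivide(probs, domainness, u_thresh=0.5, d_thresh=0.5):
    """probs: (N, C) softmax outputs; domainness: (N,) score in [0, 1]
    from e.g. a domain discriminator (1 = target-like)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    uncertainty = entropy / np.log(probs.shape[1])  # normalize to [0, 1]
    uncertain = uncertainty > u_thresh
    target_like = domainness > d_thresh
    return {
        "confident_source_like": np.where(~uncertain & ~target_like)[0],
        "confident_target_like": np.where(~uncertain & target_like)[0],
        "uncertain_source_like": np.where(uncertain & ~target_like)[0],
        "uncertain_target_like": np.where(uncertain & target_like)[0],
    }
```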
- ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic Segmentation [38.66509154973051]
We propose an Active-and-Adaptive (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model.
ADAS performs an active sampling operation to select a maximally-informative subset from both source and target domains for effective adaptation.
ADAS is verified to be effective in many cross-domain settings, including: 1) Unsupervised Domain Adaptation (UDA), where all samples from the target domain are unlabeled; and 2) Unsupervised Few-shot Domain Adaptation (UFDA), where only a few unlabeled samples are available in the target domain.
arXiv Detail & Related papers (2022-12-20T16:17:40Z)
- ADeADA: Adaptive Density-aware Active Domain Adaptation for Semantic Segmentation [23.813813896293876]
We present ADeADA, a general active domain adaptation framework for semantic segmentation.
With less than 5% target domain annotations, our method reaches comparable results with that of full supervision.
arXiv Detail & Related papers (2022-02-14T05:17:38Z)
- S$^3$VAADA: Submodular Subset Selection for Virtual Adversarial Active Domain Adaptation [49.01925978752677]
In real-world scenarios, it might be feasible to get labels for a small proportion of the target data.
We propose S$^3$VAADA, which i) introduces a novel submodular criterion to select a maximally informative subset to label, and ii) enhances a cluster-based DA procedure.
Our approach consistently outperforms the competing state-of-the-art approaches on datasets with varying degrees of domain shifts.
arXiv Detail & Related papers (2021-09-18T10:53:57Z)
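To show the selection pattern such submodular criteria rely on, here is a generic greedy maximizer of a facility-location objective. It is a simplified stand-in: S$^3$VAADA's actual criterion additionally combines virtual-adversarial and diversity terms not modeled here.

```python
# Greedy maximization of a submodular facility-location function: adding
# point j improves every point i's coverage to max(covered[i], sim[i, j]).
import numpy as np

def greedy_facility_location(sim, budget):
    """sim: (N, N) pairwise similarity matrix; returns `budget` indices."""
    n = sim.shape[0]
    selected, covered = [], np.zeros(n)  # covered[i] = best similarity so far
    for _ in range(budget):
        # Marginal gain of adding candidate j: total improvement in coverage.
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf  # never pick the same index twice
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[:, j])
    return selected
```

Greedy selection enjoys the classic (1 - 1/e) approximation guarantee for monotone submodular objectives, which is why this pattern recurs in subset-selection work.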
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
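A minimal sketch of a multi-sample contrastive objective of this kind appears below, assuming the mined similar pairs are already given as a boolean mask; the affinity-based pair mining, which is ILA-DA's actual contribution, is not reproduced here.

```python
# Multi-sample contrastive loss over features from both domains: mined
# similar pairs (pos_mask) are pulled together, all other pairs pushed apart.
import torch
import torch.nn.functional as F

def contrastive_loss(feats, pos_mask, temperature=0.1):
    """feats: (N, D) features from source and target batches combined.
    pos_mask: (N, N) boolean, True where (i, j) was mined as a similar pair."""
    z = F.normalize(feats, dim=1)
    logits = z @ z.t() / temperature
    logits.fill_diagonal_(float("-inf"))  # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood of each anchor's mined positives.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_count).mean()
```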
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
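Both ingredients are standard and straightforward to sketch. Below, `rotation_loss` implements 4-way rotation prediction and `consistency_loss` implements confidence-thresholded consistency in the FixMatch style; the backbone, heads, augmented views, and threshold are placeholders rather than the paper's exact recipe.

```python
# Sketch of the two ingredients PAC combines: rotation-prediction
# self-supervision and consistency regularization (placeholder components).
import torch
import torch.nn.functional as F

def rotation_loss(backbone, rot_head, images):
    """4-way self-supervision: rotate each image by k*90 degrees, predict k."""
    k = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack(
        [torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(images, k)])
    return F.cross_entropy(rot_head(backbone(rotated)), k)

def consistency_loss(model, weak_images, strong_images, threshold=0.9):
    """Match predictions on a strongly augmented view to confident
    pseudo-labels from a weakly augmented view of the same images."""
    with torch.no_grad():
        probs = F.softmax(model(weak_images), dim=1)
        conf, pseudo = probs.max(dim=1)
    loss = F.cross_entropy(model(strong_images), pseudo, reduction="none")
    return (loss * (conf > threshold)).mean()  # only confident samples count
```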