Divide and Adapt: Active Domain Adaptation via Customized Learning
- URL: http://arxiv.org/abs/2307.11618v1
- Date: Fri, 21 Jul 2023 14:37:17 GMT
- Title: Divide and Adapt: Active Domain Adaptation via Customized Learning
- Authors: Duojun Huang, Jichang Li, Weikai Chen, Junshi Huang, Zhenhua Chai,
Guanbin Li
- Abstract summary: We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations in domain gap.
- Score: 56.79144758380419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active domain adaptation (ADA) aims to improve the model adaptation
performance by incorporating active learning (AL) techniques to label a
maximally-informative subset of target samples. Conventional AL methods do not
consider the existence of domain shift, and hence, fail to identify the truly
valuable samples in the context of domain adaptation. To accommodate active
learning and domain adaptation, two naturally different tasks, within a
collaborative framework, we advocate that a customized learning strategy for
the target data is the key to the success of ADA solutions. We present
Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target
instances into four categories with stratified transferable properties. With a
novel data subdivision protocol based on uncertainty and domainness, DiaNA can
accurately recognize the most gainful samples. While sending the informative
instances for annotation, DiaNA employs tailored learning strategies for the
remaining categories. Furthermore, we propose an informativeness score that
unifies the data partitioning criteria. This enables the use of a Gaussian
mixture model (GMM) to automatically sample unlabeled data into the proposed
four categories. Thanks to the "divide-and-adapt" spirit, DiaNA can handle data
with large variations in domain gap. In addition, we show that DiaNA can
generalize to different domain adaptation settings, such as unsupervised domain
adaptation (UDA), semi-supervised domain adaptation (SSDA), source-free domain
adaptation (SFDA), etc.
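The core mechanism can be sketched in a few lines: compute a per-sample informativeness score from uncertainty and domainness, fit a four-component Gaussian mixture model over the scores, and treat the resulting components as the four categories. This is a minimal illustration only; the score definition, the combination rule, and the per-category learning strategies here are hypothetical stand-ins, not the paper's exact formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-sample statistics for 1000 unlabeled target samples:
# "uncertainty" (e.g. predictive entropy) and "domainness" (e.g. a domain
# discriminator's source-probability). Both are illustrative stand-ins.
uncertainty = rng.random(1000)
domainness = rng.random(1000)

# A single informativeness score unifying the two criteria
# (a simple sum for illustration, not the paper's formula).
score = uncertainty + domainness

# Fit a 4-component GMM on the scores and assign each sample to one of
# four categories with stratified transferable properties.
gmm = GaussianMixture(n_components=4, random_state=0)
labels = gmm.fit_predict(score.reshape(-1, 1))

# Rank categories by mean score: the highest-scoring group plays the role
# of the "most gainful" samples sent for annotation, while the remaining
# categories would receive tailored learning strategies.
order = np.argsort(gmm.means_.ravel())
query_category = order[-1]
query_idx = np.where(labels == query_category)[0]
print(f"{len(query_idx)} samples selected for annotation")
```

Fitting the GMM on the unified score, rather than thresholding the two criteria by hand, lets the category boundaries adapt automatically to each target distribution.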
Related papers
- Semi-supervised Domain Adaptation via Prototype-based Multi-level
Learning [4.232614032390374]
In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model to transfer knowledge representation from the fully labeled source domain to the target domain.
We propose a Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples.
arXiv Detail & Related papers (2023-05-04T10:09:30Z) - IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose an Informed Domain Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method outperforms the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
arXiv Detail & Related papers (2023-03-05T18:16:34Z) - ADAS: A Simple Active-and-Adaptive Baseline for Cross-Domain 3D Semantic
Segmentation [38.66509154973051]
We propose an Active-and-Adaptive (ADAS) baseline to enhance the weak cross-domain generalization ability of a well-trained 3D segmentation model.
ADAS performs an active sampling operation to select a maximally-informative subset from both source and target domains for effective adaptation.
ADAS is verified to be effective in many cross-domain settings including: 1) Unsupervised Domain Adaptation (UDA), in which all samples from the target domain are unlabeled; 2) Unsupervised Few-shot Domain Adaptation (UFDA), in which only a few unlabeled samples are available in the target domain.
arXiv Detail & Related papers (2022-12-20T16:17:40Z) - ADeADA: Adaptive Density-aware Active Domain Adaptation for Semantic
Segmentation [23.813813896293876]
We present ADeADA, a general active domain adaptation framework for semantic segmentation.
With less than 5% target domain annotations, our method achieves results comparable to full supervision.
arXiv Detail & Related papers (2022-02-14T05:17:38Z) - Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - On Universal Black-Box Domain Adaptation [53.7611757926922]
We study arguably the least restrictive setting of domain adaptation in the sense of practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.