Curriculum Guided Domain Adaptation in the Dark
- URL: http://arxiv.org/abs/2308.00956v1
- Date: Wed, 2 Aug 2023 05:47:56 GMT
- Title: Curriculum Guided Domain Adaptation in the Dark
- Authors: Chowdhury Sadman Jahan and Andreas Savakis
- Abstract summary: Domain adaptation in the dark aims to adapt a black-box source-trained model to an unlabeled target domain without access to source data or source model parameters.
We present Curriculum Adaptation for Black-Box (CABB), which provides a curriculum-guided adaptation approach to gradually train the target model.
Our method utilizes co-training of a dual-branch network to suppress error accumulation resulting from confirmation bias.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Addressing the rising concerns of privacy and security, domain adaptation in
the dark aims to adapt a black-box source-trained model to an unlabeled target
domain without access to any source data or source model parameters. The need
for domain adaptation of black-box predictors becomes even more pronounced to
protect intellectual property as deep learning-based solutions become
increasingly commercialized. Current methods distill the source model's noisy
predictions on the target data into the target model, and/or separate clean and
noisy target samples before adapting with traditional noisy-label learning
algorithms. However, these methods do not exploit the easy-to-hard learning
nature of the clean/noisy data splits. Moreover, none of the existing methods
is end-to-end: all require a separate fine-tuning stage and an initial warm-up
stage. In this work, we present Curriculum Adaptation for Black-Box (CABB), a
curriculum-guided adaptation approach that gradually trains the target model,
first on target data with high-confidence (clean) labels, and later on target
data with noisy labels. CABB utilizes the Jensen-Shannon divergence as a better
criterion for clean-noisy sample separation than the traditional cross-entropy
loss. Our method co-trains a dual-branch network to suppress error accumulation
resulting from confirmation bias. The proposed approach is end-to-end trainable
and does not require any extra fine-tuning stage, unlike existing methods.
Empirical results on standard domain adaptation datasets show that CABB
outperforms existing state-of-the-art black-box DA models and is comparable to
white-box domain adaptation models.
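As a concrete illustration of the separation criterion, here is a minimal PyTorch sketch, not the authors' code, of scoring target samples by Jensen-Shannon divergence against their pseudo-labels and splitting them into clean and noisy sets. The function names, the one-hot treatment of pseudo-labels, and the fixed threshold are illustrative assumptions; the paper may use an adaptive split.

```python
import torch
import torch.nn.functional as F

def js_divergence(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-sample Jensen-Shannon divergence between batches of probability vectors.

    Unlike cross-entropy, JSD is symmetric and bounded in [0, log 2],
    which makes it a more stable per-sample fitness score.
    """
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps).log() - (m + eps).log())).sum(dim=1)
    kl_qm = (q * ((q + eps).log() - (m + eps).log())).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm)

@torch.no_grad()
def split_clean_noisy(logits: torch.Tensor,
                      pseudo_labels: torch.Tensor,
                      num_classes: int,
                      threshold: float = 0.1):
    """Mark samples whose prediction agrees with the pseudo-label (low JSD) as clean."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(pseudo_labels, num_classes).float()
    scores = js_divergence(probs, onehot)
    clean_mask = scores < threshold  # fixed threshold for illustration only;
                                     # an adaptive or per-class split is plausible
    return clean_mask, scores
```

A curriculum would then train first on the clean subset and progressively fold in the noisy samples; in a co-trained dual-branch network, each branch could perform the split for the other so that neither branch confirms its own mistakes.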
Related papers
- Incremental Pseudo-Labeling for Black-Box Unsupervised Domain Adaptation [14.596659424489223]
We propose a novel approach that incrementally selects high-confidence pseudo-labels to improve the generalization ability of the target model.
Experimental results demonstrate that the proposed method achieves state-of-the-art black-box unsupervised domain adaptation performance on three benchmark datasets.
arXiv Detail & Related papers (2024-05-26T05:41:42Z)
- Cross-Domain Transfer Learning with CoRTe: Consistent and Reliable Transfer from Black-Box to Lightweight Segmentation Model [25.3403116022412]
CoRTe is a pseudo-labelling function that extracts reliable knowledge from a black-box source model.
We benchmark CoRTe on two synthetic-to-real settings, demonstrating remarkable results when using black-box models to transfer knowledge to lightweight models for a target data distribution.
arXiv Detail & Related papers (2024-02-20T16:35:14Z)
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require access to the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory-bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a minimal MMD sketch follows this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse-condition settings.
arXiv Detail & Related papers (2021-06-22T10:21:39Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation, in the sense of practical deployment:
only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a previous related labeled dataset (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
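The Divide and Contrast entry above mentions a memory-bank-based Maximum Mean Discrepancy (MMD) loss. Below is a minimal sketch of a basic RBF-kernel MMD between two feature groups, assuming PyTorch; the biased estimator, the single fixed bandwidth, and the names are simplifications, and DaC's actual memory-bank estimator may differ.

```python
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD with an RBF kernel.

    x: (n, d) features of source-like samples.
    y: (m, d) features of target-specific samples (e.g. drawn from a
       memory bank of stored features).
    Returns a scalar that is small when the two feature distributions match.
    """
    def k(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Pairwise RBF kernel values between all rows of a and b.
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

Minimizing this scalar as a loss term pulls the two feature groups toward the same distribution, which is how such a loss would reduce the mismatch between source-like and target-specific samples.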