Prototypical Distillation and Debiased Tuning for Black-box Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2412.20670v1
- Date: Mon, 30 Dec 2024 02:48:34 GMT
- Title: Prototypical Distillation and Debiased Tuning for Black-box Unsupervised Domain Adaptation
- Authors: Jian Liang, Lijun Sheng, Hongmin Liu, Ran He
- Abstract summary: Unsupervised domain adaptation aims to transfer knowledge from a related, label-rich source domain to an unlabeled target domain.
This paper introduces a novel setting called black-box domain adaptation, where the source model is accessible only through an API.
We develop a two-step framework named $\textbf{Pro}$totypical $\textbf{D}$istillation and $\textbf{D}$ebiased tun$\textbf{ing}$ ($\textbf{ProDDing}$).
- Abstract: Unsupervised domain adaptation aims to transfer knowledge from a related, label-rich source domain to an unlabeled target domain, thereby circumventing the high costs associated with manual annotation. Recently, there has been growing interest in source-free domain adaptation, a paradigm in which only a pre-trained model, rather than the labeled source data, is provided to the target domain. Given the potential risk of source data leakage via model inversion attacks, this paper introduces a novel setting called black-box domain adaptation, where the source model is accessible only through an API that provides the predicted label along with the corresponding confidence value for each query. We develop a two-step framework named $\textbf{Pro}$totypical $\textbf{D}$istillation and $\textbf{D}$ebiased tun$\textbf{ing}$ ($\textbf{ProDDing}$). In the first step, ProDDing leverages both the raw predictions from the source model and prototypes derived from the target domain as teachers to distill a customized target model. In the second step, ProDDing keeps fine-tuning the distilled model by penalizing logits that are biased toward certain classes. Empirical results across multiple benchmarks demonstrate that ProDDing outperforms existing black-box domain adaptation methods. Moreover, in the case of hard-label black-box domain adaptation, where only predicted labels are available, ProDDing achieves significant improvements over these methods. Code will be available at \url{https://github.com/tim-learn/ProDDing/}.
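The abstract only names the two steps, so the PyTorch-style sketch below is a rough illustration of how they might be realized, not the authors' implementation (their repository linked above has the real code). It assumes the black-box API's probability outputs have been cached in an `api_probs` tensor indexed by sample, a target model that returns `(features, logits)`, a dataloader that also yields sample indices, and placeholder hyper-parameters (equal teacher mixing, temperature 0.1, bias momentum 0.9).
```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def compute_prototypes(model, loader, api_probs, num_classes):
    # Class prototypes = confidence-weighted mean of target features,
    # weighted by the cached source-API soft predictions.
    feat_sum, weight_sum = None, torch.zeros(num_classes)
    for x, idx in loader:                          # loader yields (images, sample indices)
        feats, _ = model(x)                        # (B, D) features; logits ignored here
        probs = api_probs[idx]                     # (B, C) cached black-box predictions
        if feat_sum is None:
            feat_sum = torch.zeros(num_classes, feats.size(1))
        feat_sum += probs.t() @ feats              # accumulate per-class weighted feature sums
        weight_sum += probs.sum(dim=0)
    return feat_sum / weight_sum.unsqueeze(1).clamp(min=1e-8)


def distillation_step(model, x, idx, api_probs, prototypes, optimizer, temp=0.1):
    # Step 1: distill from two teachers -- the raw source-API predictions and
    # prototype-based soft labels computed in the target feature space.
    feats, logits = model(x)
    teacher_api = api_probs[idx]
    sim = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).t()
    teacher_proto = F.softmax(sim / temp, dim=1)
    teacher = 0.5 * (teacher_api + teacher_proto)  # equal mixing is a placeholder choice
    loss = F.kl_div(F.log_softmax(logits, dim=1), teacher, reduction="batchmean")
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()


def debiased_tuning_step(model, x, logit_bias, optimizer, momentum=0.9):
    # Step 2: keep fine-tuning while penalizing logits that drift toward a few
    # dominant classes; the bias is tracked as a running mean of batch logits.
    _, logits = model(x)
    logit_bias.mul_(momentum).add_((1 - momentum) * logits.mean(dim=0).detach())
    pseudo_labels = (logits - logit_bias).argmax(dim=1).detach()  # debiased pseudo-labels
    loss = F.cross_entropy(logits, pseudo_labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```
In this reading, debiasing amounts to subtracting a running estimate of the mean logits before pseudo-labeling, which discourages collapse onto a few dominant classes; the exact penalty used by ProDDing may differ from this interpretation.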
Related papers
- Robust Target Training for Multi-Source Domain Adaptation (arXiv, 2022-10-04)
We propose a novel Bi-level Optimization based Robust Target Training (BORT$^2$) method for MSDA.
Our proposed method achieves state-of-the-art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
- Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors (arXiv, 2022-05-28)
Domain Adaptation of Black-box Predictors (DABP) aims to learn a model on an unlabeled target domain supervised by a black-box predictor trained on a source domain.
It requires access to neither the source-domain data nor the predictor parameters, thus addressing the data privacy and portability issues of standard domain adaptation.
We propose a new method, named BETA, to incorporate knowledge distillation and noisy label learning into one coherent framework.
- On Universal Black-Box Domain Adaptation (arXiv, 2021-04-10)
We study an arguably least restrictive setting of domain adaptation for practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples (a minimal sketch of this neighborhood-consistency idea follows the list).
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model (arXiv, 2021-04-04)
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
- Open-Set Hypothesis Transfer with Semantic Consistency (arXiv, 2020-10-01)
We introduce a method that focuses on the semantic consistency of target data under transformation.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes coinciding with either source classes or unknown classes.
- Domain Adaptation without Source Data (arXiv, 2020-07-03)
We introduce Source data-Free Domain Adaptation (SFDA) to avoid accessing source data that may contain sensitive information.
Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
Our PrDA method outperforms conventional domain adaptation methods on benchmark datasets.
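For the neighborhood-consistency regularizer mentioned in "On Universal Black-Box Domain Adaptation" above, a minimal sketch is given below; the memory bank of stored target features/predictions, the choice of k=5 neighbors, and cosine similarity are assumptions made for illustration rather than that paper's exact formulation.
```python
import torch
import torch.nn.functional as F


def neighborhood_consistency_loss(feats, probs, bank_feats, bank_probs, k=5):
    # feats/probs: current batch features (B, D) and softmax outputs (B, C);
    # bank_feats/bank_probs: memory bank of stored target features (N, D) and
    # predictions (N, C). Each sample is encouraged to agree with the average
    # (soft) prediction of its k nearest neighbors in feature space.
    sim = F.normalize(feats, dim=1) @ F.normalize(bank_feats, dim=1).t()  # (B, N)
    _, nn_idx = sim.topk(k, dim=1)                                        # (B, k)
    neighbor_probs = bank_probs[nn_idx].mean(dim=1)                       # (B, C)
    # Cross-entropy between each prediction and its neighborhood average.
    return -(neighbor_probs.detach() * probs.clamp(min=1e-8).log()).sum(dim=1).mean()
```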