Distill and Fine-tune: Effective Adaptation from a Black-box Source Model
- URL: http://arxiv.org/abs/2104.01539v1
- Date: Sun, 4 Apr 2021 05:29:05 GMT
- Title: Distill and Fine-tune: Effective Adaptation from a Black-box Source Model
- Authors: Jian Liang and Dapeng Hu and Ran He and Jiashi Feng
- Abstract summary: Unsupervised domain adaptation (UDA) aims to transfer knowledge in previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
- Score: 138.12678159620248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To alleviate the burden of labeling, unsupervised domain adaptation (UDA)
aims to transfer knowledge in previous related labeled datasets (source) to a
new unlabeled dataset (target). Despite impressive progress, prior methods
always need to access the raw source data and develop data-dependent alignment
approaches to recognize the target samples in a transductive learning manner,
which may raise privacy concerns for source individuals. Several recent
studies resort to an alternative solution by exploiting the well-trained
white-box model instead of the raw data from the source domain; however, such
a model may still leak the raw data through generative adversarial training. This paper studies a
practical and interesting setting for UDA, where only a black-box source model
(i.e., only network predictions are available) is provided during adaptation in
the target domain. Besides, different neural networks are even allowed to be
employed for different domains. For this new problem, we propose a novel
two-step adaptation framework called Distill and Fine-tune (Dis-tune).
Specifically, Dis-tune first structurally distills the knowledge from the
source model to a customized target model, then unsupervisedly fine-tunes the
distilled model to fit the target domain. To verify the effectiveness, we
consider two UDA scenarios (i.e., closed-set and partial-set), and find that
Dis-tune achieves performance highly competitive with state-of-the-art
approaches.
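The two steps lend themselves to a compact sketch. Below is a minimal PyTorch rendering of the idea, not the paper's exact recipe: `query_black_box` stands in for the prediction-only API of the source model, and the fine-tuning objective (entropy minimization with a diversity term, a common unsupervised choice in source-free adaptation) is an assumption, as are all hyperparameters.

```python
# Minimal sketch of the two-step Dis-tune idea described in the abstract.
# Assumptions (not from the paper): `query_black_box` is the prediction-only
# API of the black-box source model; step 2's objective is an
# information-maximization-style loss, a common stand-in here.
import torch
import torch.nn.functional as F

def distill(target_model, loader, query_black_box, epochs=10, lr=1e-3):
    """Step 1: distill the black-box source model into a customized
    target model, using only its output probabilities as soft labels."""
    opt = torch.optim.SGD(target_model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, _ in loader:                      # target labels are unused
            with torch.no_grad():
                soft = query_black_box(x)        # only predictions cross over
            log_p = F.log_softmax(target_model(x), dim=1)
            loss = F.kl_div(log_p, soft, reduction="batchmean")
            opt.zero_grad(); loss.backward(); opt.step()

def finetune(target_model, loader, epochs=10, lr=1e-4):
    """Step 2: unsupervised fine-tuning of the distilled model on the
    target domain."""
    opt = torch.optim.SGD(target_model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, _ in loader:
            p = F.softmax(target_model(x), dim=1)
            ent = -(p * p.clamp_min(1e-8).log()).sum(1).mean()  # confident per sample
            q = p.mean(0)
            div = (q * q.clamp_min(1e-8).log()).sum()           # diverse on average
            loss = ent + div
            opt.zero_grad(); loss.backward(); opt.step()
```

Note how the sketch respects the setting's key constraint: neither source data nor source weights are ever touched; only black-box predictions flow into the distillation step, and step 2 uses target data alone, so the two domains may use entirely different network architectures.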
Related papers
- Robust Source-Free Domain Adaptation for Fundus Image Segmentation [3.585032903685044]
Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled data in the source domain to a target domain with only unlabelled data.
In this study, we propose a two-stage training strategy for robust domain adaptation.
We propose a novel robust pseudo-label and pseudo-boundary (PLPB) method, which effectively utilizes unlabeled target data to generate pseudo labels and pseudo boundaries.
arXiv Detail & Related papers (2023-10-25T14:25:18Z)
- Curriculum Guided Domain Adaptation in the Dark [0.0]
Domain adaptation in the dark aims to adapt a black-box source-trained model to an unlabeled target domain without access to source data or source model parameters.
We present Curriculum Adaptation for Black-Box (CABB), a curriculum-guided adaptation approach that gradually trains the target model.
Our method utilizes co-training of a dual-branch network to suppress error accumulation resulting from confirmation bias.
arXiv Detail & Related papers (2023-08-02T05:47:56Z)
- RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation [80.03883315743715]
Source-free domain adaptation transfers the source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for black-box domain adaptation, applying both input-level and network-level regularization.
arXiv Detail & Related papers (2022-08-22T18:18:47Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Unsupervised Multi-source Domain Adaptation Without Access to Source Data [58.551861130011886]
Unsupervised Domain Adaptation (UDA) aims to learn a predictor model for an unlabeled domain by transferring knowledge from a separate labeled source domain.
We propose a novel and efficient algorithm which automatically combines the source models with suitable weights so that the combination performs at least as well as the best individual source model (see the sketch after this list).
arXiv Detail & Related papers (2021-04-05T10:45:12Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained over the source data is available, instead of the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when adapting the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
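For the multi-source entry above, here is a hedged sketch of the "suitable weights" idea: learn convex combination weights over frozen source models by minimizing the entropy of the combined prediction on unlabeled target data. The objective, optimizer, and names are illustrative assumptions; the cited paper's actual algorithm and its at-least-as-good-as-best-source guarantee differ in detail.

```python
# Illustrative sketch only: convex weights over frozen source models,
# fit by entropy minimization on unlabeled target data. Not the cited
# paper's exact method.
import torch
import torch.nn.functional as F

def learn_combination_weights(source_models, loader, epochs=5, lr=0.1):
    for m in source_models:
        m.eval()                                   # source models stay frozen
    w_logits = torch.zeros(len(source_models), requires_grad=True)
    opt = torch.optim.Adam([w_logits], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:                        # target labels are unused
            with torch.no_grad():
                preds = torch.stack([F.softmax(m(x), dim=1)
                                     for m in source_models])
            w = F.softmax(w_logits, dim=0)         # convex combination weights
            p = (w[:, None, None] * preds).sum(0)  # weighted ensemble prediction
            ent = -(p * p.clamp_min(1e-8).log()).sum(1).mean()
            opt.zero_grad(); ent.backward(); opt.step()
    return F.softmax(w_logits, dim=0).detach()
```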