Source-free Domain Adaptation via Distributional Alignment by Matching
Batch Normalization Statistics
- URL: http://arxiv.org/abs/2101.10842v1
- Date: Tue, 19 Jan 2021 14:22:33 GMT
- Title: Source-free Domain Adaptation via Distributional Alignment by Matching
Batch Normalization Statistics
- Authors: Masato Ishii and Masashi Sugiyama
- Abstract summary: We propose a novel domain adaptation method for the source-free setting.
We use batch normalization statistics stored in the pretrained model to approximate the distribution of unobserved source data.
Our method achieves competitive performance with state-of-the-art domain adaptation methods.
- Score: 85.75352990739154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel domain adaptation method for the
source-free setting. In this setting, we cannot access source data during
adaptation, while unlabeled target data and a model pretrained with source data
are given. Due to lack of source data, we cannot directly match the data
distributions between domains unlike typical domain adaptation algorithms. To
cope with this problem, we propose utilizing batch normalization statistics
stored in the pretrained model to approximate the distribution of unobserved
source data. Specifically, we fix the classifier part of the model during
adaptation and only fine-tune the remaining feature encoder part so that batch
normalization statistics of the features extracted by the encoder match those
stored in the fixed classifier. Additionally, we maximize the mutual
information between the features and the classifier's outputs to further boost
the classification performance. Experimental results with several benchmark
datasets show that our method achieves competitive performance with
state-of-the-art domain adaptation methods even though it does not require
access to source data.
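The two losses described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names `bn_matching_loss` and `im_loss` are assumptions, and the squared-distance form of the statistics matching stands in for the paper's distributional alignment of the stored batch normalization statistics.

```python
import numpy as np

def bn_matching_loss(feats, stored_mean, stored_var, eps=1e-5):
    """Encourage the batch statistics of encoder features to match the
    BN statistics stored in the pretrained (frozen) classifier.
    Here: squared distance between the two Gaussian parameterizations."""
    batch_mean = feats.mean(axis=0)
    batch_var = feats.var(axis=0)
    return np.mean((batch_mean - stored_mean) ** 2
                   + (np.sqrt(batch_var + eps) - np.sqrt(stored_var + eps)) ** 2)

def im_loss(probs, eps=1e-8):
    """Negative mutual information between inputs and classifier outputs:
    low conditional entropy (confident per-sample predictions) combined with
    high marginal entropy (diverse predictions across the batch)."""
    cond_ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    marginal = probs.mean(axis=0)
    marg_ent = -np.sum(marginal * np.log(marginal + eps))
    return cond_ent - marg_ent
```

In such a setup, only the feature encoder would be updated by gradients of a weighted sum of these two terms, while the classifier (and its stored statistics) stays fixed.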
Related papers
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Unsupervised Adaptation of Semantic Segmentation Models without Source Data [14.66682099621276]
We consider the novel problem of unsupervised domain adaptation of source models for semantic segmentation, without access to the source data.
We propose a self-training approach to extract the knowledge from the source model.
Our framework is able to achieve significant performance gains compared to directly applying the source model on the target data.
arXiv Detail & Related papers (2021-12-04T15:13:41Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data to utilize their distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
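The multi-sample contrastive alignment described above can be sketched generically. This is an InfoNCE-style illustration under stated assumptions, not the ILA-DA implementation: the name `multi_sample_contrastive_loss`, the cosine similarity, and the temperature `tau` are all assumptions.

```python
import numpy as np

def multi_sample_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """Pull the anchor feature toward its similar (positive) samples and
    push it away from dissimilar (negative) ones, InfoNCE-style."""
    def sim(a, b):
        # Cosine similarity of vector `a` against each row of `b`.
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return b @ a
    pos = np.exp(sim(anchor, positives) / tau)
    neg = np.exp(sim(anchor, negatives) / tau)
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))
```

The loss is small when the anchor lies close to its positives and far from its negatives, which is the direction the domain alignment drives the features.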
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Domain Impression: A Source Data Free Domain Adaptation Method [27.19677042654432]
Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels.
This paper proposes a domain adaptation technique that does not need any source data.
Instead of the source data, we are only provided with a classifier that is trained on the source data.
arXiv Detail & Related papers (2021-02-17T19:50:49Z)
- Unsupervised Domain Adaptation in the Absence of Source Data [0.7366405857677227]
We propose an unsupervised method for adapting a source classifier to a target domain that varies from the source domain along natural axes.
We validate our method in scenarios where the distribution shift involves brightness, contrast, and rotation and show that it outperforms fine-tuning baselines in scenarios with limited labeled data.
arXiv Detail & Related papers (2020-07-20T16:22:14Z)
- Sequential Model Adaptation Using Domain Agnostic Internal Distributions [31.3178953771424]
We develop an algorithm for sequential adaptation of a classifier that is trained for a source domain to generalize in an unannotated target domain.
We consider the setting where the model has been trained on annotated source domain data and then needs to be adapted using unannotated target domain data when the source domain data is no longer accessible.
arXiv Detail & Related papers (2020-07-01T03:14:17Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.