Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for
Semantic Segmentation Without Using Source Domain Representations
- URL: http://arxiv.org/abs/2011.08502v2
- Date: Thu, 11 Nov 2021 14:38:12 GMT
- Authors: Marvin Klingner, Jan-Aike Termöhlen, Jacob Ritterbach, Tim
Fingscheidt
- Abstract summary: Unsupervised BatchNorm Adaptation (UBNA) adapts a given pre-trained model to an unseen target domain.
We partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor.
Compared to standard UDA approaches we report a trade-off between performance and usage of source domain representations.
- Score: 35.586031601299034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a solution to the task of "unsupervised domain
adaptation (UDA) of a given pre-trained semantic segmentation model without
relying on any source domain representations". Previous UDA approaches for
semantic segmentation either employed simultaneous training of the model in the
source and target domains, or they relied on an additional network, replaying
source domain knowledge to the model during adaptation. In contrast, we present
our novel Unsupervised BatchNorm Adaptation (UBNA) method, which adapts a given
pre-trained model to an unseen target domain without using -- beyond the
existing model parameters from pre-training -- any source domain
representations (neither data, nor networks) and which can also be applied in
an online setting or using just a few unlabeled images from the target domain
in a few-shot manner. Specifically, we partially adapt the normalization layer
statistics to the target domain using an exponentially decaying momentum
factor, thereby mixing the statistics from both domains. By evaluation on
standard UDA benchmarks for semantic segmentation we show that this is superior
to a model without adaptation and to baseline approaches using statistics from
the target domain only. Compared to standard UDA approaches we report a
trade-off between performance and usage of source domain representations.
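The core mechanism described in the abstract, mixing pre-trained source-domain normalization statistics with target-domain batch statistics via an exponentially decaying momentum factor, can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function name, the initial momentum `alpha0`, and the decay rate are illustrative assumptions.

```python
# Hedged sketch of UBNA-style adaptation of BatchNorm running statistics.
# The momentum alpha_k = alpha0 * decay**k shrinks with each adaptation
# step k, so the running statistics settle on a mixture of the source
# (pre-training) and target (adaptation) domains rather than converging
# fully to the target domain.

def ubna_update(mean, var, batch_mean, batch_var, step,
                alpha0=0.1, decay=0.96):
    """One adaptation step for a single BatchNorm channel's statistics."""
    alpha = alpha0 * decay ** step  # exponentially decaying momentum
    new_mean = (1.0 - alpha) * mean + alpha * batch_mean
    new_var = (1.0 - alpha) * var + alpha * batch_var
    return new_mean, new_var

# Usage: adapt pre-trained statistics over a few unlabeled target batches
# (a few-shot setting, as mentioned in the abstract). Batch statistics
# below are illustrative numbers.
mean, var = 0.0, 1.0  # running stats from source-domain pre-training
target_batches = [(0.8, 1.4), (0.9, 1.3), (0.7, 1.5)]  # (mean, var) per batch
for k, (bm, bv) in enumerate(target_batches):
    mean, var = ubna_update(mean, var, bm, bv, step=k)
```

Because the momentum decays toward zero, later batches contribute progressively less, which keeps the adapted statistics partway between the two domains instead of discarding the source-domain statistics entirely.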
Related papers
- Style Adaptation for Domain-adaptive Semantic Segmentation [2.1365683052370046]
Domain discrepancy causes a significant performance drop when network models trained on source-domain data are applied to the target domain.
We introduce a straightforward approach to mitigate the domain discrepancy, which necessitates no additional parameter calculations and seamlessly integrates with self-training-based UDA methods.
Our proposed method attains a noteworthy UDA performance of 76.93 mIoU on the GTA->Cityscapes dataset, representing a notable improvement of +1.03 percentage points over the previous state-of-the-art results.
arXiv Detail & Related papers (2024-04-25T02:51:55Z)
- Online Continual Domain Adaptation for Semantic Image Segmentation Using Internal Representations [28.549418215123936]
We develop an online UDA algorithm for semantic segmentation of images that improves model generalization on unannotated domains.
We evaluate our approach on well established semantic segmentation datasets and demonstrate it compares favorably against state-of-the-art (SOTA) semantic segmentation methods.
arXiv Detail & Related papers (2024-01-02T04:48:49Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying classes unseen in the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires to access source domain data to utilize their distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)
- Unsupervised Model Adaptation for Continual Semantic Segmentation [15.820660013260584]
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
We provide theoretical analysis and explain conditions under which our algorithm is effective.
Experiments on benchmark adaptation task demonstrate our method achieves competitive performance even compared with joint UDA approaches.
arXiv Detail & Related papers (2020-09-26T04:55:50Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.