Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with
Differentiable Expected Calibration Error
- URL: http://arxiv.org/abs/2308.03003v1
- Date: Sun, 6 Aug 2023 03:28:34 GMT
- Authors: Zixin Wang, Yadan Luo, Zhi Chen, Sen Wang, Zi Huang
- Abstract summary: The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
- Score: 50.86671887712424
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The prevalence of domain adaptive semantic segmentation has prompted concerns
regarding source domain data leakage, where private information from the source
domain could inadvertently be exposed in the target domain. To circumvent the
requirement for source data, source-free domain adaptation has emerged as a
viable solution that leverages self-training methods to pseudo-label
high-confidence regions and adapt the model to the target data. However, the
confidence scores obtained are often highly biased due to over-confidence and
class-imbalance issues, which render both model selection and optimization
problematic. In this paper, we propose a novel calibration-guided source-free
domain adaptive semantic segmentation (Cal-SFDA) framework. The core idea is to
estimate the expected calibration error (ECE) from the segmentation
predictions, serving as a strong indicator of the model's generalization
capability to the unlabeled target domain. The estimated ECE scores, in turn,
assist the model training and fair selection in both source training and target
adaptation stages. During model pre-training on the source domain, we ensure
the differentiability of the ECE objective by leveraging the LogSumExp trick
and using ECE scores to select the best source checkpoints for adaptation. To
enable ECE estimation on the target domain without requiring labels, we train a
value net for ECE estimation and apply statistic warm-up on its BatchNorm
layers for stability. The estimated ECE scores assist in determining the
reliability of prediction and enable class-balanced pseudo-labeling by
positively guiding the adaptation progress and inhibiting potential error
accumulation. Extensive experiments on two widely-used synthetic-to-real
transfer tasks show that the proposed approach surpasses previous
state-of-the-art by up to 5.25% of mIoU with fair model selection criteria.
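The binned ECE the abstract builds on can be sketched as follows. This is a minimal NumPy version of the standard evaluation-time estimator (function names are illustrative); the paper's contribution is making the quantity differentiable via the LogSumExp trick, whereas the hard argmax and binning below are not differentiable.

```python
import numpy as np

def softmax_confidence(logits):
    """Top-class probability per sample, computed with the LogSumExp
    trick for numerical stability: log p_max = max_logit - logsumexp."""
    max_logit = logits.max(axis=-1)
    lse = max_logit + np.log(np.exp(logits - max_logit[..., None]).sum(axis=-1))
    return np.exp(max_logit - lse)

def expected_calibration_error(logits, labels, n_bins=10):
    """Binned ECE: bin-size-weighted mean of |accuracy - confidence|."""
    conf = softmax_confidence(logits)
    correct = (logits.argmax(axis=-1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

A well-calibrated, confident model scores near zero; a model that is 99% confident but only 70% accurate in a bin contributes roughly 0.29 times that bin's mass.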
Related papers
- Dirichlet-based Uncertainty Calibration for Active Domain Adaptation [33.33529827699169]
Active domain adaptation (DA) aims to maximally boost the model adaptation on a new target domain by actively selecting limited target data to annotate.
Traditional active learning methods may be less effective since they do not consider the domain shift issue.
We propose a Dirichlet-based Uncertainty Calibration (DUC) approach for active DA, which simultaneously mitigates miscalibration and selects informative target samples.
arXiv Detail & Related papers (2023-02-27T14:33:29Z)
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
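The summary leaves the uncertainty measure unspecified; one common, simple proxy is predictive entropy. The sketch below is illustrative only, and the half-max-entropy threshold and selection rule are assumptions, not the paper's method:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Shannon entropy of the predictive distribution; higher = more uncertain."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

# Hypothetical usage: keep only low-uncertainty target samples for self-training.
probs = np.array([[0.98, 0.01, 0.01],   # confident prediction -> low entropy
                  [0.34, 0.33, 0.33]])  # near-uniform prediction -> high entropy
ent = predictive_entropy(probs)
reliable = ent < 0.5 * np.log(probs.shape[-1])  # threshold at half of max entropy
```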
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
- Source-free Unsupervised Domain Adaptation for Blind Image Quality Assessment [20.28784839680503]
Existing learning-based methods for blind image quality assessment (BIQA) are heavily dependent on large amounts of annotated training data.
In this paper, we take the first step towards the source-free unsupervised domain adaptation (SFUDA) in a simple yet efficient manner.
We present a group of well-designed self-supervised objectives to guide the adaptation of the BN affine parameters towards the target domain.
arXiv Detail & Related papers (2022-07-17T09:42:36Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Because the amounts of annotated data in the source and target domains are imbalanced, existing methods align only the target distribution to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
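As a rough illustration of the second-stage idea, adaptively weighting the two domain losses might look like the linear schedule below; the schedule and function names are assumptions, since the paper's actual weighting strategy is not given in the summary:

```python
def curriculum_weight(epoch, total_epochs):
    """Linearly shift loss weighting from the source domain toward the
    target domain as training progresses (one simple possible schedule)."""
    t = min(max(epoch / total_epochs, 0.0), 1.0)
    return 1.0 - t  # weight on the source loss; the target loss gets the rest

def combined_loss(loss_src, loss_tgt, epoch, total_epochs):
    """Curriculum-weighted sum of the source- and target-domain losses."""
    w = curriculum_weight(epoch, total_epochs)
    return w * loss_src + (1.0 - w) * loss_tgt
```

Early in training the (reliable) source loss dominates; by the end, the target loss does.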
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a trained classification model is available, rather than access to the source data.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
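A minimal sketch of uncertainty-rectified pseudo-labeling in this spirit, using disagreement (KL divergence) between two classifier heads as the uncertainty signal; the two-head setup and the threshold are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rectified_pseudo_labels(probs_main, probs_aux, kl_threshold=0.1, eps=1e-12):
    """Keep a pseudo label only where the two classifier heads agree.
    Disagreement is KL(main || aux); a high value marks an uncertain pixel.
    Returns (pseudo_labels, keep_mask); rejected pixels get label -1 (ignored)."""
    kl = (probs_main * (np.log(probs_main + eps)
                        - np.log(probs_aux + eps))).sum(axis=-1)
    labels = probs_main.argmax(axis=-1)
    keep = kl < kl_threshold
    return np.where(keep, labels, -1), keep
```

Pixels where the heads disagree are excluded from the self-training loss instead of being trusted as ground truth.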