DA-Cal: Towards Cross-Domain Calibration in Semantic Segmentation
- URL: http://arxiv.org/abs/2602.20860v1
- Date: Tue, 24 Feb 2026 13:03:41 GMT
- Title: DA-Cal: Towards Cross-Domain Calibration in Semantic Segmentation
- Authors: Wangkai Li, Rui Sun, Zhaoyang Li, Yujia Chen, Tianzhu Zhang
- Abstract summary: DA-Cal is a cross-domain framework that transforms target domain calibration into soft pseudo-label optimization. Experiments demonstrate that DA-Cal seamlessly integrates with existing self-training frameworks across multiple UDA segmentation benchmarks.
- Score: 37.89728131013411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While existing unsupervised domain adaptation (UDA) methods greatly enhance target domain performance in semantic segmentation, they often neglect network calibration quality, resulting in misalignment between prediction confidence and actual accuracy -- a significant risk in safety-critical applications. Our key insight emerges from observing that performance degrades substantially when soft pseudo-labels replace hard pseudo-labels in cross-domain scenarios due to poor calibration, despite the theoretical equivalence of perfectly calibrated soft pseudo-labels to hard pseudo-labels. Based on this finding, we propose DA-Cal, a dedicated cross-domain calibration framework that transforms target domain calibration into soft pseudo-label optimization. DA-Cal introduces a Meta Temperature Network to generate pixel-level calibration parameters and employs bi-level optimization to establish the relationship between soft pseudo-labels and UDA supervision, while utilizing complementary domain-mixing strategies to prevent overfitting and reduce domain discrepancies. Experiments demonstrate that DA-Cal seamlessly integrates with existing self-training frameworks across multiple UDA segmentation benchmarks, significantly improving target domain calibration while delivering performance gains without inference overhead. The code will be released.
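The abstract describes generating pixel-level calibration parameters and using them to turn logits into soft pseudo-labels. A minimal sketch of per-pixel temperature scaling is below; the function and array shapes are illustrative assumptions, since the abstract does not specify the Meta Temperature Network's architecture or interface:

```python
import numpy as np

def pixel_temperature_scale(logits, temperatures):
    """Per-pixel temperature scaling of segmentation logits into soft
    pseudo-labels. logits: (C, H, W) raw class scores; temperatures:
    (H, W) positive per-pixel values, standing in for the output of the
    paper's Meta Temperature Network. Returns probabilities (C, H, W)."""
    scaled = logits / temperatures[None, :, :]   # broadcast over the class axis
    scaled -= scaled.max(axis=0, keepdims=True)  # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum(axis=0, keepdims=True)

# Toy example: a 2x2 image with 3 classes. T > 1 softens predictions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2, 2))
soft = pixel_temperature_scale(logits, np.full((2, 2), 2.0))
sharp = pixel_temperature_scale(logits, np.full((2, 2), 1.0))
```

With temperatures above 1 the per-pixel distributions become less peaked (lower confidence), which is the usual remedy for overconfident predictions under domain shift.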
Related papers
- Gradient Rectification for Robust Calibration under Distribution Shift [28.962407770230882]
Deep neural networks often produce overconfident predictions, undermining their reliability in safety-critical applications. We propose a novel calibration framework that operates without access to target domain information. Our method significantly improves calibration under distribution shift while maintaining strong in-distribution performance.
arXiv Detail & Related papers (2025-08-27T12:28:26Z) - Uncertainty Awareness on Unsupervised Domain Adaptation for Time Series Data [49.36938105983916]
Unsupervised domain adaptation methods seek to generalize effectively on unlabeled test data. We propose incorporating multi-scale feature extraction and uncertainty estimation to improve the model's generalization and robustness across domains.
arXiv Detail & Related papers (2025-08-26T03:13:08Z) - Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z) - PseudoCal: A Source-Free Approach to Unsupervised Uncertainty
Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data.
arXiv Detail & Related papers (2023-07-14T17:21:41Z) - Correlated Adversarial Joint Discrepancy Adaptation Network [6.942003070153651]
We propose a novel approach called the correlated adversarial joint discrepancy adaptation network (CAJNet).
By training the joint features, we can align the marginal and conditional distributions between the two domains.
In addition, we introduce a probability-based top-$\mathcal{K}$ correlated label ($\mathcal{K}$-label), which is a powerful indicator of the target domain.
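The core of a top-$\mathcal{K}$ label is keeping the $K$ most probable classes per target sample as a candidate label set rather than committing to a single hard label. A minimal sketch, with function name and data purely illustrative:

```python
import numpy as np

def topk_labels(probs, k=3):
    """Return the indices of the k highest-probability classes per sample.
    probs: (N, C) softmax outputs. A generic stand-in for CAJNet's
    probability-based K-label; the paper's exact construction may differ."""
    # argsort ascending, reverse to descending, keep the first k columns
    return np.argsort(probs, axis=1)[:, ::-1][:, :k]

probs = np.array([[0.1, 0.6, 0.2, 0.1],
                  [0.5, 0.1, 0.3, 0.1]])
candidates = topk_labels(probs, k=2)   # classes {1, 2} and {0, 2}
```

Keeping several candidate classes hedges against the noisy single-class argmax that a domain-shifted model tends to produce.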
arXiv Detail & Related papers (2021-05-18T19:52:08Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
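The multi-sample contrastive loss described above pulls an anchor feature toward similar cross-domain samples and pushes it away from dissimilar ones. A generic InfoNCE-style sketch follows; the function name, temperature, and toy vectors are assumptions, not ILA-DA's exact objective:

```python
import numpy as np

def multi_sample_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style contrastive loss over multiple samples.
    anchor: (D,), positives: (P, D), negatives: (N, D),
    all assumed L2-normalized feature vectors."""
    pos = np.exp(positives @ anchor / tau)   # similarities to similar samples
    neg = np.exp(negatives @ anchor / tau)   # similarities to dissimilar samples
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))

anchor = np.array([1.0, 0.0])
# Aligned positive and orthogonal negative: loss is near zero...
loss_aligned = multi_sample_contrastive_loss(
    anchor, positives=np.array([[1.0, 0.0]]), negatives=np.array([[0.0, 1.0]]))
# ...while swapping the roles yields a large loss.
loss_swapped = multi_sample_contrastive_loss(
    anchor, positives=np.array([[0.0, 1.0]]), negatives=np.array([[1.0, 0.0]]))
```

Minimizing such a loss over cross-domain pairs drives source and target features of the same class together, which is the alignment effect the abstract refers to.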
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised
Domain Adaptation in Semantic Segmentation [14.8840510432657]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain.
Existing self-training based UDA approaches assign pseudo labels for target data and treat them as ground truth labels.
However, pseudo labels generated by the model optimized on the source domain inevitably contain noise due to the domain gap.
arXiv Detail & Related papers (2021-03-09T06:57:03Z) - SENTRY: Selective Entropy Optimization via Committee Consistency for
Unsupervised Domain Adaptation [14.086066389856173]
We propose a UDA algorithm that judges the reliability of a target instance based on its predictive consistency under a committee of random image transformations.
Our algorithm then selectively minimizes predictive entropy to increase confidence on highly consistent target instances, while maximizing predictive entropy to reduce confidence on highly inconsistent ones.
In combination with pseudo-label based approximate target class balancing, our approach leads to significant improvements over the state-of-the-art on 27/31 domain shifts from standard UDA benchmarks as well as benchmarks designed to stress-test adaptation under label distribution shift.
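The committee-consistency rule above can be sketched as follows; treating "all argmax votes agree" as the consistency criterion is a simplification of the paper's actual rule, and the helper names are illustrative:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector."""
    return float(-(p * np.log(p + eps)).sum())

def selective_entropy_sign(committee_probs):
    """Decide the entropy objective's sign for one target instance.
    committee_probs: (T, C) softmax outputs of the same image under T
    random transformations. Agreement -> +1 (minimize entropy, raise
    confidence); disagreement -> -1 (maximize entropy, lower it)."""
    votes = committee_probs.argmax(axis=1)
    return 1 if np.all(votes == votes[0]) else -1

consistent = np.array([[0.7, 0.2, 0.1],
                       [0.6, 0.3, 0.1],
                       [0.8, 0.1, 0.1]])   # all transforms vote class 0
inconsistent = np.array([[0.7, 0.2, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.3, 0.3, 0.4]])  # votes split across classes
```

The returned sign would multiply the entropy term in the training loss, so reliable instances are sharpened while unreliable ones are flattened toward the uniform distribution.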
arXiv Detail & Related papers (2020-12-21T16:24:50Z) - Selective Pseudo-Labeling with Reinforcement Learning for
Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.