Boundary-weighted logit consistency improves calibration of segmentation networks
- URL: http://arxiv.org/abs/2307.08163v1
- Date: Sun, 16 Jul 2023 22:13:28 GMT
- Title: Boundary-weighted logit consistency improves calibration of segmentation networks
- Authors: Neerav Karani, Neel Dey, Polina Golland
- Abstract summary: We show that logit consistency acts as a spatially varying regularizer that prevents overconfident predictions at pixels with ambiguous labels.
Our boundary-weighted extension of this regularizer provides state-of-the-art calibration for prostate and heart MRI segmentation.
- Score: 6.980357450216633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network prediction probabilities and accuracy are often only
weakly correlated. Inherent label ambiguity in training data for image
segmentation aggravates such miscalibration. We show that logit consistency
across stochastic transformations acts as a spatially varying regularizer that
prevents overconfident predictions at pixels with ambiguous labels. Our
boundary-weighted extension of this regularizer provides state-of-the-art
calibration for prostate and heart MRI segmentation.
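To make the mechanism concrete, below is a minimal PyTorch sketch of a boundary-weighted logit-consistency regularizer. The Gaussian-noise perturbation, the neighborhood-based boundary map, and the hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def boundary_weight_map(labels, kernel_size=5):
    # A pixel is treated as near a boundary if a small neighborhood
    # around it contains more than one class label (illustrative choice;
    # the paper's exact weighting may differ).
    labels = labels.float().unsqueeze(1)                 # (B, 1, H, W)
    pad = kernel_size // 2
    local_max = F.max_pool2d(labels, kernel_size, stride=1, padding=pad)
    local_min = -F.max_pool2d(-labels, kernel_size, stride=1, padding=pad)
    return (local_max != local_min).float().squeeze(1)   # (B, H, W)

def boundary_weighted_consistency(model, x, labels, noise_std=0.1):
    # Logit consistency across two stochastic perturbations of the input,
    # weighted toward boundary pixels. Gaussian noise stands in for the
    # stochastic transformations; any augmentation pair would do.
    logits_a = model(x + noise_std * torch.randn_like(x))  # (B, C, H, W)
    logits_b = model(x + noise_std * torch.randn_like(x))
    per_pixel = ((logits_a - logits_b) ** 2).mean(dim=1)   # (B, H, W)
    w = boundary_weight_map(labels)
    return (w * per_pixel).sum() / w.sum().clamp(min=1.0)
```

During training, this term would be added to the usual segmentation loss with a balancing coefficient.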
Related papers
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We propose novel calibration attacks and demonstrate how they can improve model calibration through adversarial calibration training.
arXiv Detail & Related papers (2024-05-22T18:52:09Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
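For flavor, here is a differentiable kernel-based calibration penalty in this family. The construction follows the well-known MMCE idea (Kumar et al., 2018), not necessarily the metrics proposed in this paper; the Laplacian kernel and bandwidth are assumptions.

```python
import torch

def kernel_calibration_penalty(confidences, correct, bandwidth=0.2):
    # Differentiable kernel-based calibration penalty: if confidence
    # tracks accuracy, the signed residuals (confidence - correctness)
    # decorrelate under the kernel and the penalty shrinks toward zero.
    # confidences: (N,) predicted max-probabilities (gradients flow)
    # correct:     (N,) 1.0 where the top prediction was right, else 0.0
    resid = confidences - correct                        # (N,)
    # Laplacian kernel on the confidence values themselves.
    k = torch.exp(-torch.abs(confidences[:, None] - confidences[None, :])
                  / bandwidth)                           # (N, N)
    return (resid[:, None] * resid[None, :] * k).mean()
```

Because the estimate is differentiable, it can be added directly to the empirical-risk objective, as the entry describes.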
- ACLS: Adaptive and Conditional Label Smoothing for Network Calibration [30.80635918457243]
Many approaches to network calibration adopt regularization-based methods that exploit a regularization term to smooth miscalibrated confidences.
We present in this paper an in-depth analysis of existing regularization-based methods, providing a better understanding of how they affect network calibration.
We introduce a novel loss function, dubbed ACLS, that unifies the merits of existing regularization methods, while avoiding the limitations.
arXiv Detail & Related papers (2023-08-23T04:52:48Z)
- Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks [53.23803932357899]
Quantization leads to accuracy loss in image super-resolution (SR) networks.
Existing works address this distribution-mismatch problem by dynamically adapting quantization ranges at test time.
We propose a new quantization-aware training scheme that effectively overcomes the distribution mismatch problem in SR networks.
arXiv Detail & Related papers (2023-07-25T08:50:01Z)
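As a generic illustration of quantization-aware training with an adaptable range, here is a PACT-style learnable clipping sketch; it is one common way to cope with activation-distribution mismatch, not this paper's specific scheme.

```python
import torch
import torch.nn as nn

class LearnedRangeFakeQuant(nn.Module):
    # Fake quantization with a learnable clipping range. Learning the
    # range during training is a generic remedy for distribution
    # mismatch; bit width and initialization here are assumptions.
    def __init__(self, n_bits=4, init_range=6.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(init_range))
        self.levels = 2 ** n_bits - 1

    def forward(self, x):
        x = torch.minimum(torch.relu(x), self.alpha)  # learnable clip
        scale = self.alpha / self.levels
        q = torch.round(x / scale) * scale            # quantize/dequantize
        # Straight-through estimator: use q in the forward pass but let
        # gradients flow as if the rounding were the identity.
        return x + (q - x).detach()
```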
- DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z)
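One way class-similarity information such as DOMINO leverages can enter training is through non-uniform label smoothing. The sketch below is a generic stand-in for that idea, not DOMINO's exact construction; the mixing coefficient and similarity matrix are assumptions.

```python
import torch

def similarity_smoothed_targets(labels, similarity, eps=0.1):
    # Spread the smoothing mass according to a class-similarity matrix
    # (e.g., derived from semantic confusability or a label hierarchy)
    # instead of uniformly across classes.
    # labels:     (N,) integer class ids
    # similarity: (C, C) nonnegative similarities, positive row sums
    probs = similarity / similarity.sum(dim=1, keepdim=True)  # row-normalize
    one_hot = torch.nn.functional.one_hot(labels,
                                          similarity.size(0)).float()
    return (1.0 - eps) * one_hot + eps * probs[labels]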
- Calibrating Segmentation Networks with Margin-based Label Smoothing [19.669173092632]
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
These losses could be viewed as approximations of a linear penalty imposing equality constraints on logit distances.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
arXiv Detail & Related papers (2022-09-09T20:21:03Z)
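This entry and the companion paper below describe the same margin mechanism: a hinge penalty that activates only when a logit's distance to the maximum logit exceeds a margin. A minimal PyTorch sketch, with the margin treated as a hyperparameter:

```python
import torch

def margin_logit_penalty(logits, margin=10.0):
    # Hinge penalty on logit distances: each logit is penalized only when
    # its distance to the maximum logit exceeds the margin, i.e. the
    # inequality constraint max(l) - l_j <= margin enforced softly.
    # logits: (N, C) or (N, C, H, W); the max is taken over the class dim.
    dist = logits.max(dim=1, keepdim=True).values - logits  # >= 0
    return torch.clamp(dist - margin, min=0.0).mean()
```

In training this is added to the cross-entropy loss with a balancing weight; both the margin and the weight are tuned.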
- The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration [21.63888208442176]
Despite the dominant performance of deep neural networks, recent works have shown that they are poorly calibrated.
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
arXiv Detail & Related papers (2021-11-30T14:21:47Z)
- Incorporating Boundary Uncertainty into loss functions for biomedical image segmentation [2.5243042477020836]
We propose Boundary Uncertainty, which uses morphological operations to restrict soft labelling to object boundaries.
We incorporate Boundary Uncertainty with the Dice loss, achieving consistently improved performance across three well-validated biomedical imaging datasets.
arXiv Detail & Related papers (2021-10-31T16:19:57Z)
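A hedged NumPy/SciPy sketch of the idea as described in the entry above: soften labels only within a morphological band around the object boundary. The band width and soft values are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_soft_labels(mask, width=2, soft=0.8):
    # Pixels well inside/outside the object keep hard labels (1/0);
    # pixels within `width` of the boundary get softened values.
    mask = mask.astype(bool)
    inner = binary_erosion(mask, iterations=width)   # confidently foreground
    outer = binary_dilation(mask, iterations=width)  # plausibly foreground
    band = outer & ~inner                            # uncertain boundary band
    labels = mask.astype(float)
    labels[band & mask] = soft                       # soften fg side of band
    labels[band & ~mask] = 1.0 - soft                # soften bg side of band
    return labels
```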
- Spatially Varying Label Smoothing: Capturing Uncertainty from Expert Annotations [19.700271444378618]
The task of image segmentation is inherently noisy due to ambiguities regarding the exact location of boundaries between anatomical structures.
We argue that this information can be extracted from the expert annotations at no extra cost, and it can lead to improved calibration between soft probabilistic predictions and the underlying uncertainty.
We build upon label smoothing (LS), in which a network is trained on 'blurred' versions of the ground-truth labels, an approach that has been shown to be effective for calibrating output predictions.
arXiv Detail & Related papers (2021-04-12T19:35:51Z)
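A minimal sketch of the blurred-label idea: blur each one-hot channel and renormalize, so that uncertainty concentrates where structures meet. This is a generic rendering of the entry above; SVLS's exact spatial weighting may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blurred_soft_labels(label_map, num_classes, sigma=1.0):
    # Convert the hard label map to one-hot, blur each class channel
    # with a Gaussian, and renormalize so each pixel's soft label still
    # sums to one. Away from boundaries the labels stay nearly hard.
    one_hot = np.eye(num_classes)[label_map]              # (H, W, C)
    blurred = np.stack([gaussian_filter(one_hot[..., c], sigma)
                        for c in range(num_classes)], axis=-1)
    return blurred / blurred.sum(axis=-1, keepdims=True)
```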
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
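A sketch of a KS-style, binning-free calibration error as the entry describes: sort predictions by confidence and take the maximum gap between the cumulative confidence and cumulative accuracy curves.

```python
import numpy as np

def ks_calibration_error(confidences, correct):
    # Binning-free calibration error in the Kolmogorov-Smirnov spirit:
    # the maximum gap between the cumulative sums of predicted confidence
    # and empirical correctness plays the role of the KS statistic
    # comparing two empirical distributions.
    order = np.argsort(confidences)
    cum_conf = np.cumsum(confidences[order]) / len(confidences)
    cum_acc = np.cumsum(correct[order]) / len(confidences)
    return np.max(np.abs(cum_conf - cum_acc))
```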