Uncertainty-Supervised Interpretable and Robust Evidential Segmentation
- URL: http://arxiv.org/abs/2509.17098v2
- Date: Tue, 14 Oct 2025 02:38:06 GMT
- Title: Uncertainty-Supervised Interpretable and Robust Evidential Segmentation
- Authors: Yuzhu Li, An Sui, Fuping Wu, Xiahai Zhuang
- Abstract summary: Uncertainty estimation has been widely studied in medical image segmentation as a tool to provide reliability. Previous methods generally lack effective supervision in uncertainty estimation. We propose a self-supervised approach to guide the learning of uncertainty.
- Score: 16.64288013782432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty estimation has been widely studied in medical image segmentation as a tool to provide reliability, particularly in deep learning approaches. However, previous methods generally lack effective supervision in uncertainty estimation, leading to low interpretability and robustness of the predictions. In this work, we propose a self-supervised approach to guide the learning of uncertainty. Specifically, we introduce three principles about the relationships between uncertainty and image gradients around boundaries and noise. Based on these principles, two uncertainty supervision losses are designed. These losses enhance the alignment between model predictions and human interpretation. Accordingly, we introduce novel quantitative metrics for evaluating the interpretability and robustness of uncertainty. Experimental results demonstrate that compared to state-of-the-art approaches, the proposed method can achieve competitive segmentation performance and superior results in out-of-distribution (OOD) scenarios while significantly improving the interpretability and robustness of uncertainty estimation. Code is available at https://github.com/suiannaius/SURE.
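To make the supervision idea concrete, below is a minimal, hypothetical PyTorch sketch of a loss that pushes predicted per-pixel uncertainty to track image-gradient magnitude inside a boundary band, in the spirit of the principles described in the abstract. The function name, the Sobel-based gradient proxy, and the boundary weighting are illustrative assumptions; the paper's actual losses are defined in the SURE repository.

```python
import torch
import torch.nn.functional as F

def gradient_aligned_uncertainty_loss(uncertainty, image, pred_mask):
    """Hypothetical loss: uncertainty (B,1,H,W) should match normalized
    image-gradient magnitude inside the predicted boundary band.
    `pred_mask` is a soft foreground probability map in [0, 1]."""
    # Sobel filters as a simple proxy for the image gradient.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=image.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(image, kx, padding=1)
    gy = F.conv2d(image, ky, padding=1)
    grad_mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    grad_mag = grad_mag / (grad_mag.amax(dim=(2, 3), keepdim=True) + 1e-8)

    # 4p(1-p) peaks where the soft mask is most ambiguous (p = 0.5),
    # so the alignment penalty concentrates on predicted boundaries.
    boundary = 4.0 * pred_mask * (1.0 - pred_mask)
    return (boundary * (uncertainty - grad_mag) ** 2).mean()
```

The boundary weighting keeps the penalty away from flat interior regions, where the abstract's principles about boundaries would not apply.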
Related papers
- Towards Reliable LLM-based Robot Planning via Combined Uncertainty Estimation [68.106428321492]
Large language models (LLMs) demonstrate advanced reasoning abilities, enabling robots to understand natural language instructions and generate high-level plans with appropriate grounding. However, LLM hallucinations present a significant challenge, often leading to overconfident yet potentially misaligned or unsafe plans. We present Combined Uncertainty estimation for Reliable Embodied planning (CURE), which decomposes the uncertainty into epistemic and intrinsic uncertainty, each estimated separately.
arXiv Detail & Related papers (2025-10-09T10:26:58Z)
- An Analysis of Concept Bottleneck Models: Measuring, Understanding, and Mitigating the Impact of Noisy Annotations [4.72358438230281]
Concept bottleneck models (CBMs) ensure interpretability by decomposing predictions into human-interpretable concepts. Yet the annotations used to train CBMs, which enable this transparency, are often noisy. Even moderate corruption simultaneously impairs prediction performance, interpretability, and intervention effectiveness.
arXiv Detail & Related papers (2025-05-22T14:06:55Z)
- Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework. We adopt Bayes risk as the measure of uncertainty and use it to estimate data and model uncertainty separately.
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting that includes multiple models and supports several datasets.
We model two types of uncertainty in the problem to improve performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning [38.34033824352067]
Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.
We propose explainable and actionable Bayesian deep learning methods for accurate uncertainty quantification.
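The title suggests attributing predictive uncertainty to input regions through gradients. A generic sketch under that assumption, not the paper's actual algorithm: backpropagate predictive entropy to the input and read off per-pixel gradient magnitudes.

```python
import torch

def uncertainty_attribution(model, x):
    """Backpropagate predictive entropy to the input; the absolute
    gradient is read as a per-pixel uncertainty attribution map."""
    x = x.clone().requires_grad_(True)
    probs = torch.softmax(model(x), dim=1)
    entropy = -(probs * (probs + 1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    return x.grad.abs()
```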
arXiv Detail & Related papers (2023-04-10T19:14:15Z)
- Logit-based Uncertainty Measure in Classification [18.224344440110862]
We introduce a new, reliable, and agnostic uncertainty measure for classification tasks called logit uncertainty.
We show that this new measure outperforms existing uncertainty measures across different tasks.
arXiv Detail & Related papers (2021-07-06T19:07:16Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
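A minimal sketch of the decomposition as summarized above, with hypothetical names: an auxiliary error predictor estimates total out-of-sample error, and a separately estimated aleatoric term is subtracted off.

```python
import torch

def epistemic_uncertainty(error_predictor, aleatoric_estimator, x):
    predicted_error = error_predictor(x)   # learned estimate of total out-of-sample error
    aleatoric = aleatoric_estimator(x)     # separately estimated irreducible noise
    # Epistemic uncertainty is the reducible remainder; clamp at zero.
    return torch.clamp(predicted_error - aleatoric, min=0.0)
```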
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for estimating a signal and its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Double-Uncertainty Weighted Method for Semi-supervised Learning [32.484750353853954]
We propose a double-uncertainty weighted method for semi-supervised segmentation based on the teacher-student model.
We train the teacher model using Bayesian deep learning to obtain a double uncertainty, i.e., segmentation uncertainty and feature uncertainty.
Our method outperforms the state-of-the-art uncertainty-based semi-supervised methods on two public medical datasets.
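A hedged sketch of the teacher-side weighting described in this entry, assuming MC dropout as the Bayesian approximation; feature uncertainty is omitted, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_teacher(teacher, x, n_samples=8):
    """Keep dropout active and average several stochastic forward passes;
    the entropy of the mean prediction serves as segmentation uncertainty."""
    teacher.train()  # dropout stays on
    probs = torch.stack([torch.softmax(teacher(x), dim=1)
                         for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)  # (B, C, H, W)
    entropy = -(mean_prob * (mean_prob + 1e-8).log()).sum(dim=1, keepdim=True)
    return mean_prob, entropy

def weighted_consistency_loss(student_logits, teacher_prob, uncertainty):
    # Down-weight pixels where the teacher is uncertain.
    weight = torch.exp(-uncertainty)
    per_pixel = F.mse_loss(torch.softmax(student_logits, dim=1),
                           teacher_prob, reduction='none').mean(dim=1, keepdim=True)
    return (weight * per_pixel).mean()
```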
arXiv Detail & Related papers (2020-10-19T08:20:18Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods correlates poorly with prediction error.
We propose a novel method that estimates the target labels and the magnitude of the prediction error in two steps.
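An illustrative two-step sketch matching this summary, with hypothetical names: first fit the reconstruction network, then train a second network to regress the magnitude of the first network's (detached) error.

```python
import torch
import torch.nn.functional as F

def two_step_training_step(recon_net, error_net, x, target, opt_r, opt_e):
    # Step 1: fit the reconstruction network.
    recon = recon_net(x)
    loss_r = F.l1_loss(recon, target)
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

    # Step 2: regress the per-pixel magnitude of the (detached) error.
    err_target = (recon.detach() - target).abs()
    err_pred = error_net(x)
    loss_e = F.mse_loss(err_pred, err_target)
    opt_e.zero_grad()
    loss_e.backward()
    opt_e.step()
    return loss_r.item(), loss_e.item()
```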
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.