Uncertainty Estimation for Safety-critical Scene Segmentation via
Fine-grained Reward Maximization
- URL: http://arxiv.org/abs/2311.02719v1
- Date: Sun, 5 Nov 2023 17:43:37 GMT
- Title: Uncertainty Estimation for Safety-critical Scene Segmentation via
Fine-grained Reward Maximization
- Authors: Hongzheng Yang, Cheng Chen, Yueyao Chen, Markus Scheppach, Hon Chi
Yip, Qi Dou
- Abstract summary: Uncertainty estimation plays an important role for future reliable deployment of deep segmentation models in safety-critical scenarios.
We propose a novel fine-grained reward maximization (FGRM) framework to address uncertainty estimation.
Our method outperforms state-of-the-art methods by a clear margin on all the calibration metrics of uncertainty estimation.
- Score: 12.79542334840646
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Uncertainty estimation plays an important role for future reliable deployment
of deep segmentation models in safety-critical scenarios such as medical
applications. However, existing methods for uncertainty estimation have been
limited by the lack of explicit guidance for calibrating the prediction risk
and model confidence. In this work, we propose a novel fine-grained reward
maximization (FGRM) framework that addresses uncertainty estimation by directly
optimizing an uncertainty-metric-related reward function with a reinforcement
learning based model tuning algorithm. This benefits model uncertainty
estimation by providing direct optimization guidance for model calibration.
Specifically, our method designs a new uncertainty estimation reward function
using the calibration metric, which is maximized to fine-tune an evidential
learning pre-trained segmentation model for calibrating prediction risk.
Importantly, we introduce an effective fine-grained parameter update scheme,
which reward-weights each network parameter according to its importance as
quantified by the Fisher information matrix. To the
best of our knowledge, this is the first work exploring reward optimization for
model uncertainty estimation in safety-critical vision tasks. The effectiveness
of our method is demonstrated on two large safety-critical surgical scene
segmentation datasets under two different uncertainty estimation settings.
Requiring only a single real-time forward pass at inference, our method outperforms
state-of-the-art methods by a clear margin on all the calibration metrics of
uncertainty estimation, while maintaining a high task accuracy for the
segmentation results. Code is available at
\url{https://github.com/med-air/FGRM}.
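The Fisher-weighted parameter update described in the abstract can be sketched in miniature as follows. This is an illustrative toy, not the authors' implementation: the diagonal Fisher approximation, the normalization, and all numbers are assumptions chosen for clarity.

```python
def fisher_diagonal(per_sample_grads):
    """Diagonal Fisher information, approximated as the mean squared
    per-sample gradient of the log-likelihood (one entry per parameter)."""
    n = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    return [sum(g[j] ** 2 for g in per_sample_grads) / n for j in range(dim)]

def fgrm_style_update(params, reward_grad, fisher_diag, lr=1e-3, eps=1e-8):
    """One gradient-ascent step on the reward; each parameter's step is
    scaled by its normalized Fisher importance (fine-grained weighting)."""
    f_max = max(fisher_diag) + eps
    return [p + lr * (f / f_max) * g
            for p, g, f in zip(params, reward_grad, fisher_diag)]

# Toy example: 4 parameters, 3 per-sample log-likelihood gradients.
grads = [[0.2, -1.0, 0.1, 0.0],
         [0.3, -0.8, 0.0, 0.1],
         [0.1, -1.2, 0.2, 0.0]]
F = fisher_diagonal(grads)
theta = fgrm_style_update([0.0] * 4, [1.0] * 4, F)
# Parameters with high Fisher importance receive the largest update.
```

In the paper's setting, the reward gradient would come from a calibration-metric reward estimated with an RL-style objective rather than the unit gradient used here.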
Related papers
- Information-Theoretic Safe Bayesian Optimization [59.758009422067005]
We consider a sequential decision making task, where the goal is to optimize an unknown function without evaluating parameters that violate an unknown (safety) constraint.
Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case.
We propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.
arXiv Detail & Related papers (2024-02-23T14:31:10Z) - Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z) - Lightweight, Uncertainty-Aware Conformalized Visual Odometry [2.429910016019183]
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics.
Emerging edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties.
This paper presents a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands.
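A generic split conformal band of the kind this summary alludes to can be sketched as follows. This is a textbook split-conformal construction under stated assumptions (exchangeable calibration residuals), not the paper's specific VO framework.

```python
import math

def conformal_band(calib_errors, alpha=0.1):
    """Split conformal prediction: the finite-sample-corrected (1 - alpha)
    quantile of absolute calibration errors gives a symmetric band that
    covers a new error with probability >= 1 - alpha (exchangeability)."""
    n = len(calib_errors)
    scores = sorted(abs(e) for e in calib_errors)
    k = math.ceil((n + 1) * (1 - alpha))  # corrected quantile index
    return scores[min(k, n) - 1]

# Toy example: residuals of a predictor on a held-out calibration set.
residuals = [0.05, -0.02, 0.10, 0.07, -0.01, 0.03, -0.08, 0.04, 0.06, -0.09]
q = conformal_band(residuals, alpha=0.2)
# An interval for a new prediction y_hat would be [y_hat - q, y_hat + q].
```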
arXiv Detail & Related papers (2023-03-03T20:37:55Z) - Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty
Optimization [11.456242421204298]
In a well-calibrated model, uncertainty estimates should perfectly correlate with model error.
We propose a novel error aligned uncertainty optimization method and introduce a trainable loss function to guide the models to yield good quality uncertainty estimates aligning with the model error.
We demonstrate that our method improves average displacement error by 1.69% and 4.69%, and the uncertainty correlation with model error by 17.22% and 19.13% as quantified by Pearson correlation coefficient on two state-of-the-art baselines.
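The Pearson correlation used above as the alignment metric can be computed directly; this minimal sketch uses synthetic toy data, not the paper's trajectories.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient; here it quantifies how well
    per-sample uncertainty estimates align with per-sample errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy example: uncertainty that scales linearly with error gives r = 1.
errors = [0.1, 0.4, 0.2, 0.8]
uncertainty = [1.0, 4.0, 2.0, 8.0]
r = pearson(uncertainty, errors)
```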
arXiv Detail & Related papers (2022-12-09T12:33:26Z) - A Geometric Method for Improved Uncertainty Estimation in Real-time [13.588210692213568]
Post-hoc model calibrations can improve models' uncertainty estimations without the need for retraining.
Our work puts forward a geometric-based approach for uncertainty estimation.
We show that our method yields better uncertainty estimations than recently proposed approaches.
arXiv Detail & Related papers (2022-06-23T09:18:05Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In
Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - Improving Deterministic Uncertainty Estimation in Deep Learning for
Classification and Regression [30.112634874443494]
We propose a new model that estimates uncertainty in a single forward pass.
Our approach combines a bi-Lipschitz feature extractor with an inducing point approximate Gaussian process, offering robust and principled uncertainty estimation.
arXiv Detail & Related papers (2021-02-22T23:29:12Z) - Improving model calibration with accuracy versus uncertainty
optimization [17.056768055368384]
A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
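One common formulation of such an accuracy-uncertainty anchor is the Accuracy-vs-Uncertainty (AvU) statistic. The sketch below assumes binary certainty labels obtained by thresholding uncertainty, which is an illustrative simplification rather than the paper's exact trainable loss.

```python
def avu(correct, uncertain):
    """Accuracy-vs-Uncertainty: fraction of predictions that are either
    accurate-and-certain or inaccurate-and-uncertain. A model whose
    uncertainty perfectly tracks its errors scores 1.0."""
    n_ac = sum(c and not u for c, u in zip(correct, uncertain))
    n_iu = sum((not c) and u for c, u in zip(correct, uncertain))
    return (n_ac + n_iu) / len(correct)

# Toy example: 3 of 4 predictions have uncertainty aligned with accuracy.
correct = [True, True, False, False]
uncertain = [False, True, True, True]
score = avu(correct, uncertain)
```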
arXiv Detail & Related papers (2020-12-14T20:19:21Z) - Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via
Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The DJ satisfies (1) and (2), is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
arXiv Detail & Related papers (2020-06-29T13:36:52Z) - Efficient Ensemble Model Generation for Uncertainty Estimation with
Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z) - Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.