Quantifying Deep Learning Model Uncertainty in Conformal Prediction
- URL: http://arxiv.org/abs/2306.00876v2
- Date: Thu, 4 Jan 2024 02:02:51 GMT
- Title: Quantifying Deep Learning Model Uncertainty in Conformal Prediction
- Authors: Hamed Karimi, Reza Samavi
- Abstract summary: Conformal Prediction (CP) is a promising framework for representing model uncertainty.
In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations.
- Score: 1.4685355149711297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Precise estimation of predictive uncertainty in deep neural networks is a
critical requirement for reliable decision-making in machine learning and
statistical modeling, particularly in the context of medical AI. Conformal
Prediction (CP) has emerged as a promising framework for representing model
uncertainty by providing well-calibrated confidence levels for individual
predictions. However, the quantification of model uncertainty in conformal
prediction remains an active research area that is yet to be fully addressed.
In this paper, we explore state-of-the-art CP methodologies and their
theoretical foundations. We propose a probabilistic approach to quantifying
the model uncertainty derived from the prediction sets produced by conformal
prediction, and we provide certified boundaries for the computed uncertainty.
By doing so, we allow model uncertainty as measured by CP to be compared with
other uncertainty quantification methods such as Bayesian (e.g., MC-Dropout
and DeepEnsemble) and Evidential approaches.
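For orientation, conformal prediction converts a model's softmax outputs into prediction sets with a finite-sample coverage guarantee; those sets are the raw material from which the paper's uncertainty measure is derived. Below is a minimal sketch of standard split conformal prediction in Python, assuming a held-out calibration set and the common 1 - p(true class) nonconformity score. The function names are illustrative, and set size is used only as a crude uncertainty proxy, not the paper's certified measure.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: find the nonconformity threshold q_hat.

    cal_probs: (n, K) softmax outputs on a held-out calibration set.
    cal_labels: (n,) integer class labels.
    Score: 1 - p(true class), a common choice (not necessarily the paper's).
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(test_probs, q_hat):
    """Boolean mask over classes: include class y whenever 1 - p(y) <= q_hat."""
    return test_probs >= 1.0 - q_hat

# Toy usage with random probability vectors standing in for a trained model.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=500)
cal_labels = rng.integers(0, 5, size=500)
q_hat = calibrate_threshold(cal_probs, cal_labels)

test_probs = rng.dirichlet(np.ones(5), size=3)
sets = prediction_set(test_probs, q_hat)
print(sets.sum(axis=1))  # set size per input: a crude uncertainty proxy
```

Larger sets flag inputs where the model is less certain; under exchangeability the marginal guarantee P(y in set) >= 1 - alpha holds.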
Related papers
- Introducing an Improved Information-Theoretic Measure of Predictive Uncertainty [6.3398383724486544]
Predictive uncertainty is commonly measured by the entropy of the Bayesian model average (BMA) predictive distribution.
We introduce a theoretically grounded measure that overcomes the limitations of this entropy-based approach.
We find that our introduced measure behaves more reasonably in controlled synthetic tasks (the entropy-of-BMA baseline is sketched after this entry).
arXiv Detail & Related papers (2023-11-14T16:55:12Z)
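The entropy-of-BMA baseline this entry critiques is compact enough to state in code. A minimal sketch, assuming M softmax samples (e.g., from an ensemble or MC-Dropout) stand in for posterior samples; the total/aleatoric/epistemic split via mutual information is the standard decomposition, not the paper's proposed measure.

```python
import numpy as np

def bma_entropy_decomposition(member_probs, eps=1e-12):
    """Standard entropy-based decomposition from M posterior samples.

    member_probs: (M, K) softmax outputs for a single input,
    one row per ensemble member / posterior sample.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    mean_p = member_probs.mean(axis=0)              # BMA predictive distribution
    total = -np.sum(mean_p * np.log(mean_p + eps))  # H[E[p]]
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(axis=1).mean()
    return total, aleatoric, total - aleatoric      # epistemic = mutual information

# Two confident but disagreeing members: low aleatoric, high epistemic.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01]])
print(bma_entropy_decomposition(probs))
```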
- Quantifying Uncertainty in Deep Learning Classification with Noise in Discrete Inputs for Risk-Based Decision Making [1.529943343419486]
We propose a mathematical framework to quantify prediction uncertainty for Deep Neural Network (DNN) models.
The prediction uncertainty arises from errors in predictors that follow a known finite discrete distribution.
Our proposed framework can support risk-based decision making in applications where discrete errors in predictors are present.
arXiv Detail & Related papers (2023-10-09T19:26:24Z)
- Conformalized Multimodal Uncertainty Regression and Reasoning [0.9205582989348333]
This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds.
We specifically discuss its application to visual odometry (VO), where environmental features such as flying domain symmetries can result in multimodal uncertainties (a standard unimodal conformal-regression baseline is sketched after this entry for contrast).
arXiv Detail & Related papers (2023-09-20T02:40:59Z)
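For contrast with the multimodal (disjoint) bounds described above, this is the standard unimodal split-conformal regression interval. The toy data and the `predict` stand-in are illustrative assumptions, unrelated to the paper's visual odometry estimator.

```python
import numpy as np

def conformal_interval(predict, X_cal, y_cal, x_new, alpha=0.1):
    """Standard split-conformal regression interval around a point predictor.

    Returns one symmetric interval; the paper above instead learns
    multimodal (possibly disjoint) bounds, which this baseline cannot express.
    """
    residuals = np.abs(y_cal - predict(X_cal))        # nonconformity scores
    n = len(residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(residuals, level, method="higher")
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q

# Toy usage with a trivial stand-in for a fitted model.
rng = np.random.default_rng(1)
X_cal = rng.normal(size=100)
y_cal = 2.0 * X_cal + rng.normal(scale=0.5, size=100)
predict = lambda x: 2.0 * x                           # pretend fitted model
print(conformal_interval(predict, X_cal, y_cal, 0.3))
```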
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Model-free generalized fiducial inference [0.0]
I propose and develop ideas for a model-free statistical framework for imprecise probabilistic prediction inference.
This framework facilitates uncertainty quantification in the form of prediction sets that offer finite sample control of type 1 errors.
I consider the theoretical and empirical properties of a precise probabilistic approximation to the model-free imprecise framework.
arXiv Detail & Related papers (2023-07-24T01:58:48Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss in order to quantify uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing variability that deterministic approaches fail to represent.
The effect of dropout weights and long-term prediction horizons on future state uncertainty is studied (a minimal MC-Dropout sketch follows this entry).
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
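Since the entry above studies dropout weights, the underlying Bayesian approximation is plausibly MC-Dropout, one of the baselines named in the main abstract. A minimal PyTorch sketch with an untrained, made-up network; the input features and output dimensionality are hypothetical.

```python
import torch
import torch.nn as nn

# A small regressor with dropout; MC-Dropout keeps dropout ACTIVE at test
# time and treats repeated stochastic forward passes as posterior samples.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),            # e.g., a 2-D future position (hypothetical)
)

def mc_dropout_predict(model, x, n_samples=50):
    """Mean prediction and per-output spread across stochastic passes."""
    model.train()                # keep dropout on (the MC-Dropout trick)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 4)            # one made-up pedestrian state vector
mean, std = mc_dropout_predict(model, x)
print(mean, std)                 # std is the per-output uncertainty estimate
```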
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation (a generic deep-ensemble sketch follows this entry).
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
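The entry above refines ensemble-based estimation with a conditional latent variable model; the basic deep-ensemble signal it builds on is simply disagreement across independently initialized members. A toy sketch with untrained members, illustrative only (in practice each member is trained on the full dataset first).

```python
import torch
import torch.nn as nn

def make_member():
    # Each member is independently initialized; disagreement across members
    # is the ensemble's epistemic uncertainty signal.
    return nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

ensemble = [make_member() for _ in range(5)]

def ensemble_predict(members, x):
    """Mean and std of member predictions for input x."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in members])
    return preds.mean(dim=0), preds.std(dim=0)

mean, std = ensemble_predict(ensemble, torch.randn(1, 4))
print(mean, std)
```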
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty (a loose sketch of this subtraction follows this entry).
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
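DEUP's core move, predicting total generalization error and then subtracting an aleatoric estimate, can be sketched loosely as follows. The toy data, the scikit-learn error predictor, and the constant aleatoric function are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_error_predictor(X, per_example_loss):
    """Fit an auxiliary model g(x) to the main model's observed per-example loss."""
    return GradientBoostingRegressor().fit(X, per_example_loss)

def epistemic_uncertainty(g, aleatoric_fn, X_new):
    """DEUP-style estimate: predicted total error minus the aleatoric part."""
    total = g.predict(X_new)
    return np.maximum(total - aleatoric_fn(X_new), 0.0)

# Toy data: loss grows away from the origin, plus irreducible noise.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
loss = 0.1 * (X ** 2).sum(axis=1) + rng.gamma(1.0, 0.05, size=200)

g = fit_error_predictor(X, loss)
aleatoric_fn = lambda X_new: np.full(len(X_new), 0.05)  # hypothetical constant noise
print(epistemic_uncertainty(g, aleatoric_fn, X[:3]))
```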