Quantifying Uncertainty in Deep Learning Classification with Noise in
Discrete Inputs for Risk-Based Decision Making
- URL: http://arxiv.org/abs/2310.06105v1
- Date: Mon, 9 Oct 2023 19:26:24 GMT
- Title: Quantifying Uncertainty in Deep Learning Classification with Noise in
Discrete Inputs for Risk-Based Decision Making
- Authors: Maryam Kheirandish, Shengfan Zhang, Donald G. Catanzaro, Valeriu Crudu
- Abstract summary: We propose a mathematical framework to quantify prediction uncertainty for Deep Neural Network (DNN) models.
The prediction uncertainty arises from errors in predictors that follow some known finite discrete distribution.
Our proposed framework can support risk-based decision making in applications where discrete errors in predictors are present.
- Score: 1.529943343419486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of Deep Neural Network (DNN) models in risk-based decision making has
attracted extensive attention, with broad applications in medicine, finance,
manufacturing, and quality control. To mitigate prediction-related risks in
decision making, prediction confidence or uncertainty should be assessed
alongside the overall performance of algorithms. Recent studies on Bayesian
deep learning help quantify prediction uncertainty arising from input noise
and model parameters. However, the normality assumption on input noise in these
models limits their applicability to problems involving categorical and
discrete feature variables in tabular datasets. In this paper, we propose a
mathematical framework to quantify prediction uncertainty for DNN models. The
prediction uncertainty arises from errors in predictors that follow some known
finite discrete distribution. We then conducted a case study using the
framework to predict treatment outcomes for tuberculosis patients during their
course of treatment. The results demonstrate that, under a given level of risk,
we can identify risk-sensitive cases, which are prone to misclassification due
to errors in the predictors. Compared with the Monte Carlo dropout method, our
proposed framework better flags cases that are likely to be misclassified. Our
proposed framework for uncertainty quantification in deep learning can support
risk-based decision making in applications where discrete errors in predictors
are present.
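The abstract does not include code; the sketch below is a minimal, illustrative reading of the idea, not the paper's actual framework. The names `predict_proba`, `error_dist`, and `risk_level` are assumptions, and only a single noisy predictor is handled: the possible true values of that predictor are enumerated according to its known finite discrete error distribution, the classifier's outputs are averaged under that distribution, and a case is flagged as risk-sensitive when the probability that its predicted label flips exceeds a chosen risk level.

```python
import numpy as np

def discrete_error_uncertainty(predict_proba, x, feat_idx, error_dist, risk_level=0.05):
    """Propagate a known finite discrete error distribution on one predictor
    through a trained classifier and flag risk-sensitive cases.

    predict_proba : callable mapping inputs of shape (n, d) to class
                    probabilities of shape (n, n_classes), e.g. a trained DNN wrapper
    x             : 1-D array of length d, the recorded predictor values
    feat_idx      : index of the predictor subject to discrete error
    error_dist    : dict {possible_true_value: probability}, the known error
                    distribution of that predictor
    risk_level    : tolerated probability that the predicted label changes
    """
    values = np.array(list(error_dist.keys()), dtype=float)
    weights = np.array(list(error_dist.values()), dtype=float)

    # One candidate input per possible true value of the noisy predictor.
    candidates = np.tile(np.asarray(x, dtype=float), (len(values), 1))
    candidates[:, feat_idx] = values

    class_probs = predict_proba(candidates)            # (n_values, n_classes)
    marginal = weights @ class_probs                    # error-averaged class probabilities

    nominal_label = int(np.argmax(predict_proba(np.asarray(x, dtype=float)[None, :])[0]))
    labels_under_error = class_probs.argmax(axis=1)
    p_label_flips = weights[labels_under_error != nominal_label].sum()

    return {
        "marginal_class_probs": marginal,
        "p_label_flips": p_label_flips,
        "risk_sensitive": p_label_flips > risk_level,
    }
```

Unlike Monte Carlo dropout, which samples dropout masks over the network weights, the uncertainty here comes only from the known error distribution of the recorded predictor values.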
Related papers
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Conformalized Multimodal Uncertainty Regression and Reasoning [0.9205582989348333]
This paper introduces a lightweight uncertainty estimator capable of predicting multimodal (disjoint) uncertainty bounds.
We specifically discuss its application for visual odometry (VO), where environmental features such as flying domain symmetries can result in multimodal uncertainties.
arXiv Detail & Related papers (2023-09-20T02:40:59Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Quantifying Deep Learning Model Uncertainty in Conformal Prediction [1.4685355149711297]
Conformal Prediction is a promising framework for representing model uncertainty.
In this paper, we explore state-of-the-art CP methodologies and their theoretical foundations.
arXiv Detail & Related papers (2023-06-01T16:37:50Z)
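For context on the entry above, a minimal split-conformal sketch for classification follows. This is the standard split-conformal construction, not necessarily the specific variants that survey covers, and the array names are illustrative.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets for a classifier.

    cal_probs  : (n_cal, n_classes) softmax outputs on a held-out calibration set
    cal_labels : (n_cal,) integer true labels
    test_probs : (n_test, n_classes) softmax outputs on new inputs
    alpha      : target miscoverage; each set contains the true label with
                 probability at least 1 - alpha (marginally, under exchangeability)
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected empirical quantile of the calibration scores.
    q_hat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Keep every class whose score is within the calibrated threshold.
    return [np.flatnonzero(1.0 - p <= q_hat) for p in test_probs]
```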
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
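The entry above does not spell out how the credible intervals and the surprise index are computed; the sketch below is one plausible reading, assuming posterior predictive samples of the model output are already available (e.g. from sampling the network's weight posterior). The function and argument names are illustrative.

```python
import numpy as np

def credible_band_and_surprise(posterior_outputs, y_measured, level=0.95):
    """Pointwise credible band and a simple surprise index from posterior samples.

    posterior_outputs : (n_samples, n_steps) outputs simulated with weights drawn
                        from the posterior of a neural state-space model
    y_measured        : (n_steps,) measured outputs on the same input sequence
    level             : credible level of the band
    """
    lo = np.quantile(posterior_outputs, (1.0 - level) / 2.0, axis=0)
    hi = np.quantile(posterior_outputs, 1.0 - (1.0 - level) / 2.0, axis=0)
    # Surprise index: fraction of time steps where the measurement falls outside
    # the band; a high value suggests the model is used out of distribution.
    outside = (y_measured < lo) | (y_measured > hi)
    return lo, hi, float(outside.mean())
```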
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
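Ordinal prediction sets differ from the plain classification sets above in that they must be contiguous ranges of severity grades. The sketch below is one standard way to build such a set and is not necessarily that paper's exact procedure; the threshold `q_hat` is assumed to have been calibrated on held-out data (e.g. with a split-conformal quantile as in the earlier sketch).

```python
import numpy as np

def ordinal_prediction_set(probs, q_hat):
    """Contiguous set of ordinal severity grades around the predicted mode.

    probs : 1-D array of predicted probabilities over ordered grades
    q_hat : probability-mass threshold calibrated so that the true grade is
            covered with probability at least 1 - alpha
    """
    lo = hi = int(np.argmax(probs))
    mass = probs[lo]
    while mass < q_hat and (lo > 0 or hi < len(probs) - 1):
        left = probs[lo - 1] if lo > 0 else -np.inf
        right = probs[hi + 1] if hi < len(probs) - 1 else -np.inf
        if left >= right:   # grow toward the heavier neighbour, staying contiguous
            lo -= 1
            mass += probs[lo]
        else:
            hi += 1
            mass += probs[hi]
    return list(range(lo, hi + 1))
```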
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
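The DEUP entry above describes epistemic uncertainty as predicted generalization error minus an aleatoric estimate. The sketch below only conveys that subtraction; the choice of error-predictor model (a gradient-boosted regressor here) and the source of the aleatoric estimate are placeholders of my own, not DEUP's prescribed components.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_error_predictor(heldout_features, heldout_losses):
    """Fit a secondary model that predicts the main model's out-of-sample loss
    (e.g. per-example NLL measured on held-out data) from example features."""
    g = GradientBoostingRegressor()
    g.fit(heldout_features, heldout_losses)
    return g

def deup_style_epistemic_uncertainty(error_predictor, features, aleatoric_estimate):
    """Predicted total error minus an aleatoric estimate, floored at zero."""
    predicted_total_error = error_predictor.predict(features)
    return np.maximum(predicted_total_error - aleatoric_estimate, 0.0)
```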
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)