Deep Bayesian Gaussian Processes for Uncertainty Estimation in
Electronic Health Records
- URL: http://arxiv.org/abs/2003.10170v1
- Date: Mon, 23 Mar 2020 10:36:52 GMT
- Title: Deep Bayesian Gaussian Processes for Uncertainty Estimation in
Electronic Health Records
- Authors: Yikuan Li, Shishir Rao, Abdelaali Hassaine, Rema Ramakrishnan, Yajie
Zhu, Dexter Canoy, Gholamreza Salimi-Khorshidi, Thomas Lukasiewicz, Kazem
Rahimi
- Abstract summary: We merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
We show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets.
- Score: 30.65770563934045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One major impediment to the wider use of deep learning for clinical decision
making is the difficulty of assigning a level of confidence to model
predictions. Currently, deep Bayesian neural networks and sparse Gaussian
processes are the two main scalable uncertainty estimation methods. However,
deep Bayesian neural networks suffer from a lack of expressiveness, while more
expressive models such as deep kernel learning, an extension of sparse
Gaussian processes, capture uncertainty only in the higher-level latent
space. As a result, the underlying deep learning model lacks interpretability and
ignores uncertainty in the raw data. In this paper, we merge features of the
deep Bayesian learning framework with deep kernel learning to leverage the
strengths of both methods for more comprehensive uncertainty estimation.
Through a series of experiments on predicting the first incidence of heart
failure, diabetes and depression applied to large-scale electronic medical
records, we demonstrate that our method is better at capturing uncertainty than
both Gaussian processes and deep Bayesian neural networks in terms of
indicating data insufficiency and distinguishing true positive and false
positive predictions, with a comparable generalisation performance.
Furthermore, by assessing the accuracy and area under the receiver operating
characteristic curve over the predictive probability, we show that our method
is less susceptible to making overconfident predictions, especially for the
minority class in imbalanced datasets. Finally, we demonstrate how uncertainty
information derived by the model can inform risk factor analysis towards model
interpretability.
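To make the combined architecture concrete, the sketch below illustrates the general idea of placing a sparse variational Gaussian process on top of a stochastic (Bayesian) feature extractor, so that uncertainty is carried from the raw-input encoding through to the latent-space GP. This is a minimal illustration in PyTorch/GPyTorch and not the authors' implementation: Monte Carlo dropout stands in for the fuller variational treatment of network weights described in the paper, and all class names, layer sizes, and numbers of samples are illustrative assumptions.

# Minimal sketch (assumption, not the authors' released code): deep kernel
# learning whose feature extractor is made Bayesian via Monte Carlo dropout,
# so repeated forward passes propagate input-level uncertainty into a
# sparse variational GP classification head.
import torch
import gpytorch


class MCDropoutEncoder(torch.nn.Module):
    """Feature extractor kept in 'train' mode at prediction time so that
    dropout masks are resampled, giving stochastic latent embeddings."""

    def __init__(self, in_dim: int, latent_dim: int = 16):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class SparseGPHead(gpytorch.models.ApproximateGP):
    """Sparse variational GP defined over the learned latent features."""

    def __init__(self, inducing_points):
        variational_dist = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_dist, learn_inducing_locations=True
        )
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, z):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z)
        )


# Toy usage: risk prediction for a batch of (hypothetical) encoded patient records.
encoder = MCDropoutEncoder(in_dim=100)
gp = SparseGPHead(inducing_points=torch.randn(32, 16))
likelihood = gpytorch.likelihoods.BernoulliLikelihood()

x = torch.randn(8, 100)        # stand-in for EHR feature vectors
encoder.train()                # keep dropout active (MC dropout)
with torch.no_grad():
    probs = torch.stack(
        [likelihood(gp(encoder(x))).mean for _ in range(20)]
    )
risk = probs.mean(0)           # predictive probability per patient
uncertainty = probs.var(0)     # spread across MC samples flags low-confidence cases

In practice such a model would be trained by maximising a variational ELBO (e.g. gpytorch.mlls.VariationalELBO) over the training records; separating the dropout-induced spread from the GP's own predictive variance is what allows uncertainty to be attributed to the raw data as well as to the latent space, in the spirit of the approach described above.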
Related papers
- Deep Evidential Learning for Radiotherapy Dose Prediction [0.0]
We present a novel application of an uncertainty-quantification framework called Deep Evidential Learning in the domain of radiotherapy dose prediction.
We found that this model can be effectively harnessed to yield uncertainty estimates that inherit correlations with prediction errors once network training is complete.
arXiv Detail & Related papers (2024-04-26T02:43:45Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications [0.32302664881848275]
Evidential deep learning is a technique that extends parametric deep learning to higher-order distributions.
This study compares the uncertainty estimates derived from evidential neural networks to those obtained from ensembles.
We show that evidential deep learning models attain predictive accuracy rivaling standard methods while robustly quantifying both sources of uncertainty.
arXiv Detail & Related papers (2023-09-22T23:04:51Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Predictive Uncertainty in Medical Image Analysis with Deep Kernel Learning [14.03923026690186]
We propose an uncertainty-aware deep kernel learning model which permits the estimation of the uncertainty in the prediction.
In most cases, the proposed model shows better performance compared to common architectures.
Our model can also be used to detect challenging and controversial test samples.
arXiv Detail & Related papers (2021-06-01T17:09:47Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The discriminative jackknife (DJ) satisfies both criteria, is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)