Quantifying Uncertainty for Machine Learning Based Diagnostic
- URL: http://arxiv.org/abs/2107.14261v1
- Date: Thu, 29 Jul 2021 18:13:01 GMT
- Title: Quantifying Uncertainty for Machine Learning Based Diagnostic
- Authors: Owen Convery, Lewis Smith, Yarin Gal, Adi Hanuka
- Abstract summary: We use ensemble methods and quantile regression neural networks to explore different ways of creating and analyzing prediction uncertainty on experimental data from the Linac Coherent Light Source at SLAC.
We aim to accurately and confidently predict the current profile or longitudinal phase space images of the electron beam.
- Score: 28.499029801148545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual Diagnostic (VD) is a deep learning tool that can be used to predict a
diagnostic output. VDs are especially useful in systems where measuring the
output is invasive, limited, costly, or risks damaging the output.
Given a prediction, it is necessary to relay how reliable that prediction is.
This is known as 'uncertainty quantification' of a prediction. In this paper,
we use ensemble methods and quantile regression neural networks to explore
different ways of creating and analyzing prediction uncertainty on
experimental data from the Linac Coherent Light Source at SLAC. We aim to
accurately and confidently predict the current profile or longitudinal phase
space images of the electron beam. The ability to make informed decisions under
uncertainty is crucial for reliable deployment of deep learning tools on
safety-critical systems such as particle accelerators.
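The two techniques named in the abstract, ensembling and quantile regression, can be sketched in a few lines. The following is an illustrative sketch only, not the paper's code; all function names and numbers are hypothetical:

```python
from statistics import mean, stdev

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: under-prediction is weighted by q,
    over-prediction by 1 - q. Training one output head per quantile
    (e.g. q = 0.05 and q = 0.95) yields a 90% prediction interval."""
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)

# Ensemble uncertainty: the spread of predictions from independently
# trained models serves as an uncertainty estimate for the mean prediction.
member_preds = [1.02, 0.98, 1.05, 0.95]   # hypothetical ensemble outputs
prediction, uncertainty = mean(member_preds), stdev(member_preds)
```

For example, over-predicting a true value of 10.0 with 12.0 costs 1.9 at q = 0.05 but only 0.1 at q = 0.95; this asymmetry is what pushes the low and high quantile heads apart into an interval.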
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Inadequacy of common stochastic neural networks for reliable clinical decision support [0.4262974002462632]
Widespread adoption of AI for medical decision making is still hindered by ethical and safety-related concerns.
Common deep learning approaches, however, tend toward overconfidence under data shift.
This study investigates their actual reliability in clinical applications.
arXiv Detail & Related papers (2024-01-24T18:49:30Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- LMD: Light-weight Prediction Quality Estimation for Object Detection in Lidar Point Clouds [3.927702899922668]
Object detection on Lidar point cloud data is a promising technology for autonomous driving and robotics.
Uncertainty estimation is a crucial component for downstream tasks, and deep neural networks remain error-prone even for predictions with high confidence.
We propose LidarMetaDetect, a light-weight post-processing scheme for prediction quality estimation.
Our experiments show a significant increase of statistical reliability in separating true from false predictions.
arXiv Detail & Related papers (2023-06-13T15:13:29Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Interpreting Uncertainty in Model Predictions For COVID-19 Diagnosis [0.0]
COVID-19 has created a need for assistive tools for faster diagnosis in addition to typical lab swab testing.
Traditional convolutional networks use point estimates for predictions and fail to capture uncertainty.
We develop a visualization framework to address interpretability of uncertainty and its components, with uncertainty in predictions computed with a Bayesian Convolutional Neural Network.
arXiv Detail & Related papers (2020-10-26T01:27:29Z)
- Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning [25.8227937350516]
We show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
We propose a probabilistic generalization of the popular sample-efficient non-parametric kNN approach.
Our approach enables deep kNN to accurately quantify underlying uncertainties in its prediction.
arXiv Detail & Related papers (2020-07-18T21:36:31Z)
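The Prediction-Powered Inference entry above is concrete enough to sketch. The snippet below is a hedged illustration of its mean estimator under simplifying assumptions (normal approximation, independent samples); the function and variable names are ours, not from the paper:

```python
import math
from statistics import mean, variance

def ppi_mean_ci(y_labeled, preds_labeled, preds_unlabeled, z=1.96):
    """Prediction-powered estimate of E[Y] with an approximate 95% CI.

    The mean of model predictions on a large unlabeled set is 'rectified'
    by the average prediction error measured on a small labeled set, so a
    systematic model bias cancels out of the point estimate."""
    errors = [f - y for f, y in zip(preds_labeled, y_labeled)]
    theta = mean(preds_unlabeled) - mean(errors)   # rectified point estimate
    half_width = z * math.sqrt(variance(preds_unlabeled) / len(preds_unlabeled)
                               + variance(errors) / len(errors))
    return theta - half_width, theta + half_width
```

For instance, if the model over-shoots every label by a constant 0.5, the rectifier removes that bias exactly and the interval is centered on the unbiased estimate.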
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.