Functional Space Variational Inference for Uncertainty Estimation in
Computer Aided Diagnosis
- URL: http://arxiv.org/abs/2005.11797v2
- Date: Thu, 28 May 2020 16:47:06 GMT
- Title: Functional Space Variational Inference for Uncertainty Estimation in
Computer Aided Diagnosis
- Authors: Pranav Poduval, Hrushikesh Loya, Amit Sethi
- Abstract summary: Deep neural networks have revolutionized medical image analysis and disease diagnosis.
It is difficult to generate well-calibrated probabilistic outputs for such networks, which makes them uninterpretable black boxes.
We show that by shifting Bayesian inference to the functional space we can craft meaningful priors that give better calibrated uncertainty estimates at a much lower computational cost.
- Score: 2.1940032945704817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have revolutionized medical image analysis and disease
diagnosis. Despite their impressive performance, it is difficult to generate
well-calibrated probabilistic outputs for such networks, which makes them
uninterpretable black boxes. Bayesian neural networks provide a principled
approach for modelling uncertainty and increasing patient safety, but they have
a large computational overhead and provide limited improvement in calibration.
In this work, by taking skin lesion classification as an example task, we show
that by shifting Bayesian inference to the functional space we can craft
meaningful priors that give better calibrated uncertainty estimates at a much
lower computational cost.
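The abstract describes regularising a classifier with a prior placed directly on the network's outputs (functions) rather than on its weights. The sketch below is only an illustration of that general idea, not the authors' exact method: it assumes a dropout classifier, a uniform-categorical prior over class probabilities at a set of "measurement" inputs, and a Monte-Carlo estimate of the function-space KL term; all names (DropoutClassifier, functional_kl_to_uniform, x_measure, beta) and sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only (not the paper's exact algorithm): a classifier
# trained with dropout, plus a function-space regulariser that pulls the
# predictive distribution at "measurement" inputs towards a chosen prior over
# function outputs (here: a uniform categorical, i.e. a maximum-entropy prior).

class DropoutClassifier(nn.Module):
    def __init__(self, in_dim, num_classes, hidden=128, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def functional_kl_to_uniform(model, x_measure, n_samples=8):
    """KL( q(y|x) || prior(y|x) ) at measurement points, with a uniform prior.

    q(y|x) is the Monte-Carlo average of the predictive distribution under
    dropout; the uniform prior encodes "be maximally uncertain away from data".
    """
    model.train()  # keep dropout active so each pass is a different function draw
    probs = torch.stack(
        [F.softmax(model(x_measure), dim=-1) for _ in range(n_samples)]
    ).mean(0)
    num_classes = probs.shape[-1]
    log_prior = -torch.log(torch.tensor(float(num_classes)))   # log(1/K)
    kl = (probs * (probs.clamp_min(1e-8).log() - log_prior)).sum(-1)
    return kl.mean()

def training_step(model, optimiser, x, y, x_measure, beta=0.1):
    """Cross-entropy on labelled data plus a weighted functional KL term."""
    optimiser.zero_grad()
    nll = F.cross_entropy(model(x), y)
    f_kl = functional_kl_to_uniform(model, x_measure)
    loss = nll + beta * f_kl
    loss.backward()
    optimiser.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = DropoutClassifier(in_dim=32, num_classes=7)   # e.g. seven lesion classes (assumption)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(64, 32)
    y = torch.randint(0, 7, (64,))
    x_measure = torch.randn(64, 32) * 3.0                 # hypothetical out-of-distribution points
    for step in range(5):
        print(training_step(model, opt, x, y, x_measure))
```

The choice of measurement points and of the output-space prior is where a method of this kind encodes "meaningful priors"; the uniform prior above is only one simple possibility.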
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles [4.249986624493547]
Ensemble deep learning has been shown to achieve high predictive accuracy and uncertainty estimation.
However, perturbations in the input images at test time can still lead to significant performance degradation.
LaDiNE is a novel and robust probabilistic method that is capable of inferring informative and invariant latent variables from the input images.
arXiv Detail & Related papers (2023-10-24T15:53:07Z)
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [71.14237199051276]
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z)
- Fixing Overconfidence in Dynamic Neural Networks [21.148621590039582]
We present an efficient approach for quantifying uncertainty in dynamic neural networks.
We show improvements on CIFAR-100, ImageNet, and Caltech-256 in terms of accuracy, capturing uncertainty, and calibration error.
arXiv Detail & Related papers (2023-02-13T13:45:50Z)
- BayesNetCNN: incorporating uncertainty in neural networks for image-based classification tasks [0.29005223064604074]
We propose a method to convert a standard neural network into a Bayesian neural network.
We estimate the variability of predictions by sampling different networks similar to the original one at each forward pass.
We test our model on a large cohort of brain images from Alzheimer's disease patients.
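The summary above describes estimating prediction variability by sampling different networks, similar to the original one, at each forward pass. A common way to realise this is to keep dropout active at test time; the sketch below is a minimal, hedged illustration under that assumption, with a toy CNN (SmallCNN, mc_predict and the input shape are all hypothetical, not the paper's architecture).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch: dropout stays active at test time so each forward pass
# samples a slightly different network; the spread of the sampled class
# probabilities is reported as the prediction's variability.

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2, p=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(p),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(p),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

@torch.no_grad()
def mc_predict(model, x, n_samples=20):
    """Return mean class probabilities and their std over sampled networks."""
    model.train()  # keeps dropout stochastic; in practice only dropout layers should stay active
    samples = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

if __name__ == "__main__":
    model = SmallCNN()
    x = torch.randn(4, 1, 32, 32)          # hypothetical 32x32 grayscale slices
    mean_probs, std_probs = mc_predict(model, x)
    print(mean_probs, std_probs)
```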
arXiv Detail & Related papers (2022-09-27T01:07:19Z)
- Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty [58.730520380312676]
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models.
This represents a serious issue for safety-critical applications, e.g. in medical diagnosis.
arXiv Detail & Related papers (2021-07-09T08:14:45Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
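The mechanism described above, raising the entropy of overconfident predictions towards the label prior, can be sketched generically as follows. This is not the paper's exact procedure: how the overconfident regions are located is replaced here by a crude stand-in (noise-perturbed inputs), and the names label_prior, prior_matching_penalty and lam are hypothetical.

```python
import torch
import torch.nn.functional as F

# Generic sketch of an entropy-raising regulariser: at inputs where the model
# should not be confident (here simply noise-perturbed copies of the batch, a
# stand-in for the paper's searched regions), pull the predictive distribution
# towards the empirical prior over labels.

def label_prior(y, num_classes):
    """Empirical class frequencies, used as the prior distribution over labels."""
    return torch.bincount(y, minlength=num_classes).float() / y.numel()

def prior_matching_penalty(model, x, prior, noise_scale=1.0):
    """KL( prior || p(y | x_perturbed) ), averaged over the batch."""
    x_perturbed = x + noise_scale * torch.randn_like(x)   # assumed stand-in for overconfident regions
    log_p = F.log_softmax(model(x_perturbed), dim=-1)
    prior_entropy = -(prior * prior.clamp_min(1e-8).log()).sum()
    cross_entropy = -(prior * log_p).sum(-1).mean()
    return cross_entropy - prior_entropy

def training_loss(model, x, y, num_classes, lam=0.5):
    """Standard cross-entropy plus the weighted prior-matching penalty."""
    prior = label_prior(y, num_classes)
    nll = F.cross_entropy(model(x), y)
    return nll + lam * prior_matching_penalty(model, x, prior)

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 5))
    x, y = torch.randn(8, 16), torch.randint(0, 5, (8,))
    print(training_loss(model, x, y, num_classes=5).item())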
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning.
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
- Improved Trainable Calibration Method for Neural Networks on Medical Imaging Classification [17.941506832422192]
Empirically, neural networks are often miscalibrated and overconfident in their predictions.
We propose a novel calibration approach that maintains the overall classification accuracy while significantly improving model calibration.
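Calibration improvements of this kind are commonly quantified with the expected calibration error (ECE). The sketch below shows a standard binned ECE computation; the binning scheme and bin count are common defaults, not values taken from the paper.

```python
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """Standard binned ECE: weighted average of |accuracy - confidence| per bin.

    probs:  (N, C) predicted class probabilities
    labels: (N,) integer ground-truth classes
    """
    confidences, predictions = probs.max(dim=-1)
    accuracies = predictions.eq(labels).float()
    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            weight = in_bin.float().mean()
            ece += weight * (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
    return ece

if __name__ == "__main__":
    torch.manual_seed(0)
    probs = torch.softmax(torch.randn(1000, 7), dim=-1)   # hypothetical predictions
    labels = torch.randint(0, 7, (1000,))
    print(expected_calibration_error(probs, labels).item())
```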
arXiv Detail & Related papers (2020-09-09T01:25:53Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records [30.65770563934045]
We merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
We show that our method is less susceptible to making overconfident predictions, especially for the minority class in imbalanced datasets.
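The deep-kernel idea referenced above can be sketched compactly: a neural network maps inputs to a latent space, and a Gaussian-process kernel is applied there. The code below is only a stand-in for the paper's full model; it uses exact GP regression with an RBF kernel, and FeatureNet, the layer sizes, and the noise level are all assumptions.

```python
import torch
import torch.nn as nn

# Compact deep-kernel sketch: a small network maps inputs to a latent space,
# an RBF kernel is applied there, and exact GP regression yields a predictive
# mean and variance.

class FeatureNet(nn.Module):
    def __init__(self, in_dim, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))

    def forward(self, x):
        return self.net(x)

def rbf_kernel(a, b, lengthscale=1.0):
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(feat, x_train, y_train, x_test, noise=1e-2):
    """Exact GP regression carried out in the learned feature space."""
    z_train, z_test = feat(x_train), feat(x_test)
    k_tt = rbf_kernel(z_train, z_train) + noise * torch.eye(len(x_train))
    k_st = rbf_kernel(z_test, z_train)
    k_ss = rbf_kernel(z_test, z_test)
    alpha = torch.linalg.solve(k_tt, y_train)
    mean = k_st @ alpha
    cov = k_ss - k_st @ torch.linalg.solve(k_tt, k_st.T)
    return mean, cov.diagonal().clamp_min(0.0)

if __name__ == "__main__":
    torch.manual_seed(0)
    feat = FeatureNet(in_dim=10)
    x_train, y_train = torch.randn(50, 10), torch.randn(50)
    x_test = torch.randn(5, 10)
    mean, var = gp_predict(feat, x_train, y_train, x_test)
    print(mean, var)
```

In practice the feature network and kernel hyperparameters would be trained jointly, for example by maximising the GP marginal likelihood.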
arXiv Detail & Related papers (2020-03-23T10:36:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.