BayesNetCNN: incorporating uncertainty in neural networks for
image-based classification tasks
- URL: http://arxiv.org/abs/2209.13096v1
- Date: Tue, 27 Sep 2022 01:07:19 GMT
- Title: BayesNetCNN: incorporating uncertainty in neural networks for
image-based classification tasks
- Authors: Matteo Ferrante, Tommaso Boccato, Nicola Toschi
- Abstract summary: We propose a method to convert a standard neural network into a Bayesian neural network.
We estimate the variability of predictions by sampling different networks similar to the original one at each forward pass.
We test our model in a large cohort of brain images from Alzheimer's Disease patients.
- Score: 0.29005223064604074
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The willingness to trust predictions formulated by automatic algorithms is
key in a vast number of domains. However, many deep architectures are only
able to formulate predictions without an associated uncertainty. In
this paper, we propose a method to convert a standard neural network into a
Bayesian neural network and estimate the variability of predictions by sampling
different networks similar to the original one at each forward pass. We couple
our method with a tunable rejection-based approach that employs only the
fraction of the dataset that the model is able to classify with an uncertainty
below a user-set threshold. We test our model in a large cohort of brain images
from Alzheimer's Disease patients, where we tackle discrimination of patients
from healthy controls based on morphometric images only. We demonstrate how
combining the estimated uncertainty with a rejection-based approach increases
classification accuracy from 0.86 to 0.95 while retaining 75% of the test set.
In addition, the model can select cases to be recommended for manual evaluation
based on excessive uncertainty. We believe that being able to estimate the
uncertainty of a prediction, along with tools that can modulate the behavior of
the network to a degree of confidence that the user is informed about (and
comfortable with) can represent a crucial step in the direction of user
compliance and easier integration of deep learning tools into everyday tasks
currently performed by human operators.
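
The abstract does not state which mechanism produces the sampled networks, so the sketch below assumes Monte Carlo dropout as one common way to draw networks "similar to the original one" at each forward pass; the helper names (`enable_mc_dropout`, `mc_predict`, `reject_by_uncertainty`), the entropy-based uncertainty score, and the threshold are illustrative choices rather than the authors' implementation.

```python
# Illustrative sketch only: approximate Bayesian sampling via MC dropout,
# followed by rejection of predictions whose uncertainty exceeds a
# user-set threshold (both helpers are hypothetical names).
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic at inference time."""
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()


@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Return mean class probabilities and a per-input predictive entropy."""
    model.eval()
    enable_mc_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                    # (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)       # predictive mean over sampled networks
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


def reject_by_uncertainty(mean_probs, uncertainty, threshold: float):
    """Keep only predictions whose uncertainty is below the user-set threshold."""
    keep = uncertainty < threshold
    return mean_probs[keep].argmax(dim=-1), keep
```

Sweeping the threshold trades coverage for accuracy, which is the mechanism behind the reported increase from 0.86 to 0.95 while retaining 75% of the test set: the rejected remainder can be flagged for manual evaluation.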
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is lack of generalization.
We propose novel methods that estimate the uncertainty of individual predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
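The summary does not specify which statistic of the logit distribution the authors use, so the snippet below is only a hedged illustration with two generic candidates (the margin between the top two logits and a z-score of the winning logit); `logit_confidence` is a hypothetical helper, not the paper's method.

```python
# Generic stand-ins for a confidence score derived from the logit values of
# a single prediction; the paper's actual statistic may differ.
import numpy as np


def logit_confidence(logits: np.ndarray) -> dict:
    """Summary statistics of one prediction's logit vector."""
    top2 = np.sort(logits)[-2:]
    margin = float(top2[1] - top2[0])     # gap between the two best classes
    zscore = float((logits.max() - logits.mean()) / (logits.std() + 1e-12))
    return {"margin": margin, "zscore": zscore}
```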
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
- Bayesian Neural Network Versus Ex-Post Calibration For Prediction Uncertainty [0.2343856409260935]
Probabilistic predictions from neural networks account for predictive uncertainty during classification.
In practice, however, most networks are trained as non-probabilistic models and by default do not capture this inherent uncertainty.
A plausible alternative to the calibration approach is to use Bayesian neural networks, which directly model a predictive distribution.
arXiv Detail & Related papers (2022-09-29T07:22:19Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
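For context, a minimal sketch of the Nadaraya-Watson estimate of the conditional label distribution that such an approach builds on; computing it in a feature/embedding space with an RBF kernel of bandwidth `h` is an assumption of this illustration, not a reproduction of the paper's estimator.

```python
# Nadaraya-Watson estimate of p(y | x):
#   p_hat(y | x) = sum_i K_h(x - x_i) * 1[y_i = y] / sum_i K_h(x - x_i)
import numpy as np


def nadaraya_watson_label_dist(x, X_train, y_train, n_classes, h=1.0):
    sq_dists = ((X_train - x) ** 2).sum(axis=1)    # squared distances to training points
    weights = np.exp(-sq_dists / (2.0 * h ** 2))   # RBF kernel weights
    p_hat = np.array([weights[y_train == c].sum() for c in range(n_classes)])
    return p_hat / (p_hat.sum() + 1e-12)           # normalised label distribution
```

The entropy of the resulting distribution (or the total kernel mass around x) can then serve as an uncertainty score; how the paper aggregates these quantities is not detailed in the summary above.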
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Predictive Uncertainty in Medical Image Analysis with Deep Kernel Learning [14.03923026690186]
We propose an uncertainty-aware deep kernel learning model which permits the estimation of the uncertainty in the prediction.
In most cases, the proposed model shows better performance compared to common architectures.
Our model can also be used to detect challenging and controversial test samples.
arXiv Detail & Related papers (2021-06-01T17:09:47Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
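As a hedged illustration of "conditionally raising the entropy of those predictions towards that of the prior distribution of the labels", the term below penalizes the KL divergence from the label prior to the model's predictive distribution on the selected inputs; whether this matches the paper's exact objective is an assumption of this sketch.

```python
# Generic regularizer pulling predictions on selected (overconfident) inputs
# towards the prior distribution of the labels; not the paper's exact loss.
import torch
import torch.nn.functional as F


def prior_entropy_regularizer(logits: torch.Tensor, label_prior: torch.Tensor):
    """KL(label_prior || softmax(logits)), averaged over the batch."""
    log_probs = F.log_softmax(logits, dim=-1)          # (batch, n_classes)
    return F.kl_div(log_probs, label_prior.expand_as(log_probs),
                    reduction="batchmean")
```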
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
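A minimal sketch of the parametrization difference at stake: a softmax head couples all class scores and always sums to one, whereas a one-vs-all head scores each class with an independent sigmoid, so every class can receive a low probability far from the training data. This is a generic illustration rather than the paper's specific formulation.

```python
import torch


def softmax_probs(logits: torch.Tensor) -> torch.Tensor:
    return torch.softmax(logits, dim=-1)   # coupled scores, always sum to 1


def one_vs_all_probs(logits: torch.Tensor) -> torch.Tensor:
    return torch.sigmoid(logits)           # independent per-class scores
```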
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid-updating scheme, and match the accuracy of softmax models.
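A hedged sketch of the single-forward-pass idea: score an embedding against per-class centroids with an RBF kernel and reject inputs whose best match is weak. The kernel choice, `sigma`, and `min_score` are assumptions of this illustration; the paper's loss function and centroid-update scheme are not reproduced here.

```python
import torch


def centroid_scores(z: torch.Tensor, centroids: torch.Tensor, sigma: float = 0.5):
    """z: (batch, d) embeddings; centroids: (n_classes, d)."""
    sq_dists = torch.cdist(z, centroids) ** 2           # (batch, n_classes)
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))    # kernel similarity


def predict_or_reject(scores: torch.Tensor, min_score: float = 0.5):
    best, pred = scores.max(dim=-1)
    return pred, best < min_score                       # True = reject as out-of-distribution
```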
arXiv Detail & Related papers (2020-03-04T12:27:36Z)