An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty
- URL: http://arxiv.org/abs/2210.10780v2
- Date: Wed, 9 Aug 2023 17:48:40 GMT
- Title: An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty
- Authors: Ethan Ancell, Christopher Bennett, Bert Debusschere, Sapan Agarwal, Park Hays, T. Patrick Xiao
- Abstract summary: Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty.
This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated.
- Score: 0.19573380763700712
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have revolutionized the field of machine learning with
increased predictive capability. In addition to improving the predictions of
neural networks, there is a simultaneous demand for reliable uncertainty
quantification on estimates made by machine learning methods such as neural
networks. Bayesian neural networks (BNNs) are an important type of neural
network with built-in capability for quantifying uncertainty. This paper
discusses aleatoric and epistemic uncertainty in BNNs and how they can be
calculated. With an example dataset of images where the goal is to identify the
amplitude of an event in the image, it is shown that epistemic uncertainty
tends to be lower in images which are well-represented in the training dataset
and tends to be higher in images which are not well-represented. An algorithm for
out-of-distribution (OoD) detection with BNN epistemic uncertainty is
introduced along with various experiments demonstrating factors influencing the
OoD detection capability in a BNN. The OoD detection capability with epistemic
uncertainty is shown to be comparable to the OoD detection in the discriminator
network of a generative adversarial network (GAN) with comparable network
architecture.
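To make the paper's central quantity concrete, here is a minimal sketch of the aleatoric/epistemic split and OoD flagging for a regression BNN. It assumes you can draw S posterior samples and obtain a predicted mean and noise variance per draw; the function names and threshold calibration are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: OoD detection from BNN epistemic uncertainty.
# Assumes arrays of per-posterior-draw predictions for one input;
# all names here are hypothetical, not from the paper.
import numpy as np

def predictive_moments(mu_samples, sigma2_samples):
    """Split predictive variance into aleatoric and epistemic parts.

    mu_samples:     (S,) predicted means, one per posterior draw theta_s
    sigma2_samples: (S,) predicted noise variances, one per draw
    Total predictive variance ~= E[sigma^2] + Var[mu].
    """
    aleatoric = sigma2_samples.mean()   # E_theta[sigma^2(x, theta)]
    epistemic = mu_samples.var()        # Var_theta[mu(x, theta)]
    return mu_samples.mean(), aleatoric, epistemic

def is_ood(mu_samples, sigma2_samples, threshold):
    """Flag an input as out-of-distribution when epistemic variance is high."""
    _, _, epistemic = predictive_moments(mu_samples, sigma2_samples)
    return epistemic > threshold
```

One reasonable way to set `threshold` is a high quantile (say the 99th percentile) of epistemic variances computed on held-out in-distribution data, so that the discriminator's false-positive rate on in-distribution inputs stays controlled.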
Related papers
- Uncertainty Quantification in Working Memory via Moment Neural Networks [8.064442892805843]
Humans possess a finely tuned sense of uncertainty that helps anticipate potential errors.
This study applies moment neural networks to explore the neural mechanism of uncertainty quantification in working memory.
arXiv Detail & Related papers (2024-11-21T15:05:04Z) - Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs. A minimal code sketch of this idea follows this entry.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - An Estimator for the Sensitivity to Perturbations of Deep Neural
- An Estimator for the Sensitivity to Perturbations of Deep Neural Networks [0.31498833540989407]
This paper derives an estimator that can predict the sensitivity of a given Deep Neural Network to perturbations in input.
An approximation of the estimator is tested on two Convolutional Neural Networks, AlexNet and VGG-19, using the ImageNet dataset. A generic first-order sketch of the idea follows this entry.
arXiv Detail & Related papers (2023-07-24T10:33:32Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
Conversely, there may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN)
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods. A minimal layer sketch follows this entry.
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Bayesian Convolutional Neural Networks for Limited Data Hyperspectral
- Bayesian Convolutional Neural Networks for Limited Data Hyperspectral Remote Sensing Image Classification [14.464344312441582]
We use a special class of deep neural networks, namely Bayesian neural networks, to classify HSRS images.
Bayesian neural networks provide an inherent tool for measuring uncertainty.
We show that a Bayesian neural network can outperform a similarly-constructed non-Bayesian convolutional neural network (CNN) and an off-the-shelf Random Forest (RF).
arXiv Detail & Related papers (2022-05-19T00:02:16Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets. A sketch of the underlying estimate follows this entry.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Bayesian Neural Networks [0.0]
We show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors.
We will also describe how both of these methods have substantial pitfalls when put into practice.
arXiv Detail & Related papers (2020-06-02T09:43:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.