Bayesian Neural Networks
- URL: http://arxiv.org/abs/2006.01490v2
- Date: Fri, 6 Nov 2020 07:53:50 GMT
- Title: Bayesian Neural Networks
- Authors: Tom Charnock, Laurence Perreault-Levasseur, François Lanusse
- Abstract summary: We show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors.
We will also describe how both of these methods have substantial pitfalls when put into practice.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent times, neural networks have become a powerful tool for the analysis
of complex and abstract data models. However, their introduction intrinsically
increases our uncertainty about which features of the analysis are
model-related and which are due to the neural network. This means that
neural network predictions carry biases that cannot be trivially attributed
either to the network itself or to the true process by which the data are
created and observed. In an attempt to address such issues we discuss Bayesian
neural networks: neural networks where the uncertainty due to the network can
be characterised. In particular, we present the Bayesian statistical framework
which allows us to categorise uncertainty in terms of the ingrained randomness
of observing certain data and the uncertainty from our lack of knowledge about
how data can be created and observed. In presenting such techniques we show how
errors in prediction by neural networks can be obtained in principle, and
provide the two favoured methods for characterising these errors. We will also
describe how both of these methods have substantial pitfalls when put into
practice, highlighting the need for other statistical techniques to truly be
able to do inference when using neural networks.
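To make the aleatoric/epistemic split described above concrete, here is a minimal sketch, assuming Monte-Carlo dropout as the approximate Bayesian treatment and a Gaussian noise model; the architecture, names, and hyperparameters are illustrative and are not taken from the paper. Epistemic uncertainty is read off the spread of predictions across stochastic forward passes, aleatoric uncertainty from the network's own predicted noise variance.

```python
# Minimal sketch (not code from the paper): Monte-Carlo dropout as one common
# approximation to a Bayesian neural network, with a Gaussian noise model so the
# network predicts both a mean and a per-input noise variance.
import torch
import torch.nn as nn


class MCDropoutRegressor(nn.Module):
    def __init__(self, in_dim=1, hidden=64, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)     # predictive mean
        self.logvar_head = nn.Linear(hidden, 1)   # log of the aleatoric noise variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)


def predict_with_uncertainty(model, x, n_samples=100):
    """Sample the network with dropout left on and split the predictive uncertainty.

    Epistemic: variance of the sampled means (what the network does not know).
    Aleatoric: average of the predicted noise variances (randomness in the data).
    """
    model.train()  # keep dropout active so every pass is a different network draw
    means, noise_vars = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, logvar = model(x)
            means.append(mu)
            noise_vars.append(logvar.exp())
    means = torch.stack(means)          # (n_samples, batch, 1)
    noise_vars = torch.stack(noise_vars)
    epistemic = means.var(dim=0)
    aleatoric = noise_vars.mean(dim=0)
    return means.mean(dim=0), epistemic, aleatoric


if __name__ == "__main__":
    model = MCDropoutRegressor()
    x = torch.linspace(-3.0, 3.0, 50).unsqueeze(1)
    mean, epistemic, aleatoric = predict_with_uncertainty(model, x)
    print(mean.shape, epistemic.shape, aleatoric.shape)
```

The same bookkeeping applies to other approximate posteriors (variational inference, ensembles, MCMC): the spread over network draws captures what the network does not know, while the noise head captures the randomness inherent in observing the data.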
Related papers
- Uncertainty Quantification in Working Memory via Moment Neural Networks [8.064442892805843]
Humans possess a finely tuned sense of uncertainty that helps anticipate potential errors.
This study applies moment neural networks to explore the neural mechanism of uncertainty quantification in working memory.
arXiv Detail & Related papers (2024-11-21T15:05:04Z)
- Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
arXiv Detail & Related papers (2024-09-24T19:47:03Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Neural Bayesian Network Understudy [13.28673601999793]
We show that a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian Network.
We propose two training strategies that allow encoding the independence relations inferred from a given causal structure into the neural network.
arXiv Detail & Related papers (2022-11-15T15:56:51Z)
- An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty [0.19573380763700712]
Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty.
This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated.
arXiv Detail & Related papers (2022-10-18T21:15:33Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
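The calibration recipe in the last entry above can be illustrated with a small, hedged sketch: it is not the authors' algorithm, and the confidence threshold and mixing weight are assumed values, but it shows how blending an overconfident softmax output with the label prior raises its entropy towards that of the prior.

```python
# Illustrative sketch only: blend overconfident predictions with the label prior,
# raising their entropy towards the prior's. Threshold and mixing weight are
# assumed values, not parameters from the paper.
import numpy as np


def entropy(p, axis=-1):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=axis)


def temper_towards_prior(probs, prior, confidence_threshold=0.9, mix=0.5):
    """probs: (n, k) softmax outputs; prior: (k,) empirical label frequencies.

    Rows whose top probability exceeds the threshold are treated as overconfident
    and blended with the prior, flattening them towards the prior distribution.
    """
    probs = np.asarray(probs, dtype=float)
    overconfident = probs.max(axis=1) > confidence_threshold
    adjusted = probs.copy()
    adjusted[overconfident] = (1.0 - mix) * probs[overconfident] + mix * prior
    return adjusted


if __name__ == "__main__":
    prior = np.full(4, 0.25)                       # assumed uniform label prior
    preds = np.array([[0.97, 0.01, 0.01, 0.01],    # overconfident: gets flattened
                      [0.40, 0.30, 0.20, 0.10]])   # already uncertain: left alone
    adjusted = temper_towards_prior(preds, prior)
    print(entropy(preds))     # per-row entropy before
    print(entropy(adjusted))  # first row's entropy has increased
```

As the entry notes, the hard part in practice is deciding where the model is unjustifiably overconfident; the paper's method for finding such regions of feature space is not reproduced here.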
This list is automatically generated from the titles and abstracts of the papers in this site.