Locally Learned Synaptic Dropout for Complete Bayesian Inference
- URL: http://arxiv.org/abs/2111.09780v1
- Date: Thu, 18 Nov 2021 16:23:00 GMT
- Title: Locally Learned Synaptic Dropout for Complete Bayesian Inference
- Authors: Kevin L. McKee, Ian C. Crandell, Rishidev Chaudhuri, Randall C.
O'Reilly
- Abstract summary: It has not been shown previously how random failures might allow networks to sample from observed distributions, also known as aleatoric or residual uncertainty.
We demonstrate that under a population-code based interpretation of neural activity, both types of distribution can be represented and sampled with synaptic failure alone.
- Score: 5.926384731231605
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Bayesian brain hypothesis postulates that the brain accurately operates
on statistical distributions according to Bayes' theorem. The random failure of
presynaptic vesicles to release neurotransmitters may allow the brain to sample
from posterior distributions of network parameters, interpreted as epistemic
uncertainty. It has not been shown previously how random failures might allow
networks to sample from observed distributions, also known as aleatoric or
residual uncertainty. Sampling from both distributions enables probabilistic
inference, efficient search, and creative or generative problem solving. We
demonstrate that under a population-code based interpretation of neural
activity, both types of distribution can be represented and sampled with
synaptic failure alone. We first define a biologically constrained neural
network and sampling scheme based on synaptic failure and lateral inhibition.
Within this framework, we derive dropout-based epistemic uncertainty, then
prove an analytic mapping from synaptic efficacy to release probability that
allows networks to sample from arbitrary, learned distributions represented by
a receiving layer. Second, this result yields a local learning rule by which
synapses adapt their release probabilities. Together, these results demonstrate
complete Bayesian inference, related to the variational learning method of
dropout, in a biologically constrained network using only locally learned
synaptic failure rates.
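The abstract does not reproduce the paper's analytic efficacy-to-release-probability mapping, so the sketch below is only an illustrative reading of the sampling scheme it describes: each synapse transmits with a probability derived from its learned weight, and repeated stochastic forward passes yield a sampled distribution over activity in the receiving layer. The sigmoid mapping in release_probabilities, the rectified rate readout, and the layer sizes are placeholder assumptions, not the paper's derivation.

```python
# Illustrative sketch only: per-synapse Bernoulli failures with release
# probabilities derived from learned weights. The sigmoid mapping below is a
# placeholder, NOT the analytic mapping derived in the paper.
import numpy as np

rng = np.random.default_rng(0)

def release_probabilities(W, k=2.0):
    """Hypothetical stand-in for the efficacy-to-release-probability mapping."""
    return 1.0 / (1.0 + np.exp(-k * np.abs(W)))

def stochastic_forward(x, W, P, n_samples=1000):
    """Sample postsynaptic activity under independent synaptic failures."""
    samples = np.empty((n_samples, W.shape[0]))
    for s in range(n_samples):
        mask = rng.random(W.shape) < P                 # synapse transmits with prob. P_ij
        samples[s] = np.maximum(0.0, (W * mask) @ x)   # simple rectified rate readout
    return samples

W = rng.normal(0.0, 1.0, size=(5, 10))   # learned efficacies of a 10 -> 5 projection
P = release_probabilities(W)             # per-synapse transmission probabilities
x = rng.random(10)                       # presynaptic rates (population code)

post = stochastic_forward(x, W, P)
print("mean activity:", post.mean(axis=0))
print("spread across samples:", post.std(axis=0))
```

The spread across stochastic passes plays the role of the sampled (aleatoric) component; a further dropout-style resampling of whole units would correspond to the epistemic component the abstract derives first.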
Related papers
- Counterfactual Realizability [52.85109506684737]
We introduce a formal definition of realizability, the ability to draw samples from a distribution, and then develop a complete algorithm to determine whether an arbitrary counterfactual distribution is realizable.
We illustrate the implications of this new framework for counterfactual data collection using motivating examples from causal fairness and causal reinforcement learning.
arXiv Detail & Related papers (2025-03-14T20:54:27Z)
- Expressive probabilistic sampling in recurrent neural networks [4.3900330990701235]
We show that firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution.
We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling (a minimal Langevin-sampling sketch appears after this list).
arXiv Detail & Related papers (2023-08-22T22:20:39Z)
- Learning Theory of Distribution Regression with Neural Networks [6.961253535504979]
We establish an approximation theory and a learning theory of distribution regression via a fully connected neural network (FNN).
In contrast to the classical regression methods, the input variables of distribution regression are probability measures.
arXiv Detail & Related papers (2023-07-07T09:49:11Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- Adaptive Synaptic Failure Enables Sampling from Posterior Predictive Distributions in the Brain [3.57214198937538]
Many have speculated that synaptic failure constitutes a mechanism of variational, i.e., approximate, Bayesian inference in the brain.
We demonstrate that by adapting transmission probabilities to learned network weights, synaptic failure can sample not only over model uncertainty, but complete posterior predictive distributions as well.
arXiv Detail & Related papers (2022-10-04T15:41:44Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- Predicting Unreliable Predictions by Shattering a Neural Network [145.3823991041987]
Piecewise linear neural networks can be split into subfunctions.
Subfunctions have their own activation pattern, domain, and empirical error.
Empirical error for the full network can be written as an expectation over subfunctions.
arXiv Detail & Related papers (2021-06-15T18:34:41Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observation that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
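For the "Expressive probabilistic sampling in recurrent neural networks" entry above, the snippet below is a minimal sketch of the Langevin sampling step alone: it runs unadjusted Langevin dynamics on a one-dimensional Gaussian whose score is known in closed form. The Gaussian target, step size, and function names are illustrative assumptions standing in for the score that paper's recurrent circuit learns via denoising score matching.

```python
# Minimal Langevin-sampling sketch for the "Expressive probabilistic sampling in
# recurrent neural networks" entry. A closed-form Gaussian score stands in for a
# learned score function; this is not that paper's circuit model.
import numpy as np

rng = np.random.default_rng(1)
MU, SIGMA = 2.0, 0.5  # target distribution N(MU, SIGMA^2), an arbitrary example

def score(x):
    """Gradient of the log-density of the Gaussian target."""
    return -(x - MU) / SIGMA**2

def langevin_sample(n_steps=5000, step=1e-2, x0=0.0):
    """Unadjusted Langevin dynamics: x <- x + (step/2)*score(x) + sqrt(step)*noise."""
    x, trace = x0, np.empty(n_steps)
    for t in range(n_steps):
        x = x + 0.5 * step * score(x) + np.sqrt(step) * rng.standard_normal()
        trace[t] = x
    return trace

samples = langevin_sample()[1000:]  # discard burn-in
print("sample mean (should be near MU):   ", samples.mean())
print("sample std  (should be near SIGMA):", samples.std())
```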
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.