Adaptive Synaptic Failure Enables Sampling from Posterior Predictive
Distributions in the Brain
- URL: http://arxiv.org/abs/2210.01691v1
- Date: Tue, 4 Oct 2022 15:41:44 GMT
- Title: Adaptive Synaptic Failure Enables Sampling from Posterior Predictive
Distributions in the Brain
- Authors: Kevin McKee, Ian Crandell, Rishidev Chaudhuri, Randall O'Reilly
- Abstract summary: Many have speculated that synaptic failure constitutes a mechanism of variational, i.e., approximate, Bayesian inference in the brain.
We demonstrate that by adapting transmission probabilities to learned network weights, synaptic failure can sample not only over model uncertainty, but complete posterior predictive distributions as well.
- Score: 3.57214198937538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian interpretations of neural processing require that biological
mechanisms represent and operate upon probability distributions in accordance
with Bayes' theorem. Many have speculated that synaptic failure constitutes a
mechanism of variational, i.e., approximate, Bayesian inference in the brain.
Whereas models have previously used synaptic failure to sample over uncertainty
in model parameters, we demonstrate that by adapting transmission probabilities
to learned network weights, synaptic failure can sample not only over model
uncertainty, but complete posterior predictive distributions as well. Our
results potentially explain the brain's ability to perform probabilistic
searches and to approximate complex integrals. These operations are involved in
numerous calculations, including likelihood evaluation and state value
estimation for complex planning.
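The abstract's core idea — adapting per-synapse transmission probabilities to the learned weights so that stochastic forward passes draw samples from a predictive distribution — can be sketched numerically. This is a minimal illustration, not the paper's actual model: the sigmoid adaptation rule, network sizes, and random weights below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network; the weights stand in for learned synaptic strengths.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 1))

def transmission_prob(W, scale=1.0):
    # Hypothetical adaptation rule: stronger weights transmit more reliably,
    # squashed into (0, 1). The paper's actual rule may differ.
    return 1.0 / (1.0 + np.exp(-scale * np.abs(W)))

def stochastic_forward(x, n_samples=1000):
    # Each pass, every synapse independently fails with probability 1 - p.
    samples = []
    for _ in range(n_samples):
        M1 = rng.random(W1.shape) < transmission_prob(W1)
        M2 = rng.random(W2.shape) < transmission_prob(W2)
        h = np.tanh(x @ (W1 * M1))
        samples.append((h @ (W2 * M2)).item())
    return np.array(samples)

x = rng.normal(size=(8,))
preds = stochastic_forward(x)
# The spread of the samples approximates predictive uncertainty.
print(preds.mean(), preds.std())
```

Repeated stochastic passes through such a network produce a histogram of outputs rather than a point estimate, which is what lets the mechanism represent a full distribution.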
Related papers
- A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
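The summary above describes relaxing the (non-differentiable) indicator inside a coverage-based calibration error so it can enter a training objective. A minimal sketch, assuming a sigmoid relaxation with temperature `tau` — the helper name and the exact relaxation are illustrative, not the paper's formulation:

```python
import numpy as np

def relaxed_coverage_error(samples, y, alphas, tau=0.05):
    # samples: (S, N) posterior draws per datum; y: (N,) observations.
    # The sharp indicator 1[y <= quantile_alpha] is replaced by a sigmoid,
    # making the coverage gap smooth (here it is only evaluated, not trained).
    errs = []
    for a in alphas:
        q = np.quantile(samples, a, axis=0)                 # per-datum alpha-quantile
        soft_cover = 1.0 / (1.0 + np.exp(-(q - y) / tau))   # soft 1[y <= q]
        errs.append((soft_cover.mean() - a) ** 2)           # gap to nominal level
    return float(np.mean(errs))
```

For a well-calibrated posterior the empirical coverage at each level `alpha` matches `alpha` and the penalty is near zero; a systematically shifted posterior inflates it.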
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Deep Variational Lesion-Deficit Mapping [0.3914676152740142]
We introduce a comprehensive framework for lesion-deficit model comparison.
We show that our model outperforms established methods by a substantial margin across all simulation scenarios.
Our analysis justifies the widespread adoption of this approach.
arXiv Detail & Related papers (2023-05-27T13:49:35Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- Bayesian Neural Networks for Reversible Steganography [0.7614628596146599]
We propose to consider uncertainty in predictive models based upon a theoretical framework of Bayesian deep learning.
We approximate the posterior predictive distribution through Monte Carlo sampling with reversible forward passes.
We show that predictive uncertainty can be disentangled into aleatoric uncertainties and these quantities can be learnt in an unsupervised manner.
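Disentangling predictive uncertainty via Monte Carlo sampling, as summarized above, typically rests on the law of total variance: averaging the per-pass noise variance gives the aleatoric part, while the spread of per-pass means gives the epistemic part. A toy sketch under assumptions — the means and variances below are synthetic stand-ins for what a real stochastic model would emit on each forward pass:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: each of 1000 stochastic forward passes is assumed to
# return a predictive mean and a predictive noise variance for one input.
mus = rng.normal(loc=0.5, scale=0.2, size=1000)   # pass-to-pass predictive means
sig2 = np.full(1000, 0.09)                        # per-pass noise variances

aleatoric = sig2.mean()          # expected data noise (irreducible)
epistemic = mus.var()            # disagreement between passes (model uncertainty)
total = aleatoric + epistemic    # law of total variance
```

Under this decomposition, collecting more data shrinks `epistemic` (the passes agree more) but leaves `aleatoric` untouched, which is why the two quantities can be reported separately.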
arXiv Detail & Related papers (2022-01-07T14:56:33Z)
- Locally Learned Synaptic Dropout for Complete Bayesian Inference [5.926384731231605]
It has not been shown previously how random failures might allow networks to sample from observed distributions, also known as aleatoric or residual uncertainty.
We demonstrate that under a population-code based interpretation of neural activity, both types of distribution can be represented and sampled with synaptic failure alone.
arXiv Detail & Related papers (2021-11-18T16:23:00Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
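The entropy-raising idea in the summary above can be illustrated with a post-hoc simplification: detect predictions whose confidence exceeds a cap and blend them with the label prior, which raises their entropy. The helper name, cap, and mixing weight are hypothetical; the paper applies this idea inside the training objective rather than after the fact.

```python
import numpy as np

def temper_toward_prior(probs, prior, confidence_cap=0.95, lam=0.5):
    # probs: (N, K) predicted class probabilities; prior: (K,) label frequencies.
    # Where the top predicted probability exceeds the cap, blend the prediction
    # with the prior; the convex combination stays a valid distribution and
    # has higher entropy than the overconfident original.
    probs = np.asarray(probs, dtype=float)
    overconfident = probs.max(axis=1) > confidence_cap
    out = probs.copy()
    out[overconfident] = (1 - lam) * probs[overconfident] + lam * prior
    return out
```

Predictions below the cap pass through unchanged, so only the unjustifiably confident regions of feature space are affected.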
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Probabilistic solution of chaotic dynamical system inverse problems using Bayesian Artificial Neural Networks [0.0]
Inverse problems for chaotic systems are numerically challenging.
Small perturbations in model parameters can cause very large changes in estimated forward trajectories.
Bayesian Artificial Neural Networks can be used to simultaneously fit a model and estimate model parameter uncertainty.
arXiv Detail & Related papers (2020-05-26T20:35:02Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.