Iterative VAE as a predictive brain model for out-of-distribution generalization
- URL: http://arxiv.org/abs/2012.00557v1
- Date: Tue, 1 Dec 2020 15:02:38 GMT
- Title: Iterative VAE as a predictive brain model for out-of-distribution generalization
- Authors: Victor Boutin, Aimen Zerroug, Minju Jung, Thomas Serre
- Abstract summary: We show that iVAEs generalize to distributional shifts significantly better than both PCNs and VAEs.
We propose a novel measure of recognizability for individual samples which can be tested against human psychophysical data.
- Score: 7.006301658267125
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Our ability to generalize beyond training data to novel, out-of-distribution,
image degradations is a hallmark of primate vision. The predictive brain,
exemplified by predictive coding networks (PCNs), has become a prominent
neuroscience theory of neural computation. Motivated by the recent successes of
variational autoencoders (VAEs) in machine learning, we rigorously derive a
correspondence between PCNs and VAEs. This motivates us to consider iterative
extensions of VAEs (iVAEs) as plausible variational extensions of the PCNs. We
further demonstrate that iVAEs generalize to distributional shifts
significantly better than both PCNs and VAEs. In addition, we propose a novel
measure of recognizability for individual samples which can be tested against
human psychophysical data. Overall, we hope this work will spur interest in
iVAEs as a promising new direction for modeling in neuroscience.
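Although the abstract gives no code, the central mechanism of an iVAE can be illustrated with iterative (non-amortized or semi-amortized) variational inference: instead of a single encoder pass, the approximate posterior is refined by gradient steps on the ELBO for each input. Below is a minimal sketch assuming a Gaussian posterior and a Bernoulli decoder; the module names, sizes, and step counts are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, z_dim=16, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, z):
        return self.net(z)  # Bernoulli logits over pixels

def elbo(x, mu, logvar, decoder):
    # Single-sample ELBO: log p(x|z) - KL(q(z|x) || N(0, I)).
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)  # reparameterization trick
    log_px = -nn.functional.binary_cross_entropy_with_logits(
        decoder(z), x, reduction="none").sum(-1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)
    return log_px - kl

def iterative_inference(x, decoder, z_dim=16, n_steps=20, lr=0.1):
    # Refine the posterior parameters per input by gradient ascent on
    # the ELBO, instead of relying on a single amortized encoder pass.
    mu = torch.zeros(x.shape[0], z_dim, requires_grad=True)
    logvar = torch.zeros(x.shape[0], z_dim, requires_grad=True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        (-elbo(x, mu, logvar, decoder).mean()).backward()
        opt.step()
    return mu.detach(), logvar.detach()
```

In practice one would warm-start mu and logvar from an amortized encoder and compare the refined posterior against the one-shot estimate under distributional shift.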
Related papers
- A prescriptive theory for brain-like inference [0.0]
We show that maximizing the Evidence Lower Bound (ELBO) leads to a spiking neural network that performs Bayesian posterior inference.
The resulting model, the iterative Poisson VAE, has a closer connection to biological neurons than previous brain-inspired predictive models.
These findings suggest that optimizing ELBO, combined with Poisson assumptions, provides a solid foundation for developing prescriptive theories in NeuroAI.
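For reference, the objective these models maximize is the standard evidence lower bound; this is the generic form, not the paper's specific Poisson parameterization:

```latex
\log p_\theta(x) \;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}}
\;-\;
\underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)}_{\text{regularization}}
\;=\; \mathrm{ELBO}(\theta, \phi; x)
```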
arXiv Detail & Related papers (2024-10-25T06:00:18Z)
- Predictive Coding Networks and Inference Learning: Tutorial and Survey [0.7510165488300368]
Predictive coding networks (PCNs) are based on the neuroscientific framework of predictive coding.
Unlike traditional neural networks trained with backpropagation (BP), PCNs utilize inference learning (IL), a more biologically plausible algorithm.
As inherently probabilistic (graphical) latent variable models, PCNs provide a versatile framework for both supervised learning and unsupervised (generative) modeling.
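To make the contrast with backpropagation concrete, here is a minimal sketch of inference learning in a two-layer linear predictive coding network; the squared-error energy, zero-mean prior, and learning rates are illustrative assumptions, not the survey's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, z_dim = 10, 4
W = rng.normal(scale=0.1, size=(x_dim, z_dim))  # top-down prediction weights

def infer_and_learn(x, W, n_steps=50, lr_z=0.1, lr_w=0.01):
    z = np.zeros(z_dim)                    # latent state for this input
    for _ in range(n_steps):               # inference phase: relax the state
        eps_x = x - W @ z                  # bottom-layer prediction error
        eps_z = z                          # error against a zero-mean prior
        z += lr_z * (W.T @ eps_x - eps_z)  # gradient descent on the energy
    W += lr_w * np.outer(x - W @ z, z)     # local, Hebbian-like weight update
    return z, W
```

Note that both updates are local: each uses only the prediction error at the adjacent layer, which is the core of the biological-plausibility argument for IL.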
arXiv Detail & Related papers (2024-07-04T18:39:20Z)
- Poisson Variational Autoencoder [0.0]
Variational autoencoders (VAEs) employ Bayesian inference to interpret sensory inputs.
Here, we develop a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts.
Our work provides an interpretable computational framework to study brain-like sensory processing.
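A minimal sketch of the spike-count encoding idea, assuming Poisson sampling from encoder-produced rates; this illustrates the principle only, not the paper's architecture (training through discrete samples would additionally need e.g. score-function or reparameterization-style gradient estimators).

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.Softplus())  # nonnegative rates
decoder = nn.Linear(64, 784)

x = torch.rand(8, 784)
rates = encoder(x)                                    # Poisson rate per latent
spikes = torch.distributions.Poisson(rates).sample()  # discrete spike counts
recon = decoder(spikes)                               # reconstruct from counts
```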
arXiv Detail & Related papers (2024-05-23T12:02:54Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets and found that it both accurately predicted simulated neuronal circuit activity and intrinsically inferred the underlying neural circuit connectivity, including direction.
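A minimal sketch of that reframing, assuming spike counts are binned and discretized into integer tokens; the GRU here stands in for the actual multimodal transformer.

```python
import torch
import torch.nn as nn

vocab = 32                               # discretized spike-count tokens
emb = nn.Embedding(vocab, 64)
rnn = nn.GRU(64, 64, batch_first=True)
head = nn.Linear(64, vocab)

tokens = torch.randint(vocab, (4, 100))  # (batch, time) spike-token ids
h, _ = rnn(emb(tokens[:, :-1]))          # hidden state predicts the next token
loss = nn.functional.cross_entropy(
    head(h).reshape(-1, vocab), tokens[:, 1:].reshape(-1))
```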
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe, however, that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
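One way to operationalize this is sketched below, under the assumption that predictions revert toward the constant prediction minimizing average training loss (for cross-entropy, the marginal label distribution); the threshold and function names are hypothetical illustrations, not the paper's API.

```python
import torch

def constant_solution_cross_entropy(train_labels, n_classes):
    # The constant prediction minimizing average cross-entropy is the
    # marginal label distribution of the training set.
    counts = torch.bincount(train_labels, minlength=n_classes).float()
    return counts / len(train_labels)

def flag_ood(probs, constant, tau=0.1):
    # Act cautiously / abstain when softmax outputs revert toward the
    # constant solution, a signature of increasingly OOD inputs.
    return (probs - constant).abs().sum(-1) < tau
```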
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We study GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture of GCNs, and we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
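A minimal sketch of a coVariance filter, assuming the sample covariance matrix plays the role of the graph shift operator in a polynomial graph filter; the filter taps and sizes are illustrative.

```python
import numpy as np

def covariance_filter(X, x, h):
    # X: (n_samples, n_features) data for estimating C; x: (n_features,)
    # input; h: filter taps, so y = h[0]*x + h[1]*C@x + h[2]*C@C@x + ...
    C = np.cov(X, rowvar=False)
    y, Ckx = np.zeros_like(x), x.copy()
    for hk in h:
        y = y + hk * Ckx
        Ckx = C @ Ckx
    return y
```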
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
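A minimal sketch of the physics-informed ingredient, assuming SIR dynamics as the mechanistic model; EINNs are considerably more elaborate, and this only shows the generic residual-loss idea with illustrative rates.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 3), nn.Softplus())  # t -> (S, I, R) >= 0
beta, gamma = 0.3, 0.1  # assumed transmission and recovery rates

def sir_residual(t):
    # Penalize deviation of the network's outputs from SIR dynamics:
    # dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    t = t.requires_grad_(True)
    S, I, R = net(t).unbind(-1)
    dS, dI, dR = [
        torch.autograd.grad(u.sum(), t, create_graph=True)[0].squeeze(-1)
        for u in (S, I, R)]
    return ((dS + beta * S * I) ** 2
            + (dI - beta * S * I + gamma * I) ** 2
            + (dR - gamma * I) ** 2).mean()

t = torch.rand(64, 1)
loss = sir_residual(t)  # combine with a data-fit term on observed cases
```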
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoder.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
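A minimal sketch of the label-conditioned prior that underlies identifiability in pi-VAE-style models: the latent prior p(z|u) depends on a task variable u (e.g. a behavioral label). The network, latent size, and task-variable shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

prior_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2 * 8))

def conditional_prior(u):
    # Map a 1-D task variable u to the parameters of a Gaussian prior
    # p(z | u) over an 8-D latent.
    mu, logvar = prior_net(u).chunk(2, dim=-1)
    return torch.distributions.Normal(mu, torch.exp(0.5 * logvar))

prior = conditional_prior(torch.rand(16, 1))  # one prior per trial label
```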
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
- Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning baselines.
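A minimal sketch of the generic deep-kernel idea (an RBF kernel evaluated on learned features); this illustrates the basic construction only, not the paper's probabilistically programmed kernels, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))

def deep_rbf_kernel(x1, x2, lengthscale=1.0):
    # RBF kernel evaluated on learned features rather than raw inputs.
    f1, f2 = feat(x1), feat(x2)
    return torch.exp(-0.5 * torch.cdist(f1, f2).pow(2) / lengthscale ** 2)

K = deep_rbf_kernel(torch.rand(5, 10), torch.rand(7, 10))  # (5, 7) Gram block
```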
arXiv Detail & Related papers (2020-09-16T15:16:03Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.