Learning to infer in recurrent biological networks
- URL: http://arxiv.org/abs/2006.10811v2
- Date: Mon, 31 May 2021 17:33:06 GMT
- Title: Learning to infer in recurrent biological networks
- Authors: Ari S. Benjamin and Konrad P. Kording
- Abstract summary: We argue that the cortex may learn with an adversarial algorithm.
We illustrate the idea on recurrent neural networks trained to model image and video datasets.
- Score: 4.56877715768796
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A popular theory of perceptual processing holds that the brain learns both a
generative model of the world and a paired recognition model using variational
Bayesian inference. Most hypotheses of how the brain might learn these models
assume that neurons in a population are conditionally independent given their
common inputs. This simplification is likely not compatible with the type of
local recurrence observed in the brain. Seeking an alternative that is
compatible with complex inter-dependencies yet consistent with known biology,
we argue here that the cortex may learn with an adversarial algorithm. Many
observable symptoms of this approach would resemble known neural phenomena,
including wake/sleep cycles and oscillations that vary in magnitude with
surprise, and we describe how further predictions could be tested. We
illustrate the idea on recurrent neural networks trained to model image and
video datasets. This framework for learning brings variational inference closer
to neuroscience and yields multiple testable hypotheses.
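The core idea can be illustrated with a toy sketch. The following NumPy example is an assumption-laden illustration, not the paper's algorithm: a logistic discriminator learns to tell "wake" pairs (an observed datum with its inferred latent) from "sleep" pairs (a generated datum with its prior latent). The names (`enc`, `dec`, `wake_pair`, `sleep_pair`) and the squared-feature readout are illustrative choices; in the full adversarial scheme, the discriminator's output would in turn provide the training signal for the recognition and generative models.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dz = 4, 2
W_true = rng.normal(size=(dx, dz))       # ground truth behind the "world"

enc = rng.normal(size=(dz, dx)) * 0.1    # recognition model, z_hat = enc @ x
dec = rng.normal(size=(dx, dz)) * 0.1    # generative model,  x_hat = dec @ z
w_d = np.zeros(dx + dz + 1)              # linear discriminator on features

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-np.clip(a, -30, 30)))

def features(pair):
    return np.append(pair ** 2, 1.0)     # squared units plus a bias term

def wake_pair():
    z = rng.normal(size=dz)
    x = W_true @ z + 0.05 * rng.normal(size=dx)   # an observed datum
    return np.concatenate([x, enc @ x])           # (datum, inferred latent)

def sleep_pair():
    z = rng.normal(size=dz)                       # "dream" from the prior
    return np.concatenate([dec @ z, z])           # (generated datum, latent)

lr = 0.01
for _ in range(3000):
    for pair, label in ((wake_pair(), 1.0), (sleep_pair(), 0.0)):
        p = sigmoid(w_d @ features(pair))
        w_d += lr * (label - p) * features(pair)  # logistic-regression step

# The discriminator now separates the two phases; its output is the kind of
# error signal that would train enc and dec adversarially.
p_wake = np.mean([sigmoid(w_d @ features(wake_pair())) for _ in range(300)])
p_sleep = np.mean([sigmoid(w_d @ features(sleep_pair())) for _ in range(300)])
```

Because untrained `enc` and `dec` produce joint statistics very different from the wake phase, the discriminator separates the two phases easily; closing that gap is exactly what adversarial training of the model pair would do.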
Related papers
- Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies [9.757971977909683]
We study the emergence of statistical learning in NEMO, a computational model of the brain.
We show that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices.
arXiv Detail & Related papers (2024-06-11T20:51:50Z)
- Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
arXiv Detail & Related papers (2024-05-29T17:07:33Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
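The claim that Hebbian and anti-Hebbian terms extract a principal subspace can be sketched with the classic Oja subspace rule; this is a generic illustration under assumed toy data, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 6, 2
# Anisotropic data: the first two coordinates carry most of the variance,
# so the principal subspace is spanned by the first two axes.
scales = np.array([3.0, 2.0, 0.3, 0.2, 0.1, 0.1])
X = rng.normal(size=(30000, d)) * scales

W = rng.normal(size=(k, d)) * 0.1        # feedforward weights to learn
lr = 0.005
for x in X:
    y = W @ x                            # feedforward response
    # Hebbian term (outer(y, x)) grows weights along active directions;
    # anti-Hebbian term (-outer(y, y) @ W) decorrelates and normalizes.
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)

# Orthonormal basis of the learned row space; its energy should
# concentrate in the first k coordinates (the true principal subspace).
basis = np.linalg.svd(W, full_matrices=False)[2]
overlap = float(np.sum(basis[:, :k] ** 2))   # ideal value is k = 2
```

The anti-Hebbian term acts like an implicit lateral interaction between the output units, which is why the rows converge to an orthonormal basis of the principal subspace rather than all collapsing onto the top component.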
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Towards a Foundation Model for Brain Age Prediction using coVariance Neural Networks [102.75954614946258]
Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline.
NeuroVNN is pre-trained as a regression model on a healthy population to predict chronological age.
NeuroVNN adds anatomical interpretability to brain age and has a 'scale-free' characteristic that allows its transfer to datasets curated according to any arbitrary brain atlas.
arXiv Detail & Related papers (2024-02-12T14:46:31Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
- From internal models toward metacognitive AI [0.0]
In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
arXiv Detail & Related papers (2021-09-27T05:00:56Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
Artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
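A minimal sketch of that predictive idea, under assumed linear units (illustrative only, not the paper's model): error units compute the mismatch between an input and its prediction, latent units settle by descending that error, and the weights are updated by a local, Hebbian-like product of error and latent activity.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 5, 2
A = rng.normal(size=(d, k))              # hidden ground-truth causes -> inputs

def sample_x():
    return A @ rng.normal(size=k)        # an observation to be explained

def infer(W, x, steps=50, lr_z=0.05):
    """Settle latent activity by descending the local prediction error."""
    z = np.zeros(k)
    for _ in range(steps):
        err = x - W @ z                  # error units: input minus prediction
        z += lr_z * (W.T @ err)          # latent units follow the error
    return z

def relative_error(W, n=200):
    errs = []
    for _ in range(n):
        x = sample_x()
        e = x - W @ infer(W, x)
        errs.append(np.linalg.norm(e) / np.linalg.norm(x))
    return float(np.mean(errs))

W = rng.normal(size=(d, k)) * 0.1        # generative weights, learned locally
before = relative_error(W)
lr_w = 0.01
for _ in range(4000):
    x = sample_x()
    z = infer(W, x)
    W += lr_w * np.outer(x - W @ z, z)   # local, Hebbian-like error update

after = relative_error(W)
```

After training, the column space of `W` aligns with that of the true mixing matrix, so the settled predictions explain new observations almost perfectly; each weight update used only quantities available at the synapse (pre- and post-synaptic activity), which is the appeal of this family of models.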
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.