Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies
- URL: http://arxiv.org/abs/2406.07715v1
- Date: Tue, 11 Jun 2024 20:51:50 GMT
- Title: Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies
- Authors: Max Dabagia, Daniel Mitropolsky, Christos H. Papadimitriou, Santosh S. Vempala
- Abstract summary: We study the emergence of statistical learning in NEMO, a computational model of the brain.
We show that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices.
- Score: 9.757971977909683
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: How intelligence arises from the brain is a central problem in science. A crucial aspect of intelligence is dealing with uncertainty -- developing good predictions about one's environment, and converting these predictions into decisions. The brain itself seems to be noisy at many levels, from chemical processes which drive development and neuronal activity to trial variability of responses to stimuli. One hypothesis is that the noise inherent to the brain's mechanisms is used to sample from a model of the world and generate predictions. To test this hypothesis, we study the emergence of statistical learning in NEMO, a biologically plausible computational model of the brain based on stylized neurons and synapses, plasticity, and inhibition, and giving rise to assemblies -- a group of neurons whose coordinated firing is tantamount to recalling a location, concept, memory, or other primitive item of cognition. We show in theory and simulation that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices between assemblies. This allows NEMO to create internal models such as Markov chains entirely from the presentation of sequences of stimuli. Our results provide a foundation for biologically plausible probabilistic computation, and add theoretical support to the hypothesis that noise is a useful component of the brain's mechanism for cognition.
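The abstract's core mechanism, connections between assemblies recording transition statistics while ambient noise turns those statistics into probabilistic choices, can be illustrated with a toy sketch. This is not the paper's NEMO model (which involves stylized neurons, plasticity, and inhibition); the `AssemblyChain` class and its methods are invented here, with assemblies reduced to abstract symbols and synaptic weights to Hebbian counters:

```python
import random
from collections import defaultdict

class AssemblyChain:
    """Toy sketch of statistical learning between assemblies.

    Assemblies are abstract symbols; the 'synaptic weight' between two
    assemblies is a Hebbian counter recording how often the first one's
    firing immediately preceded the second's.
    """

    def __init__(self, seed=0):
        self.weights = defaultdict(float)  # (prev, cur) -> connection strength
        self.rng = random.Random(seed)     # stand-in for ambient noise

    def observe(self, sequence):
        # Plasticity: strengthen the connection for every observed transition,
        # so weights come to mirror the empirical transition counts.
        for prev, cur in zip(sequence, sequence[1:]):
            self.weights[(prev, cur)] += 1.0

    def sample_next(self, prev, candidates):
        # Noise-driven choice: the next assembly is drawn with probability
        # proportional to connection strength, recovering a Markov chain.
        w = [self.weights[(prev, c)] + 1e-9 for c in candidates]
        return self.rng.choices(candidates, weights=w, k=1)[0]

chain = AssemblyChain(seed=1)
# Repeated presentation of a stimulus sequence: A->B occurs twice as
# often as A->C, so the learned weights should reflect a 2:1 ratio.
for _ in range(100):
    chain.observe(["A", "B", "A", "C", "A", "B"])
samples = [chain.sample_next("A", ["B", "C"]) for _ in range(1000)]
```

After training, sampling from state "A" yields "B" roughly twice as often as "C", i.e. the internal model has absorbed the Markov statistics of the stimulus stream purely from repeated presentation.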
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z) - A theory of neural emulators [0.0]
A central goal in neuroscience is to provide explanations for how animal nervous systems can generate actions and cognitive states such as consciousness.
We propose emulator theory (ET) and neural emulators as circuit- and scale-independent predictive models of biological brain activity.
arXiv Detail & Related papers (2024-05-22T07:12:03Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Computation with Sequences in a Model of the Brain [11.15191997898358]
How cognition arises from neural activity is a central open question in neuroscience.
We show that time can be captured naturally as precedence through synaptic weights and plasticity.
We show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences.
arXiv Detail & Related papers (2023-06-06T15:58:09Z) - Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Predictive Coding and Stochastic Resonance: Towards a Unified Theory of Auditory (Phantom) Perception [6.416574036611064]
To gain a mechanistic understanding of brain function, hypothesis-driven experiments should be accompanied by biologically plausible computational models.
With a special focus on tinnitus, we review recent work at the intersection of artificial intelligence, psychology, and neuroscience.
We conclude that two fundamental processing principles, both ubiquitous in the brain, best fit a vast number of experimental results.
arXiv Detail & Related papers (2022-04-07T10:47:58Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Learning to infer in recurrent biological networks [4.56877715768796]
We argue that the cortex may learn with an adversarial algorithm.
We illustrate the idea on recurrent neural networks trained to model image and video datasets.
arXiv Detail & Related papers (2020-06-18T19:04:47Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it provides the language-ready brain with the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z) - Neuronal Sequence Models for Bayesian Online Inference [0.0]
Sequential neuronal activity underlies a wide range of processes in the brain.
Neuroscientific evidence for neuronal sequences has been reported in domains as diverse as perception, motor control, speech, spatial navigation and memory.
We review key findings about neuronal sequences and relate these to the concept of online inference on sequences as a model of sensory-motor processing and recognition.
arXiv Detail & Related papers (2020-04-02T10:52:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.