Computation with Sequences in a Model of the Brain
- URL: http://arxiv.org/abs/2306.03812v2
- Date: Mon, 16 Oct 2023 17:30:17 GMT
- Title: Computation with Sequences in a Model of the Brain
- Authors: Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala
- Abstract summary: How cognition arises from neural activity is a central open question in neuroscience.
We show that time can be captured naturally as precedence through synaptic weights and plasticity.
We show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences.
- Score: 11.15191997898358
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Even as machine learning exceeds human-level performance on many
applications, the generality, robustness, and rapidity of the brain's learning
capabilities remain unmatched. How cognition arises from neural activity is a
central open question in neuroscience, inextricable from the study of
intelligence itself. A simple formal model of neural activity was proposed in
Papadimitriou [2020] and has been subsequently shown, through both mathematical
proofs and simulations, to be capable of implementing certain simple cognitive
operations via the creation and manipulation of assemblies of neurons. However,
many intelligent behaviors rely on the ability to recognize, store, and
manipulate temporal sequences of stimuli (planning, language, navigation, to
list a few). Here we show that, in the same model, time can be captured
naturally as precedence through synaptic weights and plasticity, and, as a
result, a range of computations on sequences of assemblies can be carried out.
In particular, repeated presentation of a sequence of stimuli leads to the
memorization of the sequence through corresponding neural assemblies: upon
future presentation of any stimulus in the sequence, the corresponding assembly
and its subsequent ones will be activated, one after the other, until the end
of the sequence. Finally, we show that any finite state machine can be learned
in a similar way, through the presentation of appropriate patterns of
sequences. Through an extension of this mechanism, the model can be shown to be
capable of universal computation. We support our analysis with a number of
experiments to probe the limits of learning in this model in key ways. Taken
together, these results provide a concrete hypothesis for the basis of the
brain's remarkable abilities to compute and learn, with sequences playing a
vital role.
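The mechanism the abstract describes, temporal precedence recorded in synaptic weights by Hebbian plasticity together with a k-winners-take-all cap, can be illustrated with a small simulation. The sketch below is a simplified toy inspired by the abstract, not the paper's actual NEMO / assembly-calculus implementation; all names and parameter values (N, K, P, BETA, ROUNDS, SEQ_LEN) are illustrative assumptions chosen so that the toy converges quickly.

```python
import numpy as np

# Toy illustration (an assumption-laden sketch, not the paper's model):
# repeatedly present a sequence of stimuli to one brain area with sparse
# random recurrent connectivity, a k-winners-take-all cap, and Hebbian
# plasticity, then check whether firing the first assembly replays the rest.

rng = np.random.default_rng(0)

N, K = 1000, 41          # neurons in the area, assembly size (the "cap")
P, BETA = 0.05, 1.0      # random synapse probability, Hebbian increment
ROUNDS = 5               # repeated presentations of the sequence
SEQ_LEN = 4              # number of stimuli in the sequence

# Sparse random recurrent connectivity inside the area (all weights start at 1).
W = (rng.random((N, N)) < P).astype(float)

# Each stimulus drives a fixed random set of K afferent targets.
stimuli = [rng.choice(N, size=K, replace=False) for _ in range(SEQ_LEN)]

def k_cap(drive, k=K):
    """k-winners-take-all: indices of the k most strongly driven neurons."""
    return np.argsort(drive)[-k:]

assemblies = [None] * SEQ_LEN
for _ in range(ROUNDS):
    prev_winners = np.array([], dtype=int)
    for t, stim in enumerate(stimuli):
        drive = np.zeros(N)
        drive[stim] += 1.0                       # feedforward stimulus drive
        drive += W[prev_winners].sum(axis=0)     # recurrent drive from step t-1
        winners = k_cap(drive)
        # Hebbian plasticity: strengthen synapses from the previous winners
        # onto the neurons that just fired, encoding temporal precedence.
        W[np.ix_(prev_winners, winners)] *= (1.0 + BETA)
        assemblies[t] = winners
        prev_winners = winners

# Replay: fire only the first assembly and let the recurrent weights drive
# the remaining assemblies, one after the other, without any stimulus input.
state = assemblies[0]
for t in range(1, SEQ_LEN):
    state = k_cap(W[state].sum(axis=0))
    overlap = len(np.intersect1d(state, assemblies[t])) / K
    print(f"step {t}: overlap with memorized assembly = {overlap:.2f}")
```

With plasticity strong enough relative to the baseline connectivity, the replayed winners overlap heavily with the memorized assemblies, which is the qualitative behavior the abstract attributes to the model; the exact thresholds and guarantees are the subject of the paper's analysis, not of this sketch.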
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe these empirical results show the importance of our modeling assumptions at the most basic, neuronal level of representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies [9.757971977909683]
We study the emergence of statistical learning in NEMO, a computational model of the brain.
We show that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices.
arXiv Detail & Related papers (2024-06-11T20:51:50Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in these terms.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z) - A Neural Dynamic Model based on Activation Diffusion and a
Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z) - Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z) - Towards a Neural Model for Serial Order in Frontal Cortex: a Brain
Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z) - Neuronal Sequence Models for Bayesian Online Inference [0.0]
Sequential neuronal activity underlies a wide range of processes in the brain.
Neuroscientific evidence for neuronal sequences has been reported in domains as diverse as perception, motor control, speech, spatial navigation and memory.
We review key findings about neuronal sequences and relate these to the concept of online inference on sequences as a model of sensory-motor processing and recognition.
arXiv Detail & Related papers (2020-04-02T10:52:54Z)