CogNGen: Constructing the Kernel of a Hyperdimensional Predictive
Processing Cognitive Architecture
- URL: http://arxiv.org/abs/2204.00619v1
- Date: Thu, 31 Mar 2022 04:44:28 GMT
- Title: CogNGen: Constructing the Kernel of a Hyperdimensional Predictive
Processing Cognitive Architecture
- Authors: Alexander Ororbia, M. Alex Kelly
- Abstract summary: We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
- Score: 79.07468367923619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new cognitive architecture that combines two neurobiologically
plausible, computational models: (1) a variant of predictive processing known
as neural generative coding (NGC) and (2) hyperdimensional, vector-symbolic
models of human memory. We draw inspiration from well-known cognitive
architectures such as ACT-R, Soar, Leabra, and Spaun/Nengo. Our cognitive
architecture, the COGnitive Neural GENerative system (CogNGen), is in broad
agreement with these architectures, but provides a level of detail between
ACT-R's high-level, symbolic description of human cognition and Spaun's
low-level neurobiological description. CogNGen creates the groundwork for
developing agents that learn continually from diverse tasks and model human
performance at larger scales than what is possible with existent cognitive
architectures. We aim to develop a cognitive architecture that has the power of
modern machine learning techniques while retaining long-term memory,
single-trial learning, transfer-learning, planning, and other capacities
associated with high-level cognition. We test CogNGen on a set of maze-learning
tasks, including mazes that test short-term memory and planning, and find that
the addition of vector-symbolic models of memory improves the ability of the
NGC reinforcement learning model to master the maze task. Future work includes
testing CogNGen on more tasks and exploring methods for efficiently scaling
hyperdimensional memory models to lifetime learning.
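The vector-symbolic memory component described in the abstract can be illustrated with a minimal toy sketch. This is not CogNGen's actual implementation; the dimensionality, the choice of bipolar hypervectors, elementwise-product binding, and majority-sign bundling are illustrative assumptions drawn from one common vector-symbolic scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (high dimension gives quasi-orthogonality)

def random_hv():
    """Random bipolar hypervector with +1/-1 entries."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind a role to a filler via elementwise product (self-inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Superpose several vectors into one trace via elementwise majority sign."""
    return np.sign(np.sum(vs, axis=0))

def cosine(a, b):
    """Cosine similarity used as the cleanup/retrieval measure."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Store two role-filler pairs (hypothetical names) in a single memory trace.
state, action = random_hv(), random_hv()
left, forward = random_hv(), random_hv()
memory = bundle(bind(state, left), bind(action, forward))

# Binding the trace with a role again unbinds it, recovering a noisy
# copy of the stored filler that is close to the original.
recovered = bind(memory, state)
print(cosine(recovered, left))     # high: recovered filler resembles 'left'
print(cosine(recovered, forward))  # near zero: unrelated fillers stay dissimilar
```

The key property for a memory-augmented agent is that a single fixed-width vector holds several role-filler pairs at once, and any pair can be queried back out with one elementwise operation plus a similarity-based cleanup.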
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Brain Networks and Intelligence: A Graph Neural Network Based Approach to Resting State fMRI Data [2.193937336601403]
We present a novel modeling architecture called BrainRGIN for predicting intelligence (fluid, crystallized, and total intelligence) using graph neural networks on rsfMRI derived connectivity matrices.
Our approach incorporates a clustering-based embedding and graph isomorphism network in the graph convolutional layer to reflect the nature of the brain sub-network organization.
arXiv Detail & Related papers (2023-11-06T20:58:07Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- Towards a Predictive Processing Implementation of the Common Model of Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z)
- A brain basis of dynamical intelligence for AI and computational neuroscience [0.0]
More brain-like capacities may demand new theories, models, and methods for designing artificial learning systems.
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
arXiv Detail & Related papers (2021-05-15T19:49:32Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it provides the language-ready brain with the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.