Hippocampus-Inspired Cognitive Architecture (HICA) for Operant
Conditioning
- URL: http://arxiv.org/abs/2212.08626v1
- Date: Fri, 16 Dec 2022 18:00:21 GMT
- Title: Hippocampus-Inspired Cognitive Architecture (HICA) for Operant
Conditioning
- Authors: Deokgun Park, Md Ashaduzzaman Rubel Mondol, SM Mazharul Islam,
Aishwarya Pothula
- Abstract summary: We propose a Hippocampus-Inspired Cognitive Architecture (HICA) as a neural mechanism for operant conditioning.
HICA is composed of two different types of modules.
- Score: 1.2955718209635252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The neural implementation of operant conditioning with only a few trials remains unclear.
We propose a Hippocampus-Inspired Cognitive Architecture (HICA) as a neural
mechanism for operant conditioning. HICA explains a learning mechanism in which
agents can learn a new behavior policy in a few trials, as mammals do in
operant conditioning experiments. HICA is composed of two different types of
modules. One is a universal learning module type that represents a cortical
column in the gray matter of the neocortex. Its working principle is modeled as
Modulated Heterarchical Prediction Memory (mHPM). In mHPM, each module learns
to predict the next input vector given the sequence of input vectors from
lower layers and the context vectors from higher layers. The prediction is
fed into the lower layers as a context signal (top-down feedback signaling),
and into the higher layers as an input signal (bottom-up feedforward
signaling). Rewards modulate the learning rate in those modules to memorize
meaningful sequences effectively. In mHPM, each module updates in a local and
distributed way, in contrast to conventional end-to-end learning driven by
backpropagation of a single objective loss. This local structure enables a
heterarchical network of modules. The second type is an innate, special-purpose
module representing various organs of the brain's subcortical system. Modules
modeling organs such as the amygdala, hippocampus, and reward center are
pre-programmed to enable instinctive behaviors. The hippocampus plays the role
of a simulator: it is an autoregressive prediction model of the topmost-level
signal with a looped memory structure, while cortical columns act as lower
layers that provide detailed information to the simulation. The simulation
becomes the basis for learning with few trials and the deliberate planning
required for operant conditioning.
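
To make the described mechanism concrete, below is a minimal NumPy sketch of an mHPM-style module and a hippocampus-style autoregressive rollout. It assumes a linear predictor, a fixed-length input history, and simple multiplicative reward modulation of the learning rate; the names (`MHPMModule`, `simulate`) and all hyperparameters are hypothetical, since the abstract does not specify these details.

```python
import numpy as np

class MHPMModule:
    """Sketch of one mHPM-style cortical-column module (illustrative only;
    not the authors' code). The module predicts the next bottom-up input
    from its recent input history plus a top-down context vector, and
    learns with a purely local, reward-modulated delta rule rather than
    end-to-end backpropagation."""

    def __init__(self, input_dim, context_dim, history=3, base_lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Single weight matrix: [flattened input history; context] -> prediction.
        self.W = rng.normal(0.0, 0.1, (input_dim, history * input_dim + context_dim))
        self.base_lr = base_lr
        self.buffer = [np.zeros(input_dim) for _ in range(history)]

    def _features(self, context):
        return np.concatenate(self.buffer + [context])

    def predict(self, context):
        """Predict the succeeding input; in HICA this prediction is sent up
        as a feedforward input and down as a context (feedback) signal."""
        return self.W @ self._features(context)

    def update(self, new_input, context, reward=0.0):
        """Local update: the prediction error adjusts only this module's
        weights, with reward scaling the learning rate (neuromodulation)."""
        feats = self._features(context)
        error = new_input - self.W @ feats
        lr = self.base_lr * (1.0 + max(reward, 0.0))  # reward -> faster memorization
        self.W += lr * np.outer(error, feats)
        self.buffer = self.buffer[1:] + [new_input]   # slide the input history
        return error


def simulate(top, steps, context):
    """Hippocampus-as-simulator sketch: autoregressively feed the top-level
    module's own predictions back into its memory loop to imagine futures.
    (A real agent would copy the module state before simulating.)"""
    imagined = []
    for _ in range(steps):
        x_hat = top.predict(context)
        top.buffer = top.buffer[1:] + [x_hat]  # loop structure of memory
        imagined.append(x_hat)
    return imagined


# Toy two-level heterarchy: the lower module's prediction travels up as the
# upper module's input, and the upper prediction comes back down as context.
low = MHPMModule(input_dim=8, context_dim=8)
high = MHPMModule(input_dim=8, context_dim=8)
for t in range(200):
    x = np.sin(0.1 * t + np.arange(8))    # toy sensory stream
    up = low.predict(np.zeros(8))         # bottom-up feedforward signal
    down = high.predict(np.zeros(8))      # top-down feedback signal
    reward = 1.0 if t % 20 == 0 else 0.0  # sparse reward pulses
    low.update(x, down, reward)
    high.update(up, np.zeros(8), reward)
futures = simulate(high, steps=5, context=np.zeros(8))
```

The point the sketch illustrates is that each module's weight update uses only its own local prediction error, which is what permits a heterarchical wiring of modules instead of a single backpropagation graph, and that the top-level module can be rolled forward on its own predictions to act as a simulator for few-trial learning and planning.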
Related papers
- A Global Data-Driven Model for The Hippocampus and Nucleus Accumbens of Rat From The Local Field Potential Recordings (LFP) [0.19999259391104385]
Local Field Potential (LFP) signals represent the dynamic flow of information in brain neural networks.
This paper identifies a global data-driven model to predict brain signals in different situations.
Morphine and natural rewards do not change the dynamic features of neurons in these regions.
arXiv Detail & Related papers (2024-05-10T15:58:39Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and transfer to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aim to introduce the structural capability of functions with sparse arguments into deep learning.
Most prior work on feed-forward networks that combine top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Deep Imitation Learning for Bimanual Robotic Manipulation [70.56142804957187]
We present a deep imitation learning framework for robotic bimanual manipulation.
A core challenge is to generalize the manipulation skills to objects in different locations.
We propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control.
arXiv Detail & Related papers (2020-10-11T01:40:03Z)
- RE-MIMO: Recurrent and Permutation Equivariant Neural MIMO Detection [85.44877328116881]
We present a novel neural network for symbol detection in wireless communication systems.
It is motivated by several important considerations in wireless communication systems.
We compare its performance against existing methods; the results show that our network efficiently handles a variable number of transmitters.
arXiv Detail & Related papers (2020-06-30T22:43:01Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Hierarchical Predictive Coding Models in a Deep-Learning Framework [1.370633147306388]
We review some of the better-known models of predictive coding.
We also survey some recent attempts to cast these models within a deep learning framework.
arXiv Detail & Related papers (2020-05-07T03:39:57Z)
- A Neuromorphic Paradigm for Online Unsupervised Clustering [0.6091702876917281]
A computational paradigm based on neuroscientific concepts is proposed and shown to be capable of online unsupervised clustering.
All operations, both training and inference, are localized and efficient.
The prototype column is simulated with a semi-synthetic benchmark and is shown to have performance characteristics on par with classic k-means.
arXiv Detail & Related papers (2020-04-25T14:02:34Z)