A STDP-based Encoding Algorithm for Associative and Composite Data
- URL: http://arxiv.org/abs/2104.12249v3
- Date: Mon, 9 Aug 2021 01:52:05 GMT
- Title: A STDP-based Encoding Algorithm for Associative and Composite Data
- Authors: Hong-Gyu Yoon and Pilwon Kim
- Abstract summary: This work proposes a practical memory model based on STDP that can store and retrieve high-dimensional associative data.
The model combines STDP dynamics with an encoding scheme for distributed representations and can handle multiple composite data in a continuous manner.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spike-timing-dependent plasticity (STDP) is a biological process of synaptic
modification caused by the difference of firing order and timing between
neurons. One of the neurodynamical roles of STDP is to form a macroscopic
geometrical structure in the neuronal state space in response to a periodic
input. This work proposes a practical memory model based on STDP that can store
and retrieve high-dimensional associative data. The model combines STDP
dynamics with an encoding scheme for distributed representations and can handle
multiple composite data in a continuous manner. In the auto-associative memory
task where a group of images is continuously streamed to the model, the images
are successfully retrieved from an oscillating neural state whenever a proper
cue is given. In the second task that deals with semantic memories embedded
from sentences, the results show that words can recall multiple sentences
simultaneously or one exclusively, depending on their grammatical relations.
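The STDP process the abstract refers to can be illustrated with the standard pair-based exponential update rule. This is a generic textbook sketch, not the paper's actual dynamics or encoding scheme; the amplitudes and time constants (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS) are placeholder values chosen for illustration.

```python
import numpy as np

# Illustrative pair-based STDP window (not the paper's model).
# Parameter values are assumptions chosen for demonstration.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt):
    """Synaptic weight change for spike-time difference dt = t_post - t_pre (ms).

    Pre-before-post firing (dt > 0) potentiates the synapse;
    post-before-pre firing (dt < 0) depresses it, with the magnitude
    decaying exponentially as |dt| grows.
    """
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)
```

The sign of the update thus depends only on the firing order, and the magnitude on the timing difference, which is the "difference of firing order and timing" the abstract describes.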
Related papers
- AM-MTEEG: Multi-task EEG classification based on impulsive associative memory [6.240145569484483]
We propose a multi-task (MT) classification model, called AM-MTEEG, inspired by the principles of learning and memory in the human hippocampus.
The model treats the EEG classification of each individual as an independent task and facilitates feature sharing across individuals.
Experimental results in two BCI competition datasets show that our model improves average accuracy compared to state-of-the-art models.
arXiv Detail & Related papers (2024-09-27T01:33:45Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general-purpose knowledge graphs (KGs) with millions of entities and thousands of relation types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- ROSE: A Neurocomputational Architecture for Syntax [0.0]
This article proposes a neurocomputational architecture for syntax, termed the ROSE model.
Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), and are coded at the single-unit and ensemble level.
Distinct forms of low-frequency coupling and phase-amplitude coupling (delta-theta coupling via pSTS-IFG; theta-gamma coupling via IFG to conceptual hubs) then encode these structures onto distinct workspaces (E).
arXiv Detail & Related papers (2023-03-15T18:44:37Z)
- TranSG: Transformer-Based Skeleton Graph Prototype Contrastive Learning with Structure-Trajectory Prompted Reconstruction for Person Re-Identification [63.903237777588316]
Person re-identification (re-ID) via 3D skeleton data is an emerging topic with prominent advantages.
Existing methods usually design skeleton descriptors with raw body joints or perform skeleton sequence representation learning.
We propose a generic Transformer-based Skeleton Graph prototype contrastive learning (TranSG) approach with structure-trajectory prompted reconstruction.
arXiv Detail & Related papers (2023-03-13T02:27:45Z)
- Extended Graph Temporal Classification for Multi-Speaker End-to-End ASR [77.82653227783447]
We propose an extension of GTC to model the posteriors of both labels and label transitions by a neural network.
As an example application, we use the extended GTC (GTC-e) for the multi-speaker speech recognition task.
arXiv Detail & Related papers (2022-03-01T05:02:02Z)
- A probabilistic latent variable model for detecting structure in binary data [0.6767885381740952]
We introduce a novel, probabilistic binary latent variable model to detect noisy or approximate repeats of patterns in sparse binary data.
The model's capability is demonstrated by extracting structure in recordings from retinal neurons.
We apply our model to spiking responses recorded in retinal ganglion cells during stimulation with a movie.
arXiv Detail & Related papers (2022-01-26T18:37:35Z)
- Brain dynamics via Cumulative Auto-Regressive Self-Attention [0.0]
We present a model that is considerably shallower than deep graph neural networks (GNNs).
Our model learns the autoregressive structure of individual time series and estimates directed connectivity graphs.
We demonstrate our results on a functional neuroimaging dataset classifying schizophrenia patients and controls.
arXiv Detail & Related papers (2021-11-01T21:50:35Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.