Spiking Associative Memory for Spatio-Temporal Patterns
- URL: http://arxiv.org/abs/2006.16684v1
- Date: Tue, 30 Jun 2020 11:08:31 GMT
- Title: Spiking Associative Memory for Spatio-Temporal Patterns
- Authors: Simon Davidson, Stephen B. Furber and Oliver Rhodes
- Abstract summary: Spike Timing Dependent Plasticity is a form of learning that has been demonstrated in real cortical tissue.
We develop a simple learning rule called cyclic STDP that can extract patterns in the precise spiking times of neurons.
We show that a population of neurons endowed with this learning rule can act as an effective short-term associative memory.
- Score: 0.21094707683348418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spike Timing Dependent Plasticity is a form of learning that has been
demonstrated in real cortical tissue, but attempts to use it in artificial
systems have not produced good results. This paper seeks to remedy this with
two significant advances. The first is the development of a simple stochastic
learning rule called cyclic STDP that can extract patterns encoded in the
precise spiking times of a group of neurons. We show that a population of
neurons endowed with this learning rule can act as an effective short-term
associative memory, storing and reliably recalling a large set of pattern
associations over an extended period of time.
The second major theme examines the challenges of training a neuron to produce
a spike at a precise time, and of maintaining the fidelity of spike recall
times as further learning occurs. The strong constraint of working with
precisely-timed spikes (so-called temporal coding) is mandated by the learning
rule, but is also consistent with the belief that such an encoding scheme is
necessary to render a spiking neural network a competitive solution for
flexible intelligent systems in continuous learning environments.
The encoding and learning rules are demonstrated in the design of a
single-layer associative memory (an input layer consisting of 3,200 spiking
neurons fully-connected to a similar sized population of memory neurons), which
we simulate and characterise. Design considerations and clarification of the
role of parameters under the control of the designer are explored.
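The abstract describes learning driven by the precise relative timing of pre- and postsynaptic spikes. The paper's cyclic STDP rule is not specified here, so the following is only a minimal sketch of a generic pair-based STDP weight update for orientation; all parameter names and values (`a_plus`, `tau_plus`, etc.) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Generic pair-based STDP weight update (illustrative sketch only;
    the paper's cyclic STDP rule differs in detail).

    w      : current synaptic weight
    t_pre  : presynaptic spike time (ms)
    t_post : postsynaptic spike time (ms)
    """
    dt = t_post - t_pre
    if dt >= 0:
        # pre fires before post: potentiation, decaying with the gap
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        # post fires before pre: depression
        dw = -a_minus * np.exp(dt / tau_minus)
    # keep the weight within its allowed range
    return float(np.clip(w + dw, 0.0, w_max))
```

In this generic form, causally ordered spike pairs (pre before post) strengthen the synapse and anti-causal pairs weaken it, which is the timing sensitivity that pattern extraction from precise spike times relies on.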
Related papers
- Temporal Chunking Enhances Recognition of Implicit Sequential Patterns [11.298233331771975]
We propose a neuro-inspired approach that compresses temporal sequences into context-tagged chunks.
These tags are generated during an offline sleep phase and serve as compact references to past experience.
We evaluate this idea in a controlled synthetic environment designed to reveal the limitations of traditional neural-network-based sequence learners.
arXiv Detail & Related papers (2025-05-31T14:51:08Z)
- Learning in Spiking Neural Networks with a Calcium-based Hebbian Rule for Spike-timing-dependent Plasticity [0.46085106405479537]
We present a Hebbian local learning rule that models synaptic modification as a function of calcium traces tracking neuronal activity.
We show how our model is sensitive to correlated spiking activity and how this enables it to modulate the learning rate of the network without altering the mean firing rate of the neurons.
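The calcium-trace idea above can be illustrated generically: each neuron maintains an exponentially decaying trace that is incremented when it spikes, and the weight change is gated by coincidence of the pre- and postsynaptic traces. This is a hedged toy sketch, not the cited paper's actual rule; all names and parameters (`tau_ca`, `eta`, etc.) are assumptions for illustration.

```python
import numpy as np

def simulate_calcium_hebbian(pre_spikes, post_spikes, steps,
                             dt=1.0, tau_ca=50.0, eta=0.005, w0=0.5):
    """Toy Hebbian rule gated by calcium-like activity traces
    (illustrative only, not the cited paper's model).

    pre_spikes, post_spikes : sets of integer time steps at which the
                              pre-/postsynaptic neuron fires.
    """
    c_pre = c_post = 0.0
    w = w0
    decay = np.exp(-dt / tau_ca)  # exponential trace decay per step
    for t in range(steps):
        c_pre = c_pre * decay + (1.0 if t in pre_spikes else 0.0)
        c_post = c_post * decay + (1.0 if t in post_spikes else 0.0)
        # Hebbian term: weight grows with the product of the two traces,
        # so correlated firing drives larger changes than uncorrelated firing
        w += eta * c_pre * c_post * dt
    return w
```

Because the update depends on the product of decaying traces, closely correlated pre/post firing produces a larger net weight change than the same number of spikes spread far apart, matching the correlation sensitivity described above.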
arXiv Detail & Related papers (2025-04-09T11:39:59Z)
- ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks [1.5931140598271163]
We build a model for efficient learning of sequences using only local, always-on, and phase-free plasticity.
We showcase the capabilities of ELiSe in a mock-up of birdsong learning, and demonstrate its flexibility with respect to parametrization.
arXiv Detail & Related papers (2024-02-26T17:30:34Z)
- TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling [54.97005925277638]
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
It remains a challenging task for state-of-the-art spiking neural networks (SNNs) to establish long-term temporal dependency between distant cues.
We propose a novel biologically inspired Two-Compartment Leaky Integrate-and-Fire spiking neuron model, dubbed TC-LIF.
arXiv Detail & Related papers (2023-08-25T08:54:41Z)
- Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF.
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
arXiv Detail & Related papers (2023-07-14T08:51:03Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Axonal Delay As a Short-Term Memory for Feed Forward Deep Spiking Neural Networks [3.985532502580783]
Recent studies have found that the time delay of neurons plays an important role in the learning process.
Configuring the precise timing of spikes is a promising direction for understanding and improving the transmission of temporal information in SNNs.
In this paper, we verify the effectiveness of integrating time delay into supervised learning and propose a module that modulates the axonal delay through short-term memory.
arXiv Detail & Related papers (2022-04-20T16:56:42Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Deep Metric Learning with Locality Sensitive Angular Loss for Self-Correcting Source Separation of Neural Spiking Signals [77.34726150561087]
We propose a methodology based on deep metric learning to address the need for automated post-hoc cleaning and robust separation filters.
We validate this method with an artificially corrupted label set based on source-separated high-density surface electromyography recordings.
This approach enables a neural network to learn to accurately decode neurophysiological time series using any imperfect method of labelling the signal.
arXiv Detail & Related papers (2021-10-13T21:51:56Z)
- Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate-and-Fire neurons, which is capable of training an SNN to learn complex spatio-temporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neuron and synapses which retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
- Bio-plausible Unsupervised Delay Learning for Extracting Temporal Features in Spiking Neural Networks [0.548253258922555]
Plasticity of the conduction delay between neurons plays a fundamental role in learning.
Understanding the precise adjustment of synaptic delays could help us in developing effective brain-inspired computational models.
arXiv Detail & Related papers (2020-11-18T16:25:32Z)
- A Deep 2-Dimensional Dynamical Spiking Neuronal Network for Temporal Encoding trained with STDP [10.982390333064536]
We show that a large, deep layered SNN with dynamical, chaotic activity mimicking the mammalian cortex is capable of encoding information from temporal data.
We argue that the randomness inherent in the network weights allows the neurons to form groups that encode the input temporal data after self-organizing with STDP.
We analyze the network in terms of network entropy as a metric of information transfer.
arXiv Detail & Related papers (2020-09-01T17:12:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.