Bio-plausible Unsupervised Delay Learning for Extracting Temporal
Features in Spiking Neural Networks
- URL: http://arxiv.org/abs/2011.09380v1
- Date: Wed, 18 Nov 2020 16:25:32 GMT
- Title: Bio-plausible Unsupervised Delay Learning for Extracting Temporal
Features in Spiking Neural Networks
- Authors: Alireza Nadafian, Mohammad Ganjtabesh
- Abstract summary: The plasticity of the conduction delay between neurons plays a fundamental role in learning.
Understanding the precise adjustment of synaptic delays could help us develop effective brain-inspired computational models.
- Score: 0.548253258922555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The plasticity of the conduction delay between neurons plays a fundamental
role in learning. However, the exact underlying mechanisms of this modulation in the
brain remain an open problem. Understanding the precise adjustment of synaptic delays
could help us develop effective brain-inspired computational models that provide
insights aligned with the experimental evidence. In this paper, we propose an
unsupervised, biologically plausible learning rule for adjusting the synaptic delays
in spiking neural networks. We then provide mathematical proofs showing that our
learning rule gives a neuron the ability to learn repeating spatio-temporal patterns.
Furthermore, experimental results from applying an STDP-based spiking neural network
equipped with our proposed delay learning rule to a Random Dot Kinematogram indicate
the efficacy of the proposed rule in extracting temporal features.
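The abstract does not state the actual delay-learning rule. Purely as a loose illustration of the general idea, the hypothetical `update_delays` below nudges each synapse's conduction delay so that presynaptic spikes arrive closer to the postsynaptic spike time; the function name, learning rate, and clipping bounds are all invented for this sketch:

```python
# Illustrative sketch only: the paper's actual delay-learning rule is not
# given in this summary. This toy update moves each spike's arrival time
# (presynaptic spike time + conduction delay) toward the postsynaptic
# spike time, one plausible reading of "unsupervised delay learning".

def update_delays(delays, t_pre, t_post, lr=0.1, d_min=0.0, d_max=20.0):
    """delays, t_pre: per-synapse lists (ms); t_post: postsynaptic spike time (ms)."""
    new_delays = []
    for d, t in zip(delays, t_pre):
        arrival = t + d                      # when the spike reaches the soma
        d = d + lr * (t_post - arrival)      # shift arrival toward t_post
        new_delays.append(min(max(d, d_min), d_max))  # keep delay physical
    return new_delays

# Both arrivals (1.0 ms and 5.0 ms) move toward the postsynaptic spike at 4.0 ms.
delays = update_delays([1.0, 5.0], t_pre=[0.0, 0.0], t_post=4.0)
```

Repeated over presentations of a recurring spatio-temporal pattern, such a rule would make the pattern's spikes arrive coincidentally, which is the intuition behind delay-based temporal feature extraction.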
Related papers
- TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential
Modelling [54.97005925277638]
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
It remains a challenging task for state-of-the-art spiking neural networks (SNNs) to establish long-term temporal dependency between distant cues.
We propose a novel biologically inspired Two-Compartment Leaky Integrate-and-Fire spiking neuron model, dubbed TC-LIF.
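The summary gives no equations for TC-LIF, so the sketch below is only a generic two-compartment leaky integrate-and-fire step; the decay constants, coupling `g_c`, and threshold are placeholders, not the paper's parameters:

```python
# Generic two-compartment LIF sketch (TC-LIF's actual dynamics are not in
# this summary; all constants here are invented for illustration).
def tc_lif_step(v_d, v_s, i_in, beta_d=0.9, beta_s=0.9, g_c=0.5, v_th=1.0):
    v_d = beta_d * v_d + i_in          # dendritic compartment integrates input
    v_s = beta_s * v_s + g_c * v_d     # somatic compartment couples to dendrite
    spike = v_s >= v_th
    if spike:
        v_s = 0.0                      # hard reset after spiking
    return v_d, v_s, spike
```

The extra compartment gives the neuron a second, slower state variable, which is the kind of mechanism the paper leverages for long-term temporal dependency.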
arXiv Detail & Related papers (2023-08-25T08:54:41Z)
- Beyond Weights: Deep learning in Spiking Neural Networks with pure
synaptic-delay training [0.9208007322096533]
We show that training ONLY the delays in feed-forward spiking networks using backpropagation can achieve performance comparable to the more conventional weight training.
We demonstrate the task performance of delay-only training on MNIST and Fashion-MNIST datasets in preliminary experiments.
arXiv Detail & Related papers (2023-06-09T20:14:10Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of
Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation [22.18972584098911]
Fully test-time adaptation aims to adapt the network model based on sequential analysis of input samples during the inference stage.
We take inspiration from biologically plausible learning, where neuron responses are tuned via a local synapse-change procedure.
We design a soft Hebbian learning process which provides an unsupervised and effective mechanism for online adaptation.
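The paper's soft Hebbian process is not spelled out in this summary; as one generic possibility, an Oja-style Hebbian update (Hebbian growth plus an activity-gated decay that keeps weights bounded) can be written as:

```python
import numpy as np

# Illustrative Oja-style Hebbian update, not the paper's exact rule:
# weights grow with correlated pre/post activity, while the post**2-gated
# decay term prevents unbounded growth.
def soft_hebbian(W, pre, post, lr=0.01):
    # W: (n_post, n_pre); pre: (n_pre,); post: (n_post,)
    return W + lr * (np.outer(post, pre) - W * post[:, None] ** 2)
```

An update of this form is fully local (each weight change depends only on its own pre- and post-synaptic activity), which is what makes it attractive for unsupervised online adaptation.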
arXiv Detail & Related papers (2023-03-02T02:18:56Z)
- Spiking Neural Networks for event-based action recognition: A new task to understand their advantage [1.4348901037145936]
Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics.
We show how Spiking neurons can enable temporal feature extraction in feed-forward neural networks.
We also show how recurrent SNNs can achieve comparable results to LSTM with a smaller number of parameters.
arXiv Detail & Related papers (2022-09-29T16:22:46Z)
- Axonal Delay As a Short-Term Memory for Feed Forward Deep Spiking Neural
Networks [3.985532502580783]
Recent studies have found that the time delay of neurons plays an important role in the learning process.
Configuring the precise timing of spikes is a promising direction for understanding and improving the transmission of temporal information in SNNs.
In this paper, we verify the effectiveness of integrating time delay into supervised learning and propose a module that modulates the axonal delay through short-term memory.
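The summary does not describe the proposed module itself. Purely to illustrate why an axonal delay acts as a short-term memory, a delay can be implemented as a fixed-length buffer that holds recent spikes in flight:

```python
from collections import deque

# Sketch only: an axonal delay as a FIFO spike buffer. While a spike
# traverses the buffer, the delay line is effectively storing it, i.e.
# the delay itself serves as a short-term memory.
class DelayLine:
    """Delays its input by a fixed number of timesteps."""
    def __init__(self, delay_steps):
        self.buf = deque([0.0] * delay_steps)

    def step(self, x):
        self.buf.append(x)           # spike enters the axon
        return self.buf.popleft()    # spike that entered delay_steps ago exits
```

A learnable `delay_steps` per axon would then let training decide how long each pathway should hold information.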
arXiv Detail & Related papers (2022-04-20T16:56:42Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z)
- Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
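As background only (this is the standard predictive-coding inference loop, not the paper's relaxed variant), prediction-error minimization can be sketched as gradient descent on a squared error under an assumed linear generative model `x_hat = W @ z`:

```python
import numpy as np

# Toy predictive-coding inference step (illustrative; the paper's relaxed
# implementation differs): refine a latent estimate z by descending the
# gradient of the squared prediction error ||x - W z||^2.
def pc_infer(x, W, z, lr=0.1, steps=50):
    for _ in range(steps):
        err = x - W @ z            # prediction error
        z = z + lr * (W.T @ err)   # gradient step on the latent estimate
    return z
```

The implausible features the paper removes (weight symmetry, 1-1 error units) correspond to the `W.T` and per-element `err` terms in this textbook version.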
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.