DelRec: learning delays in recurrent spiking neural networks
- URL: http://arxiv.org/abs/2509.24852v2
- Date: Wed, 15 Oct 2025 13:26:28 GMT
- Title: DelRec: learning delays in recurrent spiking neural networks
- Authors: Alexandre Queant, Ulysse Rançon, Benoit R Cottereau, Timothée Masquelier
- Abstract summary: Spiking neural networks (SNNs) are a bio-inspired alternative to conventional real-valued deep learning models. DelRec is the first SGL-based method to train axonal or synaptic delays in recurrent spiking layers. Our results demonstrate that recurrent delays are critical for temporal processing in SNNs.
- Score: 44.373421535679476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNNs) are a bio-inspired alternative to conventional real-valued deep learning models, with the potential for substantially higher energy efficiency. Interest in SNNs has recently exploded due to a major breakthrough: surrogate gradient learning (SGL), which allows training SNNs with backpropagation, strongly outperforming other approaches. In SNNs, each synapse is characterized not only by a weight but also by a transmission delay. While theoretical works have long suggested that trainable delays significantly enhance expressivity, practical methods for learning them have only recently emerged. Here, we introduce ``DelRec'', the first SGL-based method to train axonal or synaptic delays in recurrent spiking layers, compatible with any spiking neuron model. DelRec leverages a differentiable interpolation technique to handle non-integer delays with well-defined gradients at training time. We show that SNNs with trainable recurrent delays outperform feedforward ones, leading to new state-of-the-art (SOTA) on two challenging temporal datasets (Spiking Speech Command, an audio dataset, and Permuted Sequential MNIST, a vision one), and match the SOTA on the now saturated Spiking Heidelberg Digit dataset using only vanilla Leaky-Integrate-and-Fire neurons with stateless (instantaneous) synapses. Our results demonstrate that recurrent delays are critical for temporal processing in SNNs and can be effectively optimized with DelRec, paving the way for efficient deployment on neuromorphic hardware with programmable delays. Our code is available at https://github.com/alexmaxad/DelRec.
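The key technical ingredient is the differentiable handling of non-integer delays. The sketch below illustrates the general idea only, not the authors' implementation (see the linked repository for that): a real-valued delay is applied by linearly interpolating a buffered activity history between the two nearest integer time steps, which yields a well-defined gradient with respect to the delay parameter.

```python
import torch

# Minimal sketch of differentiable fractional delays (an illustration of the
# general idea, NOT the DelRec implementation; see the authors' repository).
def delayed_read(history: torch.Tensor, delay: torch.Tensor) -> torch.Tensor:
    """Read each neuron's past activity at a learnable real-valued delay.

    history: (T, N) buffer of past activations, row 0 = most recent step.
    delay:   (N,) non-negative real-valued delays, in units of time steps.
    """
    d = delay.clamp(0.0, history.shape[0] - 2)  # keep both taps inside the buffer
    lo = d.floor().long()                       # integer part: buffer slot
    frac = d - lo                               # fractional part carries the gradient
    cols = torch.arange(history.shape[1])
    # Linear interpolation between the two neighbouring time steps; the result
    # is differentiable w.r.t. `delay` through `frac`.
    return (1.0 - frac) * history[lo, cols] + frac * history[lo + 1, cols]

# Usage: delays = torch.full((128,), 2.3, requires_grad=True) can then be
# optimised jointly with the weights by surrogate gradient learning.
```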
Related papers
- Sparse Axonal and Dendritic Delays Enable Competitive SNNs for Keyword Classification [5.928605435529651]
Training transmission delays in spiking neural networks (SNNs) has been shown to substantially improve their performance on complex temporal tasks. We show that learning either axonal or dendritic delays enables deep feedforward SNNs to reach accuracy comparable to existing synaptic delay learning approaches.
arXiv Detail & Related papers (2026-02-10T12:57:02Z)
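Operationally, an axonal delay shifts a neuron's entire output spike train along the time axis. The snippet below is a simplified sketch of that mechanism with integer delays (not the paper's learning method, which trains the delays):

```python
import torch

# Simplified sketch of per-neuron axonal delays (integer-valued here for
# clarity; the papers above learn them, e.g. with surrogate gradients).
def apply_axonal_delays(spikes: torch.Tensor, delays: torch.Tensor) -> torch.Tensor:
    """spikes: (T, N) spike trains; delays: (N,) non-negative integer delays."""
    T = spikes.shape[0]
    out = torch.zeros_like(spikes)
    for n, d in enumerate(delays.long().tolist()):
        if d < T:
            # neuron n's output reaches its targets d time steps later
            out[d:, n] = spikes[: T - d, n]
    return out
```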
- Efficient Event-based Delay Learning in Spiking Neural Networks [0.1350479308585481]
Spiking Neural Networks (SNNs) compute using sparse communication and are attracting increased attention. We propose a novel event-based training method for SNNs with delays, grounded in the EventProp formalism. Our method supports multiple spikes per neuron and, to the best of our knowledge, is the first delay learning algorithm to be applied to recurrent SNNs.
arXiv Detail & Related papers (2025-01-13T13:44:34Z)
- Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings [1.534667887016089]
Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems.
In SNNs, delays refer to the time needed for one spike to travel from one neuron to another.
It has been shown theoretically that plastic delays greatly increase the expressivity in SNNs.
We propose a new discrete-time algorithm for learning delays in deep feedforward SNNs using backpropagation.
arXiv Detail & Related papers (2023-06-30T14:01:53Z)
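The core trick of this approach, as we read it, is to express a synaptic delay as the position of a weight inside a temporal convolution kernel, and to make that position learnable by smearing the weight over neighbouring taps. The sketch below is a hypothetical simplification using Gaussian interpolation:

```python
import torch

# Hypothetical simplification of the "delay as learnable kernel position"
# idea: weight w acts at real-valued delay p within a length-K temporal
# kernel; a narrow Gaussian spreads w over integer taps so p gets a gradient.
def delay_kernel(w: torch.Tensor, p: torch.Tensor, K: int, sigma: float = 0.5):
    taps = torch.arange(K, dtype=torch.float32)
    g = torch.exp(-((taps - p) ** 2) / (2.0 * sigma**2))
    g = g / g.sum()   # normalised interpolation weights over the K taps
    return w * g      # (K,) kernel: convolving with it applies delay ~p

# Usage sketch: build one such kernel per synapse, apply it with a 1-D
# convolution over the spike train; training then moves p, i.e. the delay.
```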
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
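The forward-in-time idea can be caricatured in a few lines. The toy below is our own hedged sketch (a linear readout stands in for a spiking layer, and all names are invented for illustration), showing how a running presynaptic trace replaces backpropagation through time:

```python
import torch

# Toy, self-contained sketch of forward-in-time learning (our illustration,
# not the OTTT algorithm itself): the weight gradient at each step combines
# an instantaneous output error with a low-pass presynaptic trace, so no
# backward pass through time is needed.
T, n_in, n_out = 20, 8, 4
W = torch.zeros(n_out, n_in)
inputs = (torch.rand(T, n_in) < 0.2).float()   # random toy spike trains
targets = torch.zeros(T, n_out)                # dummy regression targets
trace = torch.zeros(n_in)                      # running presynaptic trace
grad = torch.zeros_like(W)
lam, lr = 0.9, 1e-2                            # trace decay, learning rate
for t in range(T):
    trace = lam * trace + inputs[t]            # update the trace forward in time
    out = W @ inputs[t]                        # linear stand-in for a spiking layer
    err = out - targets[t]                     # instantaneous error signal
    grad += torch.outer(err, trace)            # online gradient contribution
W -= lr * grad                                 # single parameter update
```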
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks [5.986408771459261]
Biological spiking neural networks (SNNs) can temporally encode information in their outputs, whereas artificial neural networks (ANNs) conventionally do not.
Here we show that temporal coding such as rank coding (RC) inspired by SNNs can also be applied to conventional ANNs such as LSTMs.
RC-training also significantly reduces time-to-insight during inference, with a minimal decrease in accuracy.
We demonstrate these benefits on two toy sequence-classification problems and on a temporally-encoded MNIST dataset, where our RC model achieves 99.19% accuracy after the first input time-step.
arXiv Detail & Related papers (2021-10-06T15:51:38Z)
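One way to picture rank-coding-style training (a hedged sketch of our own, not the paper's code): apply the classification loss at every time step so the recurrent network is rewarded for becoming confident as early as possible, then stop inference at the first confident step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hedged sketch of per-time-step training that encourages early, confident
# predictions (our reading of the rank-coding spirit, not the paper's code).
rnn = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 10)
x = torch.randn(16, 784, 1)                 # e.g. pixel-by-pixel sequential MNIST
y = torch.randint(0, 10, (16,))
h, _ = rnn(x)                               # hidden states at every step: (B, T, 32)
logits = head(h)                            # per-step class scores: (B, T, 10)
T = x.shape[1]
# Loss at every time step: early steps are penalised too, so the model
# learns to commit as soon as the evidence allows.
loss = F.cross_entropy(logits.reshape(-1, 10), y.repeat_interleave(T))
# At inference, one can read out at the first step whose softmax confidence
# crosses a threshold, reducing time-to-decision.
```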
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing training methods imitate the backpropagation framework and feedforward architectures of artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
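The implicit-differentiation trick this relies on can be sketched compactly (a generic fixed-point version in our own notation, not the paper's exact algorithm): instead of unrolling the forward dynamics, the backward pass solves a linear system at the equilibrium using vector-Jacobian products.

```python
import torch

# Generic sketch of implicit differentiation at a fixed point a* = f(a*, x)
# (our notation; not the paper's exact algorithm). The backward pass solves
# v = grad_out + (df/da)^T v by fixed-point iteration instead of unrolling.
def grad_at_equilibrium(f, a_star, x, grad_out, n_iter=30):
    a = a_star.detach().requires_grad_(True)
    fa = f(a, x)                  # one step kept on the autograd graph
    v = grad_out
    for _ in range(n_iter):
        # vector-Jacobian product (df/da)^T v via autograd
        (jtv,) = torch.autograd.grad(fa, a, v, retain_graph=True)
        v = grad_out + jtv
    return v                      # = dL/da*; push through f's parameters next
```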
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement, together with a full data learning method that makes full use of the training data.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
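A TDNN over spectrogram frames is essentially a stack of dilated 1-D convolutions along the time axis; the block below is a generic sketch (channel counts and kernel sizes are placeholders, not the paper's architecture).

```python
import torch
import torch.nn as nn

# Generic TDNN-style block: dilated 1-D convolutions over time. Padding is
# chosen so the number of frames is preserved at every layer.
tdnn = nn.Sequential(
    nn.Conv1d(257, 256, kernel_size=5, dilation=1, padding=2), nn.ReLU(),
    nn.Conv1d(256, 256, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
    nn.Conv1d(256, 257, kernel_size=3, dilation=4, padding=4),
)
x = torch.randn(1, 257, 100)   # (batch, frequency bins, frames)
mask = torch.sigmoid(tdnn(x))  # e.g. a time-frequency mask for enhancement
```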
This list is automatically generated from the titles and abstracts of the papers on this site.