Traces Propagation: Memory-Efficient and Scalable Forward-Only Learning in Spiking Neural Networks
- URL: http://arxiv.org/abs/2509.13053v2
- Date: Fri, 17 Oct 2025 12:31:33 GMT
- Title: Traces Propagation: Memory-Efficient and Scalable Forward-Only Learning in Spiking Neural Networks
- Authors: Lorenzo Pes, Bojian Yin, Sander Stuijk, Federico Corradi
- Abstract summary: Spiking Neural Networks (SNNs) provide an efficient framework for processing dynamic spatio-temporal signals. A key challenge in training SNNs is solving both spatial and temporal credit assignment.
- Score: 1.6952253597549973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) provide an efficient framework for processing dynamic spatio-temporal signals and for investigating the learning principles underlying biological neural systems. A key challenge in training SNNs is to solve both spatial and temporal credit assignment. The dominant approach for training SNNs is Backpropagation Through Time (BPTT) with surrogate gradients. However, BPTT is in stark contrast with the spatial and temporal locality observed in biological neural systems and leads to high computational and memory demands, limiting efficient training strategies and on-device learning. Although existing local learning rules achieve local temporal credit assignment by leveraging eligibility traces, they fail to address spatial credit assignment without resorting to auxiliary layer-wise matrices, which increase memory overhead and hinder scalability, especially on embedded devices. In this work, we propose Traces Propagation (TP), a forward-only, memory-efficient, scalable, and fully local learning rule that combines eligibility traces with a layer-wise contrastive loss without requiring auxiliary layer-wise matrices. TP outperforms other fully local learning rules on the NMNIST and SHD datasets. On more complex datasets such as DVS-GESTURE and DVS-CIFAR10, TP showcases competitive performance and scales effectively to deeper SNN architectures such as VGG-9, while providing favorable memory scaling compared to prior fully local scalable rules for datasets with a significant number of classes. Finally, we show that TP is well suited for practical fine-tuning tasks, such as keyword spotting on the Google Speech Commands dataset, thus paving the way for efficient learning at the edge.
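As a rough illustration of the mechanism the abstract describes, the sketch below combines a per-synapse eligibility trace with a layer-local contrastive error so that the weight update needs no backward pass across layers or time. The trace kinetics, the push-pull error, and the placeholder anchor responses are all assumptions for illustration, not the authors' exact algorithm.

```python
# Hypothetical sketch of a TP-style forward-only update for one LIF layer.
# Kinetics, the contrastive term, and the anchors are assumed, not TP's
# exact formulation.
import torch

torch.manual_seed(0)
n_in, n_out, T = 100, 50, 20
W = torch.empty(n_out, n_in)
torch.nn.init.kaiming_uniform_(W)
beta, theta, lr = 0.9, 1.0, 1e-3

# Forward-only pass: no computational graph is stored, so memory stays
# O(neurons + synapses) instead of growing with the number of time steps.
x = (torch.rand(T, n_in) < 0.1).float()   # toy input spike trains
v = torch.zeros(n_out)                    # membrane potentials
trace = torch.zeros(n_in)                 # presynaptic eligibility trace
rate = torch.zeros(n_out)                 # output firing rates
for t in range(T):
    v = beta * v + W @ x[t]               # leaky integration
    s = (v >= theta).float()              # spike on threshold crossing
    v = v - s * theta                     # soft reset
    trace = beta * trace + x[t]           # local temporal credit
    rate = rate + s / T

# Layer-local contrastive signal: pull the layer's response toward a
# same-class reference and push it away from a different-class one
# (random placeholders here; TP defines the actual layer-wise loss).
pos, neg = torch.rand(n_out), torch.rand(n_out)
err = (rate - pos) - 0.5 * (rate - neg)   # gradient of a weighted push-pull

# Fully local update: postsynaptic error times presynaptic trace.
W = W - lr * torch.outer(err, trace)
```

Because the error is computed per layer, each layer can update as soon as its own forward pass finishes, which is how the abstract's layer-wise contrastive loss sidesteps the auxiliary layer-wise matrices used by prior local rules.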
Related papers
- Spatio-Temporal Decoupled Learning for Spiking Neural Networks [23.720523101102593]
Spiking neural networks (SNNs) have gained significant attention for their potential to enable energy-efficient intelligence.
While backpropagation through time (BPTT) achieves high accuracy, it incurs substantial memory overhead.
We propose a novel training framework that decouples the spatial and temporal dependencies to achieve both high accuracy and training efficiency for SNNs.
arXiv Detail & Related papers (2025-06-01T18:46:36Z)
- TESS: A Scalable Temporally and Spatially Local Learning Rule for Spiking Neural Networks [6.805933498669221]
Training spiking neural networks (SNNs) on resource-constrained devices remains challenging due to high computational and memory demands.
We introduce TESS, a temporally and spatially local learning rule for training SNNs.
Our approach addresses both temporal and spatial credit assignment by relying solely on locally available signals within each neuron.
arXiv Detail & Related papers (2025-02-03T21:23:15Z)
- S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks [7.573297026523597]
Spiking Neural Networks (SNNs) are biologically plausible models that have been identified as potentially apt for deploying energy-efficient intelligence at the edge.
We propose S-TLLR, a novel three-factor temporal local learning rule inspired by the Spike-Timing Dependent Plasticity (STDP) mechanism.
S-TLLR is designed to have low memory and time complexities, which are independent of the number of time steps, rendering it suitable for online learning on low-power edge devices.
arXiv Detail & Related papers (2023-06-27T05:44:56Z)
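The three-factor structure that the S-TLLR entry above describes pairs an STDP-like eligibility trace (factors one and two: pre- and postsynaptic activity) with a global modulating signal (factor three), keeping memory independent of the number of time steps. A toy sketch, with assumed trace kinetics and an assumed teaching signal standing in for the paper's actual modulator:

```python
# Minimal three-factor, STDP-inspired update in the spirit of S-TLLR.
# Trace decay, the modulator, and all constants are assumptions.
import torch

torch.manual_seed(0)
n_pre, n_post, T = 64, 10, 25
W = torch.randn(n_post, n_pre) * 0.1
alpha, lr = 0.8, 1e-2

pre = (torch.rand(T, n_pre) < 0.2).float()     # presynaptic spike trains
post = (torch.rand(T, n_post) < 0.2).float()   # postsynaptic spike trains
target = torch.zeros(n_post); target[3] = 1.0  # assumed one-hot teaching signal

elig = torch.zeros(n_post, n_pre)              # per-synapse eligibility
x_pre = torch.zeros(n_pre)                     # low-pass trace of pre spikes
for t in range(T):
    x_pre = alpha * x_pre + pre[t]
    # Factors 1 and 2: STDP-like pairing of the pre trace with post spikes.
    elig = alpha * elig + torch.outer(post[t], x_pre)
    # Factor 3: global modulator, here an instantaneous output error.
    m = target - post[t]
    # Online update each step: memory is O(synapses), independent of T.
    W = W + lr * m.unsqueeze(1) * elig
```

The state carried across time steps (x_pre and elig) has fixed size, which is the property that makes such rules attractive for on-device learning.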
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- Online Spatio-Temporal Learning in Deep Neural Networks [1.6624384368855523]
Online learning has recently regained the attention of the research community, focusing on approaches that approximate BPTT or on biologically-plausible schemes applied to SNNs.
Here we present an alternative perspective that is based on a clear separation of spatial and temporal gradient components.
We derive from first principles a novel online learning algorithm for deep SNNs, called online spatio-temporal learning (OSTL).
For shallow networks, OSTL is gradient-equivalent to BPTT, enabling for the first time online training of SNNs with BPTT-equivalent gradients. In addition, the proposed formulation unveils a class of SNN architectures...
arXiv Detail & Related papers (2020-07-24T18:10:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)