STAER: Temporal Aligned Rehearsal for Continual Spiking Neural Network
- URL: http://arxiv.org/abs/2601.20870v1
- Date: Fri, 16 Jan 2026 09:54:10 GMT
- Title: STAER: Temporal Aligned Rehearsal for Continual Spiking Neural Network
- Authors: Matteo Gianferrari, Omayma Moussadek, Riccardo Salami, Cosimo Fiorini, Lorenzo Tartarini, Daniela Gandolfi, Simone Calderara
- Abstract summary: Spiking Neural Networks (SNNs) are inherently suited for continuous learning due to their event-driven temporal dynamics. We introduce Spiking Temporal Alignment with Experience Replay (STAER) to bridge the performance gap between SNNs and ANNs. Our approach integrates a differentiable Soft-DTW alignment loss to maintain spike timing fidelity and employs a temporal expansion and contraction mechanism on output logits to enforce robust representation learning.
- Score: 11.986684053664087
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Spiking Neural Networks (SNNs) are inherently suited for continuous learning due to their event-driven temporal dynamics; however, their application to Class-Incremental Learning (CIL) has been hindered by catastrophic forgetting and the temporal misalignment of spike patterns. In this work, we introduce Spiking Temporal Alignment with Experience Replay (STAER), a novel framework that explicitly preserves temporal structure to bridge the performance gap between SNNs and ANNs. Our approach integrates a differentiable Soft-DTW alignment loss to maintain spike timing fidelity and employs a temporal expansion and contraction mechanism on output logits to enforce robust representation learning. Implemented on a deep ResNet19 spiking backbone, STAER achieves state-of-the-art performance on Sequential-MNIST and Sequential-CIFAR10. Empirical results demonstrate that our method matches or outperforms strong ANN baselines (ER, DER++) while preserving biologically plausible dynamics. Ablation studies further confirm that explicit temporal alignment is critical for representational stability, positioning STAER as a scalable solution for spike-native lifelong learning. Code is available at https://github.com/matteogianferrari/staer.
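Since the listing gives only a high-level description of STAER, the sketch below illustrates the two mechanisms the abstract names: a differentiable Soft-DTW distance (soft-min dynamic programming, after Cuturi & Blondel) that could align the current model's per-step logits with logits stored in the replay buffer, and a simple linear-interpolation time warp standing in for the temporal expansion/contraction mechanism. All function names, shapes, and hyperparameters (`soft_dtw`, `time_warp`, `gamma`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: NOT the STAER implementation; a generic differentiable
# Soft-DTW plus a time-warp helper matching the mechanisms the abstract names.
import torch
import torch.nn.functional as F

def soft_min(a, b, c, gamma):
    # Differentiable soft minimum: -gamma * logsumexp(-x / gamma).
    vals = torch.stack([a, b, c])
    return -gamma * torch.logsumexp(-vals / gamma, dim=0)

def soft_dtw(x, y, gamma=0.1):
    """Soft-DTW between sequences x: (Tx, D) and y: (Ty, D)."""
    tx, ty = x.size(0), y.size(0)
    cost = torch.cdist(x, y) ** 2                  # pairwise squared distances
    inf = torch.tensor(float("inf"))
    # Plain Python DP table keeps autograd happy (no in-place tensor writes).
    r = [[inf] * (ty + 1) for _ in range(tx + 1)]
    r[0][0] = torch.tensor(0.0)
    for i in range(1, tx + 1):
        for j in range(1, ty + 1):
            r[i][j] = cost[i - 1, j - 1] + soft_min(
                r[i - 1][j], r[i][j - 1], r[i - 1][j - 1], gamma
            )
    return r[tx][ty]

def time_warp(logits, factor):
    """Temporal expansion (factor > 1) or contraction (< 1) of (T, C) logits."""
    t, c = logits.shape
    new_t = max(1, int(round(t * factor)))
    x = logits.t().unsqueeze(0)                    # (1, C, T) for interpolate
    x = F.interpolate(x, size=new_t, mode="linear", align_corners=False)
    return x.squeeze(0).t()                        # (new_t, C)

# Hypothetical rehearsal step: align current logits with warped buffer logits.
cur = torch.randn(8, 10, requires_grad=True)       # T=8 steps, 10 classes
old = time_warp(torch.randn(8, 10), factor=1.5)    # buffered logits, T=12
loss = soft_dtw(cur, old)
loss.backward()
```

Training against a warped copy would force representations that are stable under temporal rescaling, which is one plausible reading of the abstract's expansion/contraction mechanism.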
Related papers
- SpikingGamma: Surrogate-Gradient Free and Temporally Precise Online Training of Spiking Neural Networks with Smoothed Delays [1.5166105038254163]
Spiking Neural Networks (SNNs) promise energy-efficient, low-latency AI through sparse, event-driven computation. Yet, training SNNs under fine temporal discretization remains a major challenge, hindering both low-latency responsiveness and the mapping of software-trained SNNs to efficient hardware. We show that this SpikingGamma model supports direct error backpropagation without surrogate gradients, can learn fine temporal patterns with minimal spiking in an online manner, and can scale feedforward SNNs to complex tasks and benchmarks with competitive accuracy.
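The summary does not specify how the "smoothed delays" work. One common way to make discrete delays differentiable, sketched below, is to relax a hard time shift into a learnable Gaussian kernel over time bins, so gradients reach the delay parameter without surrogate spike gradients; the `SmoothedDelay` module, its parameters, and all shapes are illustrative assumptions, not the SpikingGamma model.

```python
# Illustrative sketch (not SpikingGamma): a hard synaptic delay relaxed into
# a smooth, learnable Gaussian kernel over discrete time bins.
import torch

class SmoothedDelay(torch.nn.Module):
    def __init__(self, n_syn, max_delay=16, sigma=1.0):
        super().__init__()
        self.delay = torch.nn.Parameter(torch.rand(n_syn) * max_delay)
        self.register_buffer("taps", torch.arange(max_delay, dtype=torch.float32))
        self.sigma = sigma

    def forward(self, spikes):          # spikes: (T, n_syn) in {0, 1}
        # (n_syn, max_delay) Gaussian kernel per synapse, normalized.
        k = torch.exp(-((self.taps - self.delay[:, None]) ** 2) / (2 * self.sigma**2))
        k = k / k.sum(dim=1, keepdim=True)
        # Causal convolution along time, one kernel per synapse.
        x = spikes.t().unsqueeze(0)     # (1, n_syn, T)
        x = torch.nn.functional.pad(x, (k.size(1) - 1, 0))
        out = torch.nn.functional.conv1d(x, k.flip(1).unsqueeze(1), groups=spikes.size(1))
        return out.squeeze(0).t()       # (T, n_syn) delayed, smoothed input

d = SmoothedDelay(n_syn=4)
y = d(torch.bernoulli(torch.full((20, 4), 0.2)))   # random spike train demo
```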
arXiv Detail & Related papers (2026-02-02T11:35:16Z)
- From Observations to States: Latent Time Series Forecasting [65.98504021691666]
We propose Latent Time Series Forecasting (LatentTSF), a novel paradigm that shifts TSF from observation regression to latent state prediction. Specifically, LatentTSF employs an AutoEncoder to project observations at each time step into a higher-dimensional latent state space. Our proposed latent objectives implicitly maximize mutual information between predicted latent states and ground-truth states and observations.
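A minimal sketch of the latent-forecasting paradigm as described: encode each observation into a higher-dimensional latent state, forecast future latents, and decode back. The encoder/decoder/GRU choices here are assumptions for illustration, not LatentTSF's architecture.

```python
# Illustrative latent-space forecaster (not the LatentTSF implementation).
import torch
import torch.nn as nn

class LatentForecaster(nn.Module):
    def __init__(self, obs_dim=1, latent_dim=32, horizon=12):
        super().__init__()
        self.enc = nn.Linear(obs_dim, latent_dim)       # per-step encoder
        self.dec = nn.Linear(latent_dim, obs_dim)       # per-step decoder
        self.pred = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.head = nn.Linear(latent_dim, latent_dim * horizon)
        self.horizon, self.latent_dim = horizon, latent_dim

    def forward(self, x):                               # x: (B, T, obs_dim)
        z = self.enc(x)                                 # (B, T, latent_dim)
        _, h = self.pred(z)                             # summary of history
        z_fut = self.head(h[-1]).view(-1, self.horizon, self.latent_dim)
        return self.dec(z_fut), z_fut                   # forecasts + latents

model = LatentForecaster()
y_hat, z_hat = model(torch.randn(8, 48, 1))
# Training would combine a latent prediction loss (e.g. MSE between z_hat and
# the encoder's latents for the true future) with observation reconstruction.
```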
arXiv Detail & Related papers (2026-01-30T20:39:44Z)
- Unleashing Temporal Capacity of Spiking Neural Networks through Spatiotemporal Separation [67.69345363409835]
Spiking Neural Networks (SNNs) are considered naturally suited for temporal processing, with membrane potential propagation widely regarded as the core temporal modeling mechanism. We design Non-Stateful (NS) models that progressively remove membrane propagation to isolate its stage-wise role. Experiments reveal a counterintuitive phenomenon: moderate removal in shallow layers improves performance, while excessive removal causes collapse.
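A toy LIF cell with a switch that drops membrane-state propagation illustrates what such a Non-Stateful ablation might look like; the flag, dynamics, and parameters below are guesses for illustration, not the paper's models.

```python
# Illustrative LIF cell with optional membrane-state propagation
# (forward pass only; real SNN training would need surrogate gradients).
import torch

class LIF(torch.nn.Module):
    def __init__(self, tau=2.0, v_th=1.0, stateful=True):
        super().__init__()
        self.tau, self.v_th, self.stateful = tau, v_th, stateful

    def forward(self, x):                     # x: (T, B, D) input currents
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.size(0)):
            v = v + (x[t] - v) / self.tau     # leaky integration
            s = (v >= self.v_th).float()
            v = v * (1.0 - s)                 # hard reset on spike
            if not self.stateful:
                v = torch.zeros_like(v)       # NS variant: drop membrane state
            spikes.append(s)
        return torch.stack(spikes)

out = LIF(stateful=False)(torch.rand(10, 2, 8))   # each step now independent
```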
arXiv Detail & Related papers (2025-12-05T07:05:53Z)
- PredNext: Explicit Cross-View Temporal Prediction for Unsupervised Learning in Spiking Neural Networks [70.1286354746363]
Spiking Neural Networks (SNNs) offer a natural platform for unsupervised representation learning. Current unsupervised SNNs employ shallow architectures or localized plasticity rules, limiting their ability to model long-range temporal dependencies. We propose PredNext, which explicitly models temporal relationships through cross-view future Step Prediction and Clip Prediction.
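One plausible form of cross-view future step prediction, sketched below: features of one augmented view at step t are regressed onto the other view's features k steps ahead. The predictor, stop-gradient target, and cosine loss are assumptions, not PredNext's actual objective.

```python
# Illustrative cross-view future-step prediction loss (not PredNext's code).
import torch
import torch.nn.functional as F

def step_prediction_loss(feat_a, feat_b, predictor, k=2):
    # feat_a, feat_b: (T, B, D) features from two views of the same clip.
    pred = predictor(feat_a[:-k])             # predict k steps ahead
    target = feat_b[k:].detach()              # stop-gradient target view
    return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()

T, B, D = 16, 8, 64
predictor = torch.nn.Linear(D, D)
loss = step_prediction_loss(torch.randn(T, B, D), torch.randn(T, B, D), predictor)
```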
arXiv Detail & Related papers (2025-09-29T14:27:58Z)
- Unsupervised Online 3D Instance Segmentation with Synthetic Sequences and Dynamic Loss [52.28880405119483]
Unsupervised online 3D instance segmentation is a fundamental yet challenging task. Existing methods, such as UNIT, have made progress in this direction but remain constrained by limited training diversity. We propose a new framework that enriches the training distribution through synthetic point cloud sequence generation.
arXiv Detail & Related papers (2025-09-27T08:53:27Z)
- STRAP: Spatio-Temporal Pattern Retrieval for Out-of-Distribution Generalization [29.10084723132903]
We propose an innovative Spatio-Temporal Retrieval-Augmented Pattern Learning framework, STRAP. During inference, STRAP retrieves relevant patterns from its pattern library based on similarity to the current input and injects them into the model via a plug-and-play prompting mechanism. Experiments across multiple real-world streaming graph datasets show that STRAP consistently outperforms state-of-the-art STGNN baselines on STOOD tasks.
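A minimal sketch of similarity-based retrieval with prompt injection; the cosine retrieval, `top_k`, and concatenation-style prompting below are illustrative assumptions rather than STRAP's implementation.

```python
# Illustrative pattern retrieval + prompting (not STRAP's code).
import torch
import torch.nn.functional as F

def retrieve_prompts(query, library, top_k=4):
    # query: (B, D) current input embedding; library: (N, D) stored patterns.
    sims = F.cosine_similarity(query.unsqueeze(1), library.unsqueeze(0), dim=-1)
    idx = sims.topk(top_k, dim=1).indices            # (B, top_k)
    return library[idx]                              # (B, top_k, D) prompts

prompts = retrieve_prompts(torch.randn(8, 128), torch.randn(256, 128))
# Plug-and-play use: prepend prompts to the model's token sequence, e.g.
# torch.cat([prompts, tokens], dim=1), leaving backbone weights frozen.
```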
arXiv Detail & Related papers (2025-05-26T06:11:05Z)
- StPR: Spatiotemporal Preservation and Routing for Exemplar-Free Video Class-Incremental Learning [79.44594332189018]
Class-Incremental Learning (CIL) seeks to develop models that continuously learn new action categories over time without forgetting previously acquired knowledge. Existing approaches either rely on stored exemplars, raising concerns over memory and privacy, or adapt static image-based methods that neglect temporal modeling. We propose a unified and exemplar-free VCIL framework that explicitly disentangles and preserves spatiotemporal information.
arXiv Detail & Related papers (2025-05-20T06:46:51Z)
- Learning Delays Through Gradients and Structure: Emergence of Spatiotemporal Patterns in Spiking Neural Networks [0.06752396542927405]
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays through two approaches: direct gradient-based delay optimization and a structural approach.
In the latter approach, the network selects and prunes connections, optimizing the delays in sparse connectivity settings.
Our results demonstrate the potential of combining delay learning with dynamic pruning to develop efficient SNN models for temporal data processing.
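A sketch of the select-and-prune step described above, assuming magnitude-based pruning with per-synapse delays; the criterion and masking scheme are illustrative, not necessarily the paper's.

```python
# Illustrative magnitude pruning over delayed connections (not the paper's code).
import torch

def prune_connections(weight, delay, sparsity=0.8):
    # Keep the largest-magnitude fraction of synapses; mask weights & delays.
    k = int(weight.numel() * (1.0 - sparsity))
    thresh = weight.abs().flatten().topk(k).values.min()
    mask = (weight.abs() >= thresh).float()
    return weight * mask, delay * mask, mask

w = torch.randn(64, 64)
d = torch.rand(64, 64) * 10.0     # per-synapse delays (in time steps)
w_s, d_s, mask = prune_connections(w, d)
# Subsequent gradient steps would update only surviving entries, e.g. by
# multiplying gradients with `mask` after backward(), so delays are
# optimized in the resulting sparse connectivity.
```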
arXiv Detail & Related papers (2024-07-07T11:55:48Z)
- TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling [54.97005925277638]
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
It remains a challenging task for state-of-the-art spiking neural networks (SNNs) to establish long-term temporal dependency between distant cues.
We propose a novel biologically inspired Two-Compartment Leaky Integrate-and-Fire spiking neuron model, dubbed TC-LIF.
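A generic two-compartment leaky integrate-and-fire update, written from the summary alone: a dendritic compartment integrates input plus somatic feedback and drives a somatic compartment that spikes and soft-resets. The coupling signs and constants below are assumptions; the paper derives its own parameterization.

```python
# Illustrative two-compartment LIF step (not the TC-LIF equations).
import torch

def tc_lif_step(u_d, u_s, x, lam=0.9, alpha=-0.3, beta=0.5, v_th=1.0):
    # u_d: dendritic potential, u_s: somatic potential, x: input current.
    u_d = lam * u_d + alpha * u_s + x   # dendrite: leak + somatic feedback
    u_s = lam * u_s + beta * u_d        # soma: leak + dendritic drive
    spike = (u_s >= v_th).float()
    u_s = u_s - spike * v_th            # soft reset on spike
    return u_d, u_s, spike

u_d = u_s = torch.zeros(4)
for t in range(20):
    u_d, u_s, s = tc_lif_step(u_d, u_s, torch.rand(4) * 0.3)
```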
arXiv Detail & Related papers (2023-08-25T08:54:41Z)
- Temporal Contrastive Learning for Spiking Neural Networks [23.963069990569714]
Biologically inspired spiking neural networks (SNNs) have garnered considerable attention due to their low energy consumption and temporal information processing capabilities.
We propose a novel method to obtain SNNs with low latency and high performance by incorporating contrastive supervision with temporal domain information.
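An InfoNCE-style reading of "contrastive supervision with temporal domain information", treating the same sample's features at two time steps as a positive pair and other batch items as negatives; this construction is an assumption, not necessarily the paper's loss.

```python
# Illustrative temporal contrastive loss for per-step SNN features.
import torch
import torch.nn.functional as F

def temporal_info_nce(feats, t1, t2, temperature=0.1):
    # feats: (T, B, D) per-step features; steps t1 and t2 form positives.
    z1 = F.normalize(feats[t1], dim=-1)             # (B, D)
    z2 = F.normalize(feats[t2], dim=-1)
    logits = z1 @ z2.t() / temperature              # (B, B) similarities
    labels = torch.arange(z1.size(0))               # diagonal = positives
    return F.cross_entropy(logits, labels)

loss = temporal_info_nce(torch.randn(6, 32, 128), t1=1, t2=4)
```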
arXiv Detail & Related papers (2023-05-23T10:31:46Z)