SAFA-SNN: Sparsity-Aware On-Device Few-Shot Class-Incremental Learning with Fast-Adaptive Structure of Spiking Neural Network
- URL: http://arxiv.org/abs/2510.03648v1
- Date: Sat, 04 Oct 2025 03:21:31 GMT
- Title: SAFA-SNN: Sparsity-Aware On-Device Few-Shot Class-Incremental Learning with Fast-Adaptive Structure of Spiking Neural Network
- Authors: Huijing Zhang, Muyang Cao, Linshan Jiang, Xin Du, Di Yu, Changze Lv, Shuiguang Deng
- Abstract summary: Continuous learning of novel classes is crucial for edge devices to preserve data privacy and maintain reliable performance in dynamic environments. In this work, we present an SNN-based method for on-device FSCIL, i.e., the Sparsity-Aware and Fast Adaptive SNN (SAFA-SNN).
- Score: 19.73335869722781
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous learning of novel classes is crucial for edge devices to preserve data privacy and maintain reliable performance in dynamic environments. However, the scenario becomes particularly challenging when data samples are insufficient, requiring on-device few-shot class-incremental learning (FSCIL) to maintain consistent model performance. Although existing work has explored parameter-efficient FSCIL frameworks based on artificial neural networks (ANNs), their deployment is still fundamentally constrained by limited device resources. Inspired by neural mechanisms, spiking neural networks (SNNs) process spatiotemporal information efficiently, offering lower energy consumption, greater biological plausibility, and better compatibility with neuromorphic hardware than ANNs. In this work, we present an SNN-based method for on-device FSCIL, i.e., the Sparsity-Aware and Fast Adaptive SNN (SAFA-SNN). We first propose sparsity-conditioned neuronal dynamics, in which most neurons remain stable while a subset stays active, thereby mitigating catastrophic forgetting. To cope with the non-differentiability of spikes in gradient estimation, we employ zeroth-order optimization. Moreover, during incremental learning sessions, we enhance the discriminability of new classes through subspace projection, which alleviates overfitting to novel classes. Extensive experiments on two standard benchmark datasets (CIFAR100 and Mini-ImageNet) and three neuromorphic datasets (CIFAR-10-DVS, DVS128Gesture, and N-Caltech101) demonstrate that SAFA-SNN outperforms baseline methods, achieving at least a 4.01% improvement in the last incremental session on Mini-ImageNet and 20% lower energy cost than baseline methods in a practical implementation.
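The zeroth-order step can be pictured as a two-point finite-difference probe of the loss, which sidesteps differentiating through the spike function entirely. A minimal sketch, assuming a Gaussian-smoothing (SPSA-style) estimator; loss_fn, the perturbation scale mu, and the update line are illustrative, not taken from the paper:

import torch

def spsa_grad(params, loss_fn, mu=1e-2):
    # Two-point zeroth-order gradient estimate: probe the loss at two
    # perturbed points instead of differentiating the spike step.
    u = torch.randn_like(params)              # random direction
    loss_plus = loss_fn(params + mu * u)      # forward passes only
    loss_minus = loss_fn(params - mu * u)
    return (loss_plus - loss_minus) / (2 * mu) * u

# Hypothetical usage: loss_fn runs the SNN forward pass and returns a scalar.
# params.data -= lr * spsa_grad(params.data, loss_fn)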
Related papers
- Energy-Aware Spike Budgeting for Continual Learning in Spiking Neural Networks for Neuromorphic Vision [0.0]
Neuromorphic vision systems based on spiking neural networks (SNNs) offer ultra-low-power perception for event-based and frame-based cameras. Existing continual learning methods, developed primarily for artificial neural networks, seldom jointly optimize accuracy and energy efficiency. We propose an energy-aware spike budgeting framework for continual SNN learning that integrates experience replay, learnable leaky integrate-and-fire neuron parameters, and an adaptive spike scheduler to enforce dataset-specific energy constraints.
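The summary does not detail the scheduler, but the budget idea can be sketched as a penalty on average spike counts; the tensor shapes and the name budget_penalty are assumptions for illustration only:

import torch

def budget_penalty(spike_counts, budget, weight=1.0):
    # Penalize only the portion of the average per-sample spike count
    # that exceeds a dataset-specific energy budget.
    excess = torch.clamp(spike_counts.float().mean() - budget, min=0.0)
    return weight * excess

# Hypothetical usage: spikes is a [T, B, N] binary tensor from the SNN.
# loss = task_loss + budget_penalty(spikes.sum(dim=(0, 2)), budget=5e4)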
arXiv Detail & Related papers (2026-02-12T18:15:32Z) - Learning Neuron Dynamics within Deep Spiking Neural Networks [0.09558392439655011]
Spiking Neural Networks (SNNs) offer a promising energy-efficient alternative to Artificial Neural Networks (ANNs). Deep SNNs remain limited by their reliance on simple neuron models, such as the Leaky Integrate-and-Fire (LIF) model. LNMs are a general, parametric formulation of non-linear integrate-and-fire dynamics whose neuron behavior is learned during training.
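As a rough illustration of learnable neuron dynamics, here is a minimal LIF cell whose leak and threshold are trained alongside the weights; the paper's LNMs are a more general non-linear formulation, so this is only a sketch:

import torch
import torch.nn as nn

class ParametricLIF(nn.Module):
    # LIF cell with a learnable time constant and firing threshold.
    def __init__(self, n, tau_init=2.0, v_th_init=1.0):
        super().__init__()
        self.log_tau = nn.Parameter(torch.full((n,), tau_init).log())
        self.v_th = nn.Parameter(torch.full((n,), v_th_init))

    def forward(self, current, v):
        v = v + (current - v) / self.log_tau.exp()   # leaky integration
        spike = (v >= self.v_th).float()             # non-differentiable fire
        return spike, v * (1.0 - spike)              # hard reset

# In practice the threshold comparison needs a surrogate gradient to train.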
arXiv Detail & Related papers (2025-10-07T18:16:57Z) - Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware. Most continuous control algorithms, however, are designed for Artificial Neural Networks (ANNs). We show that this mismatch destabilizes SNN training and degrades performance. We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous control algorithms.
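The proxy-target construction itself is not described in the summary; the sketch below only shows the discrete-to-continuous decoding step that creates the mismatch, with all names hypothetical:

import torch

def decode_action(spikes, low, high):
    # Map binary spike counts over a time window to a bounded
    # continuous action, e.g. a joint torque for a control task.
    rate = spikes.float().mean(dim=0)        # [B, action_dim] in [0, 1]
    return low + rate * (high - low)

# spikes: [T, B, action_dim] output spike trains of the SNN policy.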
arXiv Detail & Related papers (2025-05-30T03:08:03Z) - Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning [16.156610945877986]
We propose a few-shot learning framework based on Spiking Neural Networks (SNNs). We apply a combination of the temporal efficient training (TET) loss and the InfoNCE loss to optimize the temporal dynamics of spike trains and enhance their discriminative power.
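The temporal efficient training (TET) loss is commonly defined as the cross-entropy averaged over timesteps; a minimal sketch of that term (the contrastive InfoNCE term is omitted here):

import torch
import torch.nn.functional as F

def tet_loss(outputs, target):
    # outputs: [T, B, C] per-timestep logits; target: [B] labels.
    # Supervising every timestep keeps the spike dynamics informative.
    return torch.stack([F.cross_entropy(o, target) for o in outputs]).mean()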
arXiv Detail & Related papers (2025-05-12T16:51:08Z) - ALADE-SNN: Adaptive Logit Alignment in Dynamically Expandable Spiking Neural Networks for Class Incremental Learning [15.022211557367273]
We develop spiking neural networks (SNNs) with dynamic structures for Class Incremental Learning (CIL). We propose the ALADE-SNN framework, which includes adaptive logit alignment for balanced feature representation and OtoN suppression to manage the weights mapping frozen old features to new classes during training. Experiment results show that ALADE-SNN achieves an average incremental accuracy of 75.42% on the CIFAR100-B0 benchmark over 10 incremental steps.
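Based only on the summary's description, OtoN suppression can be pictured as zeroing the classifier block that routes frozen old features to new-class logits; the weight layout below is an assumption for illustration:

import torch

def oton_suppress(weight, n_old_feat, n_old_cls):
    # weight: [n_classes, n_features] classifier matrix.
    # Zero the block mapping frozen old features to new-class logits.
    w = weight.clone()
    w[n_old_cls:, :n_old_feat] = 0.0
    return w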
arXiv Detail & Related papers (2024-12-17T09:13:22Z) - Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency in neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
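HSD's exact hybrid scheme is not spelled out in the summary; the generic per-timestep distillation idea it builds on looks roughly like this, with the temperature and all names illustrative:

import torch
import torch.nn.functional as F

def stepwise_kd(student_logits, teacher_logits, temp=4.0):
    # Distill one static teacher output into every SNN timestep.
    losses = []
    for s in student_logits:                          # s: [B, C] per step
        p_teacher = F.softmax(teacher_logits / temp, dim=1)
        losses.append(F.kl_div(F.log_softmax(s / temp, dim=1),
                               p_teacher, reduction="batchmean") * temp ** 2)
    return torch.stack(losses).mean()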
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
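A minimal usage sketch, assuming a recent SpikingJelly release where the activation_based namespace, LIFNode, and reset_net are available:

import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate, functional

# Tiny rate-coded SNN classifier run for 4 timesteps.
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 10),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
)

x = torch.rand(8, 1, 28, 28)
out = torch.stack([net(x) for _ in range(4)]).mean(0)  # average spike rate
functional.reset_net(net)  # clear membrane state before the next batch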
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - KLIF: An optimized spiking neuron unit for tuning surrogate gradient slope and membrane potential [0.0]
Spiking neural networks (SNNs) have attracted much attention due to their ability to process temporal information.
It is still challenging to develop efficient and high-performing learning algorithms for SNNs.
We propose KLIF, a novel k-based Leaky Integrate-and-Fire neuron model, to improve the learning ability of SNNs.
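The summary gives only the role of k; a plausible reading, sketched under that assumption (not KLIF's exact formulation), is a learnable k that rescales the membrane potential and hence the effective surrogate gradient slope:

import torch
import torch.nn as nn

class KScaledLIF(nn.Module):
    # Illustrative only: a learnable k rescales the membrane potential
    # before thresholding, which also rescales the surrogate gradient.
    def __init__(self, tau=2.0):
        super().__init__()
        self.k = nn.Parameter(torch.ones(1))
        self.tau = tau

    def forward(self, current, v):
        v = v + (current - v) / self.tau
        spike = (self.k * v >= 1.0).float()
        return spike, v * (1.0 - spike)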
arXiv Detail & Related papers (2023-02-18T05:18:18Z) - PC-SNN: Predictive Coding-based Local Hebbian Plasticity Learning in Spiking Neural Networks [9.026880521552153]
Spiking Neural Networks (SNNs) emulate the brain's information processing with unparalleled biological plausibility. We propose PC-SNN, a novel learning framework that integrates predictive coding with SNNs to enable biologically plausible, local Hebbian plasticity.
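A generic local update in the predictive-coding spirit, where each weight change uses only presynaptic spikes and a locally available error signal; the rule below is illustrative, not PC-SNN's exact update:

import torch

def local_hebbian_step(w, pre_spikes, local_error, lr=1e-3):
    # w: [n_out, n_in]; pre_spikes: [B, n_in]; local_error: [B, n_out].
    # No globally backpropagated gradient is required.
    return w + lr * local_error.t() @ pre_spikes / pre_spikes.shape[0]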
arXiv Detail & Related papers (2022-11-24T09:56:02Z) - Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
The Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is challenging to train SNNs efficiently due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
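DSR differentiates through a continuous representation of the spike train rather than through the spike step itself; the straight-through sketch below conveys only that spirit and is not the paper's construction:

import torch

class SpikeThroughRepresentation(torch.autograd.Function):
    # Forward: emit binary spikes. Backward: route the gradient through
    # the underlying continuous value instead of the hard threshold.
    @staticmethod
    def forward(ctx, v):
        return (v >= 1.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

spike = SpikeThroughRepresentation.apply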
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
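For background, the classic mean-field limit for a two-layer network (the paper extends this picture to deep architectures via measures over features) reads:

% Width-m network as an empirical average over unit parameters; as
% m -> infinity it converges to an integral against a distribution rho.
f(x) \;=\; \frac{1}{m}\sum_{i=1}^{m} \sigma(\langle \theta_i, x\rangle)
\;\xrightarrow[m\to\infty]{}\; \int \sigma(\langle \theta, x\rangle)\,\mathrm{d}\rho(\theta)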
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
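Conversion methods rest on integrate-and-fire rates approximating ReLU activations; a minimal sketch of that correspondence (threshold balancing), not the paper's progressive tandem scheme:

import torch

def if_rate(current, v_th=1.0, T=100):
    # Constant-input IF neuron with soft reset: the firing rate over T
    # steps approaches min(current / v_th, 1), i.e. a scaled ReLU.
    v = torch.zeros_like(current)
    spikes = torch.zeros_like(current)
    for _ in range(T):
        v = v + current
        s = (v >= v_th).float()
        v = v - s * v_th          # soft reset preserves residual charge
        spikes = spikes + s
    return spikes / T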
arXiv Detail & Related papers (2020-07-02T15:38:44Z)