HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
- URL: http://arxiv.org/abs/2308.10373v4
- Date: Tue, 04 Mar 2025 01:24:52 GMT
- Title: HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
- Authors: Hejia Geng, Peng Li
- Abstract summary: Spiking neural networks (SNNs) offer a promising neurally-inspired model of computation. We present the first study that draws inspiration from neural homeostasis to design a threshold-adapting leaky integrate-and-fire neuron model.
- Score: 4.223946773134886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While spiking neural networks (SNNs) offer a promising neurally-inspired model of computation, they are vulnerable to adversarial attacks. We present the first study that draws inspiration from neural homeostasis to design a threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model, and we use TA-LIF neurons to construct adversarially robust homeostatic SNNs (HoSNNs). The TA-LIF model incorporates a self-stabilizing dynamic thresholding mechanism, offering a local feedback control solution that minimizes each neuron's membrane potential error caused by adversarial disturbance. Theoretical analysis demonstrates favorable dynamic properties of TA-LIF neurons in terms of bounded-input bounded-output stability and suppressed time growth of membrane potential error, underscoring their superior robustness compared with standard LIF neurons. When trained with weak FGSM attacks (attack budget = 2/255) and tested with much stronger PGD attacks (attack budget = 8/255), our HoSNNs significantly improve accuracy over conventional LIF-based SNNs on several datasets: from 30.54% to 74.91% on FashionMNIST, from 0.44% to 35.06% on SVHN, from 0.56% to 42.63% on CIFAR10, and from 0.04% to 16.66% on CIFAR100.
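The abstract does not give the TA-LIF update equations, so the sketch below only illustrates the described ingredients: leaky integration plus a self-stabilizing, adaptive firing threshold. The function name `ta_lif_step`, the threshold dynamics, and all constants are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def ta_lif_step(v, theta, i_in, v_rest=0.0, tau_m=10.0, tau_theta=50.0,
                alpha=0.1, theta0=1.0, dt=1.0):
    """One Euler step of a hypothetical threshold-adapting LIF neuron.

    The threshold theta relaxes toward its baseline theta0 and rises with
    spiking activity, standing in for the paper's local feedback control
    that damps membrane-potential error under adversarial disturbance.
    """
    v = v + (dt / tau_m) * (-(v - v_rest) + i_in)   # leaky integration
    spike = (v >= theta).astype(float)              # fire on threshold crossing
    v = np.where(spike > 0, v_rest, v)              # hard reset after a spike
    theta = theta + (dt / tau_theta) * (theta0 - theta) + alpha * spike
    return v, theta, spike

# usage: a small population under a noisy step input
v, theta = np.zeros(4), np.full(4, 1.0)
for _ in range(100):
    v, theta, s = ta_lif_step(v, theta, i_in=1.5 + 0.3 * np.random.randn(4))
```

The intuition mirrors the abstract: a perturbed neuron that fires too often drives its own threshold up, which pushes its firing rate, and hence its membrane-potential error, back toward the unperturbed regime.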
Related papers
- Robust Spiking Neural Networks Against Adversarial Attacks [49.08210314590693]
Spiking Neural Networks (SNNs) represent a promising paradigm for energy-efficient neuromorphic computing. In this study, we theoretically demonstrate that threshold-neighboring spiking neurons are the key factors limiting the robustness of directly trained SNNs. We find that these neurons set the upper limits for the maximum potential strength of adversarial attacks and are prone to state-flipping under minor disturbances.
arXiv Detail & Related papers (2026-02-24T05:06:12Z) - General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from the neuron's input-output history to modulate its membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, and it aligns with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
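The abstract describes the prediction current only at a high level; the sketch below assumes a simple linear predictor over a short input-output history, and the function name, buffer layout, and weights are all hypothetical.

```python
import numpy as np

def self_pred_lif_step(v, hist, i_in, w_pred, tau_m=10.0, theta=1.0, dt=1.0):
    """LIF step with an additive self-prediction current (illustrative).

    hist is a flat buffer of recent (input, spike) pairs and w_pred a
    weight vector of the same length; their dot product yields a
    prediction current that modulates the membrane potential, loosely
    following the abstract's description.
    """
    i_pred = float(w_pred @ hist)                      # internal prediction current
    v = v + (dt / tau_m) * (-v + i_in + i_pred)        # modulated integration
    spike = float(v >= theta)
    v = (1.0 - spike) * v                              # reset on spike
    hist = np.concatenate(([i_in, spike], hist[:-2]))  # record newest pair
    return v, hist, spike
```

In this stand-in, i_pred depends continuously on the history buffer, which loosely mirrors the continuous gradient path the summary highlights.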
arXiv Detail & Related papers (2026-01-29T15:08:48Z) - Discretized Quadratic Integrate-and-Fire Neuron Model for Deep Spiking Neural Networks [0.08749675983608168]
Spiking Neural Networks (SNNs) have emerged as energy-efficient alternatives to traditional artificial neural networks. We propose the first discretization of the Quadratic Integrate-and-Fire (QIF) neuron model tailored for high-performance deep spiking neural networks.
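The continuous QIF dynamics themselves are standard, dv/dt = a(v - v_rest)(v - v_crit) + I; a plain forward-Euler discretization is sketched below for reference. The paper proposes its own discretization tailored for deep SNNs, which this sketch does not claim to reproduce, and all constants here are illustrative.

```python
import numpy as np

def qif_step(v, i_in, a=0.04, v_rest=-65.0, v_crit=-50.0,
             v_peak=30.0, v_reset=-65.0, dt=0.1):
    """Forward-Euler step of a quadratic integrate-and-fire (QIF) neuron.

    Integrates dv/dt = a * (v - v_rest) * (v - v_crit) + I, emits a spike
    when v reaches v_peak, and resets the membrane potential.
    """
    v = v + dt * (a * (v - v_rest) * (v - v_crit) + i_in)
    spike = v >= v_peak
    v = np.where(spike, v_reset, v)
    return v, spike
```

Unlike the LIF model, the quadratic term gives a soft spike-initiation dynamic: between v_rest and v_crit the potential decays back toward rest, while above v_crit it accelerates toward v_peak.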
arXiv Detail & Related papers (2025-10-05T02:30:10Z) - Incorporating the Refractory Period into Spiking Neural Networks through Spike-Triggered Threshold Dynamics [16.273350447266132]
We propose a method to incorporate the refractory period into spiking LIF neurons through spike-triggered threshold dynamics. RPLIF achieves state-of-the-art performance on CIFAR10-DVS (82.40%) and N-Caltech101 (83.35%) with fewer timesteps and demonstrates superior performance on DVS128 Gesture (97.22%) at low latency.
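Spike-triggered threshold dynamics are easy to express in a discrete-time simulation; the sketch below is a generic version of the idea, with illustrative constants rather than the paper's RPLIF parameterization.

```python
import numpy as np

def rplif_step(v, theta, i_in, tau_m=10.0, tau_ref=5.0,
               theta0=1.0, jump=10.0, dt=1.0):
    """LIF step with a spike-triggered threshold jump (illustrative).

    After a spike, theta jumps by `jump` and then decays back to its
    baseline theta0 with time constant tau_ref, so the neuron is
    effectively unable to fire for the next few steps, emulating a
    refractory period without explicitly clamping the membrane potential.
    """
    v = v + (dt / tau_m) * (-v + i_in)                 # leaky integration
    spike = (v >= theta).astype(float)
    v = (1.0 - spike) * v                              # reset on spike
    theta = theta0 + (theta - theta0) * (1.0 - dt / tau_ref) + jump * spike
    return v, theta, spike
```

Raising the threshold instead of gating the input keeps the neuron state update in the same algebraic form as a plain LIF step, which is convenient for surrogate-gradient training.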
arXiv Detail & Related papers (2025-09-22T13:33:31Z) - Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware. However, most continuous control algorithms are designed for Artificial Neural Networks (ANNs), and we show that this mismatch destabilizes SNN training and degrades performance. We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous control algorithms.
arXiv Detail & Related papers (2025-05-30T03:08:03Z) - Robust Stable Spiking Neural Networks [45.84535743722043]
Spiking neural networks (SNNs) are gaining popularity in deep learning due to their low energy budget on neuromorphic hardware.
Many studies have been conducted to defend SNNs against adversarial attacks.
This paper aims to uncover the robustness of SNNs through the lens of the stability of nonlinear systems.
arXiv Detail & Related papers (2024-05-31T08:40:02Z) - Fully Spiking Denoising Diffusion Implicit Models [61.32076130121347]
Spiking neural networks (SNNs) have garnered considerable attention owing to their ability to run on neuromorphic devices at very high speeds.
We propose a novel approach, the fully spiking denoising diffusion implicit model (FSDDIM), to construct a diffusion model within SNNs.
We demonstrate that the proposed method outperforms the state-of-the-art fully spiking generative model.
arXiv Detail & Related papers (2023-12-04T09:07:09Z) - Adaptive Sparse Structure Development with Pruning and Regeneration for Spiking Neural Networks [6.760855795263126]
Spiking Neural Networks (SNNs) have the natural advantage of drawing on the sparse structural plasticity of brain development to alleviate the energy problems of deep neural networks.
This paper proposes a novel method for the adaptive structural development of SNNs, introducing a dendritic spine plasticity-based synaptic constraint, neuronal pruning, and synaptic regeneration.
arXiv Detail & Related papers (2022-11-22T12:23:30Z) - Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - SIT: A Bionic and Non-Linear Neuron for Spiking Neural Network [12.237928453571636]
Spiking Neural Networks (SNNs) have piqued researchers' interest because of their capacity to process temporal information and their low power consumption.
Current state-of-the-art methods are limited in biological plausibility and performance because their neurons are generally built on the simple Leaky Integrate-and-Fire (LIF) model.
Due to the high level of dynamic complexity, modern neuron models have seldom been implemented in SNN practice.
arXiv Detail & Related papers (2022-03-30T07:50:44Z) - Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield improved classification accuracy of up to 13.7%.
Our models also outperform inherently robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images.
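The exact noise-crafting procedure of HIRE-SNN is not reproduced here; as a stand-in, the sketch below crafts a single FGSM-style gradient-sign perturbation of each training batch, assuming `model` maps an input batch to classification logits (for an SNN, typically rate-averaged over timesteps).

```python
import torch
import torch.nn.functional as F

def crafted_noise_inputs(model, x, y, eps=2 / 255):
    """Craft input noise for robust training (illustrative FGSM stand-in).

    Perturbs the batch x in the direction of the loss gradient's sign,
    clamps to the valid pixel range, and returns the noisy inputs to be
    used in place of the clean ones for the training step.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Training on such perturbed inputs is a generic robustness recipe; HIRE-SNN's specific contribution is crafting the noise so that it incurs no additional training time, which this sketch does not capture.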
arXiv Detail & Related papers (2021-10-06T16:48:48Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective [91.5105021619887]
Batch normalization (BN) has been widely used in modern deep neural networks (DNNs).
BN is observed to increase model accuracy at the cost of adversarial robustness.
It remains unclear whether BN mainly favors learning robust features (RFs) or non-robust features (NRFs).
arXiv Detail & Related papers (2020-10-07T10:24:33Z) - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.