Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
- URL: http://arxiv.org/abs/2509.23762v2
- Date: Mon, 06 Oct 2025 22:07:17 GMT
- Title: Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
- Authors: Nhan T. Luu
- Abstract summary: Spiking Neural Networks (SNNs) have attracted growing interest in both computational neuroscience and artificial intelligence. Recent studies have proposed leveraging sparse gradients as a form of regularization to enhance robustness against adversarial perturbations. We present a surprising finding: under specific architectural configurations, SNNs exhibit natural gradient sparsity and can achieve state-of-the-art adversarial defense performance without the need for any explicit regularization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) have attracted growing interest in both computational neuroscience and artificial intelligence, primarily due to their inherent energy efficiency and compact memory footprint. However, achieving adversarial robustness in SNNs, particularly for vision-related tasks, remains a nascent and underexplored challenge. Recent studies have proposed leveraging sparse gradients as a form of regularization to enhance robustness against adversarial perturbations. In this work, we present a surprising finding: under specific architectural configurations, SNNs exhibit natural gradient sparsity and can achieve state-of-the-art adversarial defense performance without the need for any explicit regularization. Further analysis reveals a trade-off between robustness and generalization: while sparse gradients contribute to improved adversarial resilience, they can impair the model's ability to generalize; conversely, denser gradients support better generalization but increase vulnerability to attacks.
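The robustness-generalization trade-off hinges on how sparse a model's input gradient is. A simple diagnostic is the fraction of near-zero gradient entries; the sketch below is illustrative only (the function name, the `eps` threshold, and the toy arrays are assumptions, not from the paper):

```python
import numpy as np

def gradient_sparsity(grad: np.ndarray, eps: float = 1e-6) -> float:
    """Fraction of gradient entries with magnitude below eps."""
    return float(np.mean(np.abs(grad) < eps))

# A sparse gradient (many exact zeros, as spiking activations often produce)
sparse_g = np.array([0.0, 0.0, 0.5, 0.0, -0.2, 0.0])
# A dense gradient of similar scale
dense_g = np.array([0.1, -0.3, 0.5, 0.2, -0.2, 0.4])

print(gradient_sparsity(sparse_g))  # 0.6666666666666666
print(gradient_sparsity(dense_g))   # 0.0
```

In the paper's terms, a higher value of this metric correlates with stronger adversarial resilience but weaker generalization, and vice versa.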
Related papers
- Robust Spiking Neural Networks Against Adversarial Attacks [49.08210314590693]
Spiking Neural Networks (SNNs) represent a promising paradigm for energy-efficient neuromorphic computing. In this study, we theoretically demonstrate that threshold-neighboring spiking neurons are the key factors limiting the robustness of directly trained SNNs. We find that these neurons set the upper limits for the maximum potential strength of adversarial attacks and are prone to state-flipping under minor disturbances.
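The "threshold-neighboring" neurons this abstract identifies can be counted with a one-line diagnostic: how many membrane potentials sit within a small margin of the firing threshold, and are therefore flippable by a tiny perturbation. This is a toy sketch; the function name, `v_th`, and `delta` are illustrative assumptions:

```python
import numpy as np

def threshold_neighboring_fraction(v: np.ndarray, v_th: float = 1.0,
                                   delta: float = 0.1) -> float:
    """Fraction of neurons whose membrane potential lies within delta of
    the firing threshold -- the neurons most prone to state-flipping."""
    return float(np.mean(np.abs(v - v_th) < delta))

v = np.array([0.2, 0.95, 1.05, 0.5, 0.99])
print(threshold_neighboring_fraction(v))  # 0.6 (three of five near threshold)
```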
arXiv Detail & Related papers (2026-02-24T05:06:12Z) - General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from its input-output history to modulate membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, while also aligning with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
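A minimal reading of the self-prediction idea: a leaky integrate-and-fire neuron that adds a prediction current derived from exponential moving averages of its past inputs and spikes. This is a toy stand-in, not the paper's method; the EMA decay, `alpha`, and reset rule are all illustrative assumptions:

```python
def lif_self_pred(inputs, tau=2.0, v_th=1.0, alpha=0.5):
    """Leaky integrate-and-fire with a hypothetical self-prediction current:
    alpha * (EMA of past inputs - EMA of past spikes) is added to the drive."""
    v, pred_in, pred_out = 0.0, 0.0, 0.0
    spikes = []
    for x in inputs:
        i_pred = alpha * (pred_in - pred_out)  # prediction current (assumed form)
        v = v + (x + i_pred - v) / tau         # leaky integration
        s = 1.0 if v >= v_th else 0.0          # threshold crossing
        v = v * (1.0 - s)                      # hard reset on spike
        pred_in = 0.9 * pred_in + 0.1 * x      # input history EMA
        pred_out = 0.9 * pred_out + 0.1 * s    # output history EMA
        spikes.append(s)
    return spikes

spikes = lif_self_pred([0.5, 1.5, 0.2, 2.0, 0.0])
print(spikes)
```

In training, the smooth prediction current gives the backward pass a continuous path even where the spike nonlinearity is flat.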
arXiv Detail & Related papers (2026-01-29T15:08:48Z) - Towards Reliable Evaluation of Adversarial Robustness for Spiking Neural Networks [12.939513095038977]
Spiking Neural Networks (SNNs) utilize spike-based activations to mimic the brain's energy-efficient information processing. We propose a more reliable framework for evaluating SNN adversarial robustness.
arXiv Detail & Related papers (2025-12-27T08:43:06Z) - Spiking Meets Attention: Efficient Remote Sensing Image Super-Resolution with Attention Spiking Neural Networks [86.28783985254431]
Spiking neural networks (SNNs) are emerging as a promising alternative to traditional artificial neural networks (ANNs). We propose SpikeSR, which achieves state-of-the-art performance across various remote sensing benchmarks such as AID, DOTA, and DIOR.
arXiv Detail & Related papers (2025-03-06T09:06:06Z) - Temporal Reversal Regularization for Spiking Neural Networks: Hybrid Spatio-Temporal Invariance for Generalization [3.7748662901422807]
Spiking neural networks (SNNs) have received widespread attention as an ultra-low power computing paradigm. Recent studies have shown that SNNs suffer from severe overfitting, which limits their generalization performance. We propose a simple yet effective Temporal Reversal Regularization to mitigate overfitting during training and facilitate generalization of SNNs.
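One plausible form of a temporal reversal penalty: require the model's output on a spike sequence to agree with its output on the time-reversed copy. This is a toy reading of the idea, not the paper's exact loss; the model and data below are illustrative:

```python
import numpy as np

def temporal_reversal_penalty(f, x_t: np.ndarray) -> float:
    """Mean squared disagreement between a model's outputs on a (T, D)
    spike sequence and on its time-reversed copy (assumed penalty form)."""
    y_fwd = f(x_t)
    y_rev = f(x_t[::-1])
    return float(np.mean((y_fwd - y_rev) ** 2))

# Toy "model": mean spike rate per feature over time. It is order-invariant,
# so the penalty is zero by construction.
f = lambda seq: seq.mean(axis=0)
x = np.random.default_rng(0).integers(0, 2, size=(8, 4)).astype(float)
print(temporal_reversal_penalty(f, x))  # 0.0
```

A time-sensitive model (e.g. one weighting late time steps more) would incur a nonzero penalty, which is what the regularizer pushes against.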
arXiv Detail & Related papers (2024-08-17T06:23:38Z) - Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking [6.189613073024831]
This study introduces an innovative Local Feature Masking (LFM) strategy aimed at fortifying the performance of Convolutional Neural Networks (CNNs).
During the training phase, we strategically incorporate random feature masking in the shallow layers of CNNs.
LFM compels the network to adapt by leveraging remaining features to compensate for the absence of certain semantic features.
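The masking step described above can be sketched as zeroing a random spatial patch of a shallow-layer activation map during training. This is a minimal numpy sketch under assumptions of my own (patch size, placement policy, and NCHW layout are illustrative, not the paper's exact scheme):

```python
import numpy as np

def local_feature_mask(feat: np.ndarray, mask_h: int = 2, mask_w: int = 2,
                       rng=None) -> np.ndarray:
    """Zero one random (mask_h, mask_w) spatial patch of an (N, C, H, W)
    activation map, forcing downstream layers to rely on other features."""
    rng = rng or np.random.default_rng()
    n, c, h, w = feat.shape
    y0 = int(rng.integers(0, h - mask_h + 1))
    x0 = int(rng.integers(0, w - mask_w + 1))
    out = feat.copy()
    out[:, :, y0:y0 + mask_h, x0:x0 + mask_w] = 0.0
    return out

feat = np.ones((1, 3, 8, 8))
masked = local_feature_mask(feat, rng=np.random.default_rng(0))
print(int((masked == 0).sum()))  # 12 zeroed entries: 1 * 3 * 2 * 2
```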
arXiv Detail & Related papers (2024-07-18T16:25:16Z) - Understanding the Robustness of Graph Neural Networks against Adversarial Attacks [14.89001880258583]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial attacks. This vulnerability has spurred a growing focus on designing robust GNNs. We conduct the first large-scale systematic study on the adversarial robustness of GNNs.
arXiv Detail & Related papers (2024-06-20T01:24:18Z) - Enhancing Adversarial Robustness in SNNs with Sparse Gradients [46.15229142258264]
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures.
Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks.
We propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization.
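Gradient sparsity regularization of the kind this abstract describes typically adds an L1 penalty on the input gradient to the task loss. The sketch below estimates that gradient with central finite differences so it stays self-contained; real training would use autodiff, and the toy loss, `lam`, and `h` are assumptions:

```python
import numpy as np

def sparsity_regularized_loss(loss_fn, x: np.ndarray,
                              lam: float = 0.01, h: float = 1e-4) -> float:
    """Task loss plus lam * L1-norm of the input gradient, with the
    gradient estimated by central finite differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        grad.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return float(loss_fn(x) + lam * np.abs(grad).sum())

loss = lambda z: float((z ** 2).sum())  # toy quadratic loss
x = np.array([1.0, -2.0, 0.0])
# grad = 2x = [2, -4, 0], so L1 = 6; loss = 5; total = 5 + 0.01 * 6 = 5.06
print(round(sparsity_regularized_loss(loss, x), 4))  # 5.06
```

Minimizing the penalty drives many input-gradient entries toward zero, which is exactly the sparsity the parent paper observes arising naturally in some SNN architectures.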
arXiv Detail & Related papers (2024-05-30T05:39:27Z) - Fixed Inter-Neuron Covariability Induces Adversarial Robustness [26.878913741674058]
The vulnerability to adversarial perturbations is a major flaw of Deep Neural Networks (DNNs)
We have developed the Self-Consistent Activation (SCA) layer, which comprises neurons whose activations are consistent with each other, as they conform to a fixed, but learned, covariability pattern.
Models with an SCA layer achieved high accuracy and exhibited significantly greater robustness than multi-layer perceptron models against state-of-the-art Auto-PGD adversarial attacks, without being trained on adversarially perturbed data.
arXiv Detail & Related papers (2023-08-07T23:46:14Z) - Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks [59.142826407441106]
We study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability.
We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds.
arXiv Detail & Related papers (2022-09-19T18:48:00Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks. However, it remains unclear how adversarial training could improve the generalization abilities of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
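The alternating scheme described above can be sketched on a toy linear least-squares model: perturb the weights in the ascent direction of the loss, then perturb the features at the perturbed weights, then take a descent step on the doubly perturbed loss. This is an illustrative sketch of the alternating idea, not the paper's CAP algorithm; the model, step sizes `rho_w`, `rho_x`, and `lr` are assumptions:

```python
import numpy as np

def cap_step(w, x, y, rho_w=0.05, rho_x=0.05, lr=0.1):
    """One alternating co-adversarial step for L(w, x) = ||x @ w - y||^2."""
    grad_w = 2 * x.T @ (x @ w - y)                   # ascent direction in weights
    w_adv = w + rho_w * grad_w / (np.linalg.norm(grad_w) + 1e-12)
    grad_x = 2 * np.outer(x @ w_adv - y, w_adv)      # ascent direction in features
    x_adv = x + rho_x * grad_x / (np.linalg.norm(grad_x) + 1e-12)
    grad = 2 * x_adv.T @ (x_adv @ w_adv - y)         # descend on perturbed loss
    return w - lr * grad

x = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0])
w = cap_step(np.zeros(2), x, y)
print(float(np.sum((x @ w - y) ** 2)))  # below the initial loss of 2.0
```

Descending on the perturbed loss is what flattens both the weight and feature loss landscapes in the paper's formulation.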
arXiv Detail & Related papers (2021-10-28T02:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.