Exploring Adversarial Attack in Spiking Neural Networks with
Spike-Compatible Gradient
- URL: http://arxiv.org/abs/2001.01587v2
- Date: Wed, 30 Sep 2020 22:56:29 GMT
- Title: Exploring Adversarial Attack in Spiking Neural Networks with
Spike-Compatible Gradient
- Authors: Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng
Li, Yuan Xie
- Abstract summary: We build an adversarial attack methodology for SNNs trained by supervised algorithms.
This work can help reveal what happens in SNN attack and might stimulate more research on the security of SNN models and neuromorphic devices.
- Score: 29.567395824544437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, backpropagation-through-time-inspired learning algorithms
have been widely introduced into SNNs to improve performance, which opens the
possibility of attacking these models accurately given spatio-temporal gradient
maps. We propose two approaches to address the challenges of gradient-input
incompatibility and gradient vanishing. Specifically, we design a
gradient-to-spike converter to turn continuous gradients into ternary ones
compatible with spike inputs. Then, we design a gradient trigger that, upon
meeting an all-zero gradient map, constructs ternary gradients that randomly
flip the spike inputs with a controllable turnover rate. Putting these methods
together (see the sketch after the abstract), we build an adversarial attack
methodology for SNNs trained by supervised algorithms.
Moreover, we analyze the influence of the training loss function and the firing
threshold of the penultimate layer, which indicates a "trap" region under the
cross-entropy loss that can be escaped by threshold tuning. Extensive
experiments are conducted to validate the effectiveness of our solution.
Besides the quantitative analysis of the influencing factors, we show that
SNNs are more robust against adversarial attack than ANNs. This work can help
reveal what happens in SNN attack and might stimulate more research on the
security of SNN models and neuromorphic devices.
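To make the two components concrete, here is a minimal NumPy sketch of a gradient-to-spike converter and a gradient trigger as the abstract describes them; the sign-threshold ternarization rule, the uniform flip sampling, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_to_spike(grad, eps=1e-3):
    """Ternarize a continuous gradient map to {-1, 0, +1} so the
    perturbation stays compatible with binary spike inputs.
    Entries with magnitude below `eps` are treated as zero (assumed rule)."""
    g = np.zeros_like(grad, dtype=np.int8)
    g[grad > eps] = 1
    g[grad < -eps] = -1
    return g

def gradient_trigger(shape, turnover_rate, rng):
    """When the ternarized gradient is all zero (gradient vanishing),
    construct a random ternary map that flips a controllable fraction
    of spike positions (the turnover rate)."""
    mask = rng.random(shape) < turnover_rate                     # positions to flip
    signs = rng.choice(np.array([-1, 1], dtype=np.int8), size=shape)
    return np.where(mask, signs, 0).astype(np.int8)

def perturb_spikes(spikes, grad, turnover_rate=0.01, rng=None):
    """One attack step: apply a ternary update to a binary spike map
    and clip back to {0, 1}, so flipped entries toggle 0 <-> 1."""
    rng = rng or np.random.default_rng(0)
    g = gradient_to_spike(grad)
    if not g.any():                    # all-zero gradient: fire the trigger
        g = gradient_trigger(spikes.shape, turnover_rate, rng)
    return np.clip(spikes + g, 0, 1)

# toy usage on a (time steps, neurons) spike map with a vanishing gradient
rng = np.random.default_rng(42)
spikes = (rng.random((8, 16)) < 0.2).astype(np.int8)
grad = rng.normal(scale=1e-4, size=spikes.shape)
adv = perturb_spikes(spikes, grad, turnover_rate=0.05, rng=rng)
print("flipped entries:", int((adv != spikes).sum()))
```

The key invariant is that the update stays ternary, so adding it to a binary spike map and clipping yields another valid spike map.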
Related papers
- Towards Reliable Evaluation of Adversarial Robustness for Spiking Neural Networks [12.939513095038977]
Spiking Neural Networks (SNNs) utilize spike-based activations to mimic the brain's energy-efficient information processing.
We propose a more reliable framework for evaluating SNN adversarial robustness.
arXiv Detail & Related papers (2025-12-27T08:43:06Z)
- Privacy in Federated Learning with Spiking Neural Networks [5.715736614295801]
Spiking neural networks (SNNs) have emerged as prominent candidates for embedded and edge AI.
We present the first comprehensive empirical study of gradient leakage in SNNs across diverse data domains.
Results indicate that the combination of event-driven dynamics and surrogate-gradient training substantially reduces gradient informativeness.
arXiv Detail & Related papers (2025-11-26T08:55:11Z)
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such sign-based updates influence step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
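As a toy illustration of the distinction this paper draws, the sketch below contrasts a sign-based PGD step with a raw-gradient step; the quadratic stand-in loss and the L2 rescaling of the raw step are assumptions for illustration, not the RGD algorithm as published.

```python
import numpy as np

def loss_and_grad(x, target):
    """Toy smooth objective standing in for an attack loss."""
    diff = x - target
    return 0.5 * float(diff @ diff), diff

def pgd_sign_step(x, grad, alpha):
    # classic PGD: only the sign of each gradient component is used
    return x + alpha * np.sign(grad)

def raw_grad_step(x, grad, alpha):
    # raw gradient direction, rescaled so the step has comparable length
    return x + alpha * grad / (np.linalg.norm(grad) + 1e-12)

x0 = np.zeros(4)
target = np.array([0.9, -0.1, 0.05, -0.8])
for step in (pgd_sign_step, raw_grad_step):
    x = x0.copy()
    for _ in range(20):
        _, g = loss_and_grad(x, target)
        x = step(x, g, alpha=0.05)   # ascend the toy loss
    print(step.__name__, np.round(x, 3))
```

The sign step moves every coordinate by the same amount, even where the gradient is nearly zero; the raw step preserves relative magnitudes.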
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
- Gradient Scaling on Deep Spiking Neural Networks with Spike-Dependent Local Information [2.111711135667053]
We train deep spiking neural networks (SNNs) with spatio-temporal backpropagation (STBP) using a surrogate gradient.
In this work, we propose gradient scaling with local spike information, namely the temporal relation between pre- and post-synaptic spikes.
By considering the causality between spikes, we can enhance the training of deep SNNs.
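The general flavor of such spike-dependent scaling can be sketched as follows; the exponential causality weight, the time constant, and the function names are assumptions for illustration, not the paper's actual rule.

```python
import numpy as np

def causality_weight(pre_spike_times, post_spike_time, tau=5.0):
    """Weight a synapse's gradient by how strongly recent presynaptic
    spikes could have caused the postsynaptic spike (illustrative)."""
    dt = post_spike_time - pre_spike_times
    causal = dt[dt > 0]                       # pre fired before post
    if causal.size == 0:
        return 0.0
    return float(np.exp(-causal / tau).sum())

def scaled_gradient(surrogate_grad, pre_times, post_time):
    """Modulate one synapse's surrogate-gradient contribution by the
    local spike-timing relation between pre- and post-neuron."""
    return surrogate_grad * causality_weight(pre_times, post_time)

pre = np.array([2.0, 9.0, 14.0])   # presynaptic spike times (ms)
post = 10.0                        # postsynaptic spike time (ms)
print(scaled_gradient(0.3, pre, post))  # spikes at t=2, 9 contribute
```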
arXiv Detail & Related papers (2023-08-01T13:58:21Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
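The underlying contrast between explicit and implicit updates can be seen on a stiff toy quadratic; this generic sketch is not the paper's PINN-specific algorithm, and the step size and matrix below are assumptions chosen only to expose the stability gap.

```python
import numpy as np

# Explicit SGD:  theta_{k+1} = theta_k - lr * grad(theta_k)
# Implicit SGD:  theta_{k+1} = theta_k - lr * grad(theta_{k+1})
# For the quadratic loss 0.5 * theta^T A theta the implicit update has a
# closed form, theta_{k+1} = (I + lr * A)^{-1} theta_k, and stays stable
# for any positive step size, unlike the explicit update.

A = np.diag([1.0, 100.0])          # stiff toy problem
lr = 0.05                          # explicit update diverges (lr > 2/100)
theta_exp = np.array([1.0, 1.0])
theta_imp = np.array([1.0, 1.0])
I = np.eye(2)
for _ in range(30):
    theta_exp = theta_exp - lr * (A @ theta_exp)        # explicit step
    theta_imp = np.linalg.solve(I + lr * A, theta_imp)  # implicit step
print("explicit:", theta_exp)      # blows up in the stiff direction
print("implicit:", theta_imp)      # decays smoothly toward zero
```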
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
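The equilibrium-state implicit differentiation that SPIDE builds on can be sketched generically (without SPIDE's spike-based estimation); the tanh dynamics, weight scale, and loss below are assumptions chosen so the fixed-point iteration converges.

```python
import numpy as np

# Feedback (equilibrium) model: a* satisfies a* = f(a*) with
# f(a) = tanh(W @ a + b). The implicit function theorem gives the
# gradient of a loss on a* without backprop through solver iterations:
#   dL/db = D * (I - D W)^{-T} dL/da*,  D = diag(1 - a*^2).

rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))   # small weights -> contraction
b = rng.standard_normal(n)

a = np.zeros(n)
for _ in range(100):                    # solve for the equilibrium a*
    a = np.tanh(W @ a + b)

target = np.ones(n)
dL_da = a - target                      # grad of 0.5 * ||a* - target||^2

D = 1.0 - a ** 2                        # tanh'(pre-activation) at a*
J = D[:, None] * W                      # df/da = diag(D) @ W
v = np.linalg.solve((np.eye(n) - J).T, dL_da)
dL_db = D * v                           # gradient w.r.t. the bias b
print(np.round(dL_db, 4))
```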
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- A temporally and spatially local spike-based backpropagation algorithm to enable training in hardware [0.0]
Spiking Neural Networks (SNNs) have emerged as a hardware efficient architecture for classification tasks.
There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANNs).
arXiv Detail & Related papers (2022-07-20T08:57:53Z)
- On the Robustness of Bayesian Neural Networks to Adversarial Attacks [11.277163381331137]
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
We show that vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution.
We prove that the expected gradient of the loss with respect to the BNN posterior distribution is vanishing, even when each neural network sampled from the posterior is vulnerable to gradient-based attacks.
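A toy numerical illustration of that vanishing-expectation effect, with random linear models standing in for posterior samples (an assumption; no real BNN is involved):

```python
import numpy as np

# Each model drawn from the posterior may have a large input-gradient,
# yet the *expected* gradient, which a gradient-based attack on the
# posterior predictive would follow, can nearly vanish when individual
# gradients point in different directions.
rng = np.random.default_rng(1)

# stand-in for posterior samples: 500 models whose loss gradients at a
# fixed input are i.i.d. standard normal 10-dim vectors
posterior_grads = rng.standard_normal((500, 10))

per_model_norm = np.linalg.norm(posterior_grads, axis=1).mean()
expected_grad = posterior_grads.mean(axis=0)
print("mean per-sample gradient norm:", round(per_model_norm, 3))             # ~3.1
print("norm of expected gradient:   ", round(float(np.linalg.norm(expected_grad)), 3))  # ~0.15
```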
arXiv Detail & Related papers (2022-07-13T12:27:38Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Dynamically Sampled Nonlocal Gradients for Stronger Adversarial Attacks [3.055601224691843]
The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research.
We propose Dynamically Sampled Nonlocal Gradient Descent (DSNGD) for stronger adversarial attacks.
We show that DSNGD-based attacks are on average 35% faster while achieving 0.9% to 27.1% higher success rates compared to their gradient descent-based counterparts.
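One way to read "dynamically sampled nonlocal gradients" is to average gradients over points sampled from the recent optimization path rather than using only the current gradient; the sketch below is that loose reading, with the sampling window and toy loss as assumptions, not DSNGD as published.

```python
import numpy as np

def nonlocal_gradient_step(x, grad_fn, history, alpha=0.05, sample_k=5, rng=None):
    """One ascent step using a 'nonlocal' gradient: average gradients at
    points sampled from the optimization path so far. The uniform
    sampling here is an illustrative assumption."""
    rng = rng or np.random.default_rng(0)
    history.append(x.copy())
    idx = rng.choice(len(history), size=min(sample_k, len(history)), replace=False)
    g = np.mean([grad_fn(history[i]) for i in idx], axis=0)
    return x + alpha * np.sign(g)

# toy usage on a wiggly stand-in loss
grad_fn = lambda p: -np.sin(6 * p)     # gradient of an oscillatory loss
x = np.array([0.1, -0.2])
hist = []
for _ in range(25):
    x = nonlocal_gradient_step(x, grad_fn, hist)
print(np.round(x, 3))
```

Averaging over the path smooths out locally oscillating gradients, which is the intuition behind nonlocal attack directions.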
arXiv Detail & Related papers (2020-11-05T08:55:24Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misleading gradients.
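The select-and-update step described above can be sketched directly from the summary; using |grad| as the score and a top-k selection are assumptions for illustration, not necessarily ADV-ReLU's exact scoring.

```python
import numpy as np

def select_topk_gradient(grad, keep_ratio=0.3):
    """Keep only the highest-scoring gradient entries (score = |grad|,
    an assumed choice) and zero out the rest, mimicking the
    'map values to scores and select a part of them' step."""
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]          # k-th largest magnitude
    return np.where(np.abs(grad) >= thresh, grad, 0.0)

rng = np.random.default_rng(3)
grad = rng.standard_normal((4, 4))
print(select_topk_gradient(grad, keep_ratio=0.25))
```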
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
- Learning Precise Spike Timings with Eligibility Traces [1.3190581566723916]
We show that STDP-aware synaptic gradients naturally emerge within the eligibility equations of e-prop.
We also present a simple extension of the LIF model that provides similar gradients.
In a simple experiment we demonstrate that the STDP-aware LIF neurons can learn precise spike timings from an e-prop-based gradient signal.
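A hedged sketch of LIF eligibility traces in the spirit of e-prop: a low-pass filter of presynaptic spikes multiplied by a pseudo-derivative of the postsynaptic membrane potential. The triangular pseudo-derivative, decay constant, and array shapes are common choices, not the paper's exact extension.

```python
import numpy as np

def lif_eligibility_traces(pre_spikes, v_post, v_th=1.0, alpha=0.9, gamma=0.3):
    """Per-timestep eligibility traces e[t, j, i]: filtered presynaptic
    spikes (zbar) outer-multiplied with a triangular pseudo-derivative
    of the postsynaptic membrane potential (psi)."""
    T, n_pre = pre_spikes.shape
    n_post = v_post.shape[1]
    zbar = np.zeros(n_pre)                       # filtered presyn spikes
    traces = np.zeros((T, n_post, n_pre))
    for t in range(T):
        zbar = alpha * zbar + pre_spikes[t]
        psi = gamma * np.maximum(0.0, 1.0 - np.abs(v_post[t] - v_th) / v_th)
        traces[t] = np.outer(psi, zbar)
    return traces

rng = np.random.default_rng(0)
pre = (rng.random((50, 8)) < 0.1).astype(float)   # 50 steps, 8 presyn neurons
v = rng.uniform(0.0, 1.2, size=(50, 3))           # 3 postsyn membrane potentials
e = lif_eligibility_traces(pre, v)
print(e.shape)                                    # (50, 3, 8)
```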
arXiv Detail & Related papers (2020-05-08T09:19:59Z)