Securing Deep Spiking Neural Networks against Adversarial Attacks
through Inherent Structural Parameters
- URL: http://arxiv.org/abs/2012.05321v1
- Date: Wed, 9 Dec 2020 21:09:03 GMT
- Title: Securing Deep Spiking Neural Networks against Adversarial Attacks
through Inherent Structural Parameters
- Authors: Rida El-Allami and Alberto Marchisio and Muhammad Shafique and Ihsen
Alouani
- Abstract summary: This paper explores the security enhancement of Spiking Neural Networks (SNNs) through internal structural parameters.
To the best of our knowledge, this is the first work that investigates the impact of structural parameters on SNNs' robustness to adversarial attacks.
- Score: 11.665517294899724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) algorithms have gained popularity owing to their practical
problem-solving capacity. However, they suffer from a serious integrity threat,
i.e., their vulnerability to adversarial attacks. In the quest for DL
trustworthiness, recent works claimed the inherent robustness of Spiking Neural
Networks (SNNs) to these attacks, without considering the variability in their
structural spiking parameters. This paper explores the security enhancement of
SNNs through internal structural parameters. Specifically, we investigate the
robustness of SNNs to adversarial attacks under different values of the
neuron's firing voltage threshold and time window boundaries. We thoroughly
study SNN security under different adversarial attacks in the strong white-box
setting, with different noise budgets and variable spiking parameters. Our
results show a significant impact of the structural parameters on SNN
security, and promising sweet spots can be reached to design trustworthy SNNs
with 85% higher robustness than a traditional non-spiking DL system. To the
best of our knowledge, this is the first work that investigates the impact of
structural parameters on SNNs' robustness to adversarial attacks. The proposed
contributions and the experimental framework are available online to the
community for reproducible research.
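To make the studied knobs concrete, below is a minimal, illustrative sketch (not the authors' released framework) of a current-driven LIF spiking network whose firing threshold V_th and time-window length T are exposed as constructor arguments, evaluated under a white-box FGSM attack with an L-infinity noise budget eps. The surrogate-gradient spike function, the network shape, and all names are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient, so that
    white-box gradient attacks can differentiate through the network."""
    @staticmethod
    def forward(ctx, v, v_th):
        ctx.save_for_backward(v)
        ctx.v_th = v_th
        return (v >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = ((v - ctx.v_th).abs() < 0.5).float()  # box around V_th
        return grad_out * surrogate, None

class ToySNN(nn.Module):
    """Fully-connected LIF network; v_th and T are the structural
    parameters the paper varies (values here are illustrative)."""
    def __init__(self, v_th=1.0, T=20, leak=0.9):
        super().__init__()
        self.v_th, self.T, self.leak = v_th, T, leak
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):                       # x: (B, 784), pixels in [0, 1]
        v = torch.zeros(x.size(0), 256, device=x.device)
        logits = torch.zeros(x.size(0), 10, device=x.device)
        for _ in range(self.T):                 # constant-current input encoding
            v = self.leak * v + self.fc1(x)     # leaky integration
            s = SpikeFn.apply(v, self.v_th)     # fire when v crosses V_th
            v = v - s * self.v_th               # soft reset after a spike
            logits = logits + self.fc2(s)
        return logits / self.T                  # rate-averaged readout

def fgsm(model, x, y, eps):
    """White-box FGSM under an L-infinity noise budget eps."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Sweep the structural parameters, in the spirit of the paper's study:
for v_th in (0.25, 0.5, 1.0):
    for T in (5, 10, 20):
        model = ToySNN(v_th=v_th, T=T)
        # ... train model, then measure accuracy on fgsm(model, x, y, eps)
```

Sweeping v_th and T while measuring adversarial accuracy at several eps values reproduces the kind of parameter study the abstract describes; the paper's actual grid and attack suite are in its released framework.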
Related papers
- Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness of SNNs trained by supervised learning rules under backdoor attacks.
arXiv Detail & Related papers (2024-09-24T02:15:19Z)
- Late Breaking Results: Fortifying Neural Networks: Safeguarding Against Adversarial Attacks with Stochastic Computing [1.523100574874007]
In neural network (NN) security, safeguarding model integrity and resilience against adversarial attacks has become paramount.
This study investigates the application of stochastic computing (SC) as a novel mechanism to fortify NN models.
arXiv Detail & Related papers (2024-07-05T20:49:32Z)
- Robust Stable Spiking Neural Networks [45.84535743722043]
Spiking neural networks (SNNs) are gaining popularity in deep learning due to their low energy budget on neuromorphic hardware.
Many studies have been conducted to defend SNNs from the threat of adversarial attacks.
This paper aims to uncover the robustness of SNNs through the lens of the stability of nonlinear systems.
arXiv Detail & Related papers (2024-05-31T08:40:02Z)
- Enhancing Adversarial Robustness in SNNs with Sparse Gradients [46.15229142258264]
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures.
Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks.
We propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-05-30T05:39:27Z)
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
However, there has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking Neural Networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training of SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
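As referenced in the entry on sparse gradients above, here is a minimal sketch of the gradient-sparsity idea: add an L1 penalty on the loss gradient with respect to the input, so gradient-based attacks have less signal to exploit. The penalty form, coefficient, and names below are illustrative assumptions, not that paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sparse_gradient_loss(model, x, y, lam=0.01):
    """Task loss plus an L1 penalty on the input gradient; sparser
    input gradients leave less signal for gradient-based attacks
    (illustrative form, hypothetical names)."""
    x = x.clone().detach().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # create_graph=True makes the penalty itself differentiable,
    # so it can be minimized alongside the task loss
    (grad,) = torch.autograd.grad(task_loss, x, create_graph=True)
    sparsity = grad.abs().flatten(1).sum(dim=1).mean()
    return task_loss + lam * sparsity
```

During training, this composite loss would be minimized in place of the plain cross-entropy.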
This list is automatically generated from the titles and abstracts of the papers on this site.