R-SNN: An Analysis and Design Methodology for Robustifying Spiking
Neural Networks against Adversarial Attacks through Noise Filters for Dynamic
Vision Sensors
- URL: http://arxiv.org/abs/2109.00533v1
- Date: Wed, 1 Sep 2021 14:40:04 GMT
- Title: R-SNN: An Analysis and Design Methodology for Robustifying Spiking
Neural Networks against Adversarial Attacks through Noise Filters for Dynamic
Vision Sensors
- Authors: Alberto Marchisio and Giacomo Pira and Maurizio Martina and Guido
Masera and Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) aim at providing energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS).
This paper studies the robustness of SNNs against adversarial attacks on such DVS-based systems, and proposes R-SNN, a novel methodology for robustifying SNNs through efficient noise filtering.
- Score: 15.093607722961407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) aim at providing energy-efficient learning
capabilities when implemented on neuromorphic chips with event-based Dynamic
Vision Sensors (DVS). This paper studies the robustness of SNNs against
adversarial attacks on such DVS-based systems, and proposes R-SNN, a novel
methodology for robustifying SNNs through efficient DVS-noise filtering. We are
the first to generate adversarial attacks on DVS signals (i.e., frames of
events in the spatio-temporal domain) and to apply noise filters for DVS
sensors in the quest for defending against adversarial attacks. Our results
show that the noise filters effectively prevent the SNNs from being fooled. The
SNNs in our experiments provide more than 90% accuracy on the DVS-Gesture and
NMNIST datasets under different adversarial threat models.
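The abstract attributes the defense to DVS-noise filtering without detailing the filter itself. As a point of reference, below is a minimal Python/NumPy sketch of a standard spatio-temporal background-activity filter for DVS event streams, in which an event survives only if a neighboring pixel fired recently. The (x, y, t, p) event layout and the time_window value are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def background_activity_filter(events, width, height, time_window=5000):
    """Drop isolated DVS events as noise.

    `events`: array of (x, y, t, p) rows sorted by timestamp t (microseconds).
    An event is kept only if some pixel in its 3x3 neighborhood fired
    within the last `time_window` microseconds.
    """
    last_ts = np.full((width, height), -np.inf)  # last spike time per pixel
    kept = []
    for x, y, t, p in events:
        x, y = int(x), int(y)
        # Bounds-clipped 3x3 neighborhood around the incoming event.
        patch = last_ts[max(0, x - 1):min(width, x + 2),
                        max(0, y - 1):min(height, y + 2)]
        if np.any(t - patch <= time_window):  # supported by recent activity
            kept.append((x, y, t, p))
        last_ts[x, y] = t
    return np.array(kept)
```

Adversarially injected events tend to be spatio-temporally isolated, so a filter of this kind removes much of the perturbation before it reaches the SNN.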
Related papers
- RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding [17.342181435229573]
Spiking Neural Networks (SNNs) have received widespread attention due to their unique neuronal dynamics and low-power nature.
Previous research empirically shows that SNNs with Poisson coding are more robust than Artificial Neural Networks (ANNs) on small-scale datasets.
This work theoretically demonstrates that SNNs' inherent adversarial robustness stems from their Poisson coding; a minimal encoding sketch follows this entry.
arXiv Detail & Related papers (2024-07-29T15:26:15Z)
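Since the RSC-SNN entry above ties robustness to Poisson coding, here is a minimal sketch of Poisson-style rate coding, where each pixel fires at each timestep with probability proportional to its normalized intensity. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def poisson_encode(image, num_steps=100, rng=None):
    """Rate coding: per timestep, each pixel spikes with probability
    proportional to its normalized intensity (a Bernoulli approximation
    of a Poisson process)."""
    rng = np.random.default_rng() if rng is None else rng
    rates = image / max(image.max(), 1e-8)  # normalize intensities to [0, 1]
    return (rng.random((num_steps, *image.shape)) < rates).astype(np.uint8)

# Toy usage: a 4x4 "image" becomes a (100, 4, 4) binary spike train.
spikes = poisson_encode(np.random.rand(4, 4))
print(spikes.shape, spikes.mean())
```

The stochasticity of the encoding acts like input smoothing, which is one intuition for why Poisson-coded SNNs resist small adversarial perturbations.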
- Robust Stable Spiking Neural Networks [45.84535743722043]
Spiking neural networks (SNNs) are gaining popularity in deep learning due to their low energy budget on neuromorphic hardware.
Many studies have been conducted to defend SNNs from the threat of adversarial attacks.
This paper aims to uncover the robustness of SNNs through the lens of the stability of nonlinear systems.
arXiv Detail & Related papers (2024-05-31T08:40:02Z)
- sVAD: A Robust, Low-Power, and Light-Weight Voice Activity Detection with Spiking Neural Networks [51.516451451719654]
Spiking Neural Networks (SNNs) are known to be biologically plausible and power-efficient.
This paper introduces a novel SNN-based Voice Activity Detection model, referred to as sVAD.
It provides effective auditory feature representation through SincNet and 1D convolution, and improves noise robustness with attention mechanisms.
arXiv Detail & Related papers (2024-03-09T02:55:44Z)
- Inherent Redundancy in Spiking Neural Networks [24.114844269113746]
Spiking Neural Networks (SNNs) are a promising energy-efficient alternative to conventional artificial neural networks.
In this work, we focus on three key questions regarding inherent redundancy in SNNs.
We propose an Advance Spatial Attention (ASA) module to harness SNNs' inherent redundancy.
arXiv Detail & Related papers (2023-08-16T08:58:25Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to the non-differentiability of spiking activity.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance with low latency; a sketch of the general spike-differentiation trick follows this entry.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
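The DSR entry above addresses the non-differentiability of spikes. DSR's specific construction differentiates through the spike representation itself; the sketch below instead shows the generic trick in this family, replacing the Heaviside spike's zero gradient with a smooth surrogate in the backward pass. The toy one-layer model, alpha, and threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return (v >= threshold).astype(np.float64)

def spike_surrogate_grad(v, threshold=1.0, alpha=4.0):
    """Backward pass: sigmoid-shaped surrogate used in place of the
    Heaviside's zero-almost-everywhere derivative."""
    s = 1.0 / (1.0 + np.exp(-alpha * (v - threshold)))
    return alpha * s * (1.0 - s)

# One gradient step on a toy one-layer spiking classifier (squared error).
rng = np.random.default_rng(0)
x = rng.random((8, 16))                         # batch of 8, 16 features
w = rng.normal(scale=0.5, size=(16, 4))         # weights for 4 classes
target = np.eye(4)[rng.integers(0, 4, 8)]       # one-hot labels

v = x @ w                                       # membrane potentials
out = spike_forward(v)                          # spikes
err = out - target                              # dLoss/dout
grad_w = x.T @ (err * spike_surrogate_grad(v))  # surrogate chain rule
w -= 0.1 * grad_w                               # SGD update
```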
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
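For the BNN entry above, here is a minimal sketch of the variational-Bayes prediction step: sample weights from a Gaussian posterior several times and average the class probabilities. BNN-DenseNet itself is a full DenseNet; the single linear layer here is only an illustrative stand-in.

```python
import numpy as np

def bayesian_predict(x, w_mu, w_log_sigma, num_samples=20, rng=None):
    """Monte Carlo prediction with a Gaussian variational posterior over
    weights: sample weights, run the model, average class probabilities."""
    rng = np.random.default_rng() if rng is None else rng
    probs = 0.0
    for _ in range(num_samples):
        w = w_mu + np.exp(w_log_sigma) * rng.normal(size=w_mu.shape)
        logits = x @ w
        e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
        probs += e / e.sum(axis=1, keepdims=True)
    return probs / num_samples  # averaging smooths small adversarial shifts

# Toy usage: 8 inputs, 16 features, 4 classes.
rng = np.random.default_rng(0)
x = rng.random((8, 16))
p = bayesian_predict(x, rng.normal(size=(16, 4)), np.full((16, 4), -2.0))
print(p.shape, p.sum(axis=1))  # (8, 4), each row sums to 1
```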
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise [13.904091056365765]
We present an SNN training algorithm that uses crafted input noise and incurs no additional training time.
Compared to standard-trained direct-input SNNs, our trained models yield up to 13.7% higher classification accuracy.
Our models also outperform inherently robust SNNs trained on rate-coded inputs, with improved or similar classification performance on attack-generated images; a minimal noise-crafting sketch follows this entry.
arXiv Detail & Related papers (2021-10-06T16:48:48Z)
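For the HIRE-SNN entry above, a minimal sketch of training with crafted input noise: perturb each batch along the sign of the input gradient (FGSM-style) and run the ordinary weight update on the perturbed batch, so no extra training epochs are needed. The linear model, epsilon, and learning rate are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def craft_input_noise(x, grad_x, epsilon=0.05):
    """FGSM-style crafted noise: nudge inputs along the sign of the input
    gradient so training sees slightly adversarial examples."""
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Toy training step on a linear model with squared error.
rng = np.random.default_rng(1)
x = rng.random((8, 16))
w = rng.normal(scale=0.5, size=(16, 4))
target = np.eye(4)[rng.integers(0, 4, 8)]

err = x @ w - target                 # dLoss/d(output)
grad_x = err @ w.T                   # gradient of the loss w.r.t. inputs
x_noisy = craft_input_noise(x, grad_x)

# Standard weight update, computed on the noise-augmented inputs.
w -= 0.1 * x_noisy.T @ (x_noisy @ w - target)
```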
- DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks [15.093607722961407]
Spiking Neural Networks (SNNs) and Dynamic Vision Sensors (DVS) are vulnerable to security threats.
We propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies.
Noise filters for DVS can be used as defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2021-07-01T12:56:36Z)
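For the DVS-Attacks entry above, a minimal sketch of the simplest attack surface: injecting spurious events into a DVS stream. The actual DVS-Attacks methodologies are stealthier and optimized; this only illustrates the event-stream mechanics, and the integer (x, y, t, p) layout is an assumption shared with the filter sketch earlier.

```python
import numpy as np

def inject_noise_events(events, width, height, num_fake=200, rng=None):
    """Append uniformly random spurious events to an integer (x, y, t, p)
    event array and re-sort by timestamp."""
    rng = np.random.default_rng() if rng is None else rng
    t_lo, t_hi = int(events[:, 2].min()), int(events[:, 2].max())
    fake = np.column_stack([
        rng.integers(0, width, num_fake),      # x coordinate
        rng.integers(0, height, num_fake),     # y coordinate
        rng.integers(t_lo, t_hi, num_fake),    # timestamp
        rng.integers(0, 2, num_fake),          # polarity
    ])
    merged = np.vstack([events, fake])
    return merged[np.argsort(merged[:, 2])]    # keep the stream time-ordered
```

Events injected this way are mostly isolated, which is exactly what a background-activity filter like the one sketched after the abstract removes.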
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are potential candidates for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.