DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking
Neural Networks
- URL: http://arxiv.org/abs/2107.00415v1
- Date: Thu, 1 Jul 2021 12:56:36 GMT
- Title: DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking
Neural Networks
- Authors: Alberto Marchisio and Giacomo Pira and Maurizio Martina and Guido
Masera and Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) and Dynamic Vision Sensors (DVS) are vulnerable to security threats.
We propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies.
Noise filters for DVS can be used as defense mechanisms against adversarial attacks.
- Score: 15.093607722961407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs), despite being energy-efficient when
implemented on neuromorphic hardware and coupled with event-based Dynamic
Vision Sensors (DVS), are vulnerable to security threats, such as adversarial
attacks, i.e., small perturbations added to the input for inducing a
misclassification. To this end, we propose DVS-Attacks, a set of stealthy yet
efficient adversarial attack methodologies targeted to perturb the event
sequences that compose the input of the SNNs. First, we show that noise filters
for DVS can be used as defense mechanisms against adversarial attacks.
Afterwards, we implement several attacks and test them in the presence of two
types of noise filters for DVS cameras. The experimental results show that the
filters can only partially defend the SNNs against our proposed DVS-Attacks.
Using the best settings for the noise filters, our proposed Mask Filter-Aware
Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset
and by more than 65% on the MNIST dataset, compared to the original clean
frames. The source code of all the proposed DVS-Attacks and noise filters is
released at https://github.com/albertomarchisio/DVS-Attacks.
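The abstract names the attack surface (the event sequences a DVS feeds to an SNN) and the defense (noise filters), but not implementation details. The Python sketch below is a rough illustration under assumed conventions: each event carries (x, y, timestamp, polarity); inject_random_events is a generic event-injection perturbation, not the paper's Mask Filter-Aware Dash Attack, and background_activity_filter is a standard spatio-temporal correlation filter of the kind commonly used to denoise DVS streams, not the exact filters in the linked repository.
```python
import numpy as np

# Assumed event representation: one record per DVS event with pixel
# coordinates (x, y), timestamp t in microseconds, and polarity p in {-1, +1}.
EVENT_DTYPE = np.dtype([("x", np.int32), ("y", np.int32),
                        ("t", np.int64), ("p", np.int8)])

def inject_random_events(events, n_fake, width, height, rng=None):
    """Generic event-injection perturbation (illustrative only, not the
    paper's Mask Filter-Aware Dash Attack): add spurious events at random
    pixels and timestamps inside the recording window."""
    rng = np.random.default_rng() if rng is None else rng
    t_min, t_max = int(events["t"].min()), int(events["t"].max())
    fake = np.empty(n_fake, dtype=EVENT_DTYPE)
    fake["x"] = rng.integers(0, width, n_fake)
    fake["y"] = rng.integers(0, height, n_fake)
    fake["t"] = rng.integers(t_min, t_max + 1, n_fake)
    fake["p"] = rng.choice([-1, 1], n_fake)
    return np.sort(np.concatenate([events, fake]), order="t")  # keep time order

def background_activity_filter(events, width, height, dt_us=5000):
    """Standard spatio-temporal correlation filter (a generic instance of the
    noise-filter defense the abstract refers to): keep an event only if some
    pixel in its 3x3 neighbourhood fired within the last dt_us microseconds."""
    last_ts = np.full((height + 2, width + 2), -np.inf)  # padded timestamp map
    keep = np.zeros(len(events), dtype=bool)
    for i, ev in enumerate(events):
        x, y, t = int(ev["x"]) + 1, int(ev["y"]) + 1, int(ev["t"])
        keep[i] = (t - last_ts[y - 1:y + 2, x - 1:x + 2].max()) <= dt_us
        last_ts[y, x] = t
    return events[keep]
```
In this framing, the abstract's finding that the filters only partially defend the SNNs indicates that perturbations can be crafted to pass through such filtering; the exact filter-aware attack strategies are in the repository linked above.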
Related papers
- Query-Efficient Hard-Label Black-Box Attack against Vision Transformers [9.086983253339069]
Vision transformers (ViTs) face similar security risks from adversarial attacks as deep convolutional neural networks (CNNs)
This article explores the vulnerability of ViTs against adversarial attacks under a black-box scenario.
We propose a novel query-efficient hard-label adversarial attack method called AdvViT.
arXiv Detail & Related papers (2024-06-29T10:09:12Z) - VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise [0.5755004576310334]
We introduce a novel noise-reduction procedure, Vector Quantization U-Net (VQUNet), to reduce adversarial noise and reconstruct data with high fidelity.
VQUNet features a discrete latent representation learning through a multi-scale hierarchical structure for both noise reduction and data reconstruction.
It outperforms other state-of-the-art noise-reduction-based defense methods under various adversarial attacks for both Fashion-MNIST and CIFAR10 datasets.
arXiv Detail & Related papers (2024-06-05T10:10:03Z) - Push-Pull: Characterizing the Adversarial Robustness for Audio-Visual
Active Speaker Detection [88.74863771919445]
We reveal the vulnerability of AVASD models under audio-only, visual-only, and audio-visual adversarial attacks.
We also propose a novel audio-visual interaction loss (AVIL) to make it difficult for attackers to find feasible adversarial examples.
arXiv Detail & Related papers (2022-10-03T08:10:12Z) - A Mask-Based Adversarial Defense Scheme [3.759725391906588]
Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs)
We propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effect from adversarial attacks.
arXiv Detail & Related papers (2022-04-21T12:55:27Z) - KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against such attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarial attacks.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights (a generic test-time-augmentation sketch of this idea appears after this list).
Our strategy achieves state-of-the-art adversarial robustness on diverse attacks with minimal compromise on the natural images' classification.
arXiv Detail & Related papers (2021-09-16T19:16:00Z) - R-SNN: An Analysis and Design Methodology for Robustifying Spiking
Neural Networks against Adversarial Attacks through Noise Filters for Dynamic
Vision Sensors [15.093607722961407]
Spiking Neural Networks (SNNs) aim at providing energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS)
This paper studies the robustness of SNNs against adversarial attacks on such DVS-based systems, and proposes R-SNN, a novel methodology for robustifying SNNs through efficient noise filtering.
arXiv Detail & Related papers (2021-09-01T14:40:04Z) - Detect and Defense Against Adversarial Examples in Deep Learning using
Natural Scene Statistics and Adaptive Denoising [12.378017309516965]
We propose a framework for defending DNNs against adversarial samples.
The detector aims to detect AEs by characterizing them through the use of natural scene statistics.
The proposed method outperforms the state-of-the-art defense techniques.
arXiv Detail & Related papers (2021-07-12T23:45:44Z) - Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - GreedyFool: Distortion-Aware Sparse Adversarial Attack [138.55076781355206]
Modern deep neural networks (DNNs) are vulnerable to adversarial samples.
Sparse adversarial samples can fool the target model by only perturbing a few pixels.
We propose a novel two-stage distortion-aware greedy-based method dubbed "GreedyFool".
arXiv Detail & Related papers (2020-10-26T17:59:07Z)
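On the KATANA entry above: its summary describes robustifying a pretrained DNN without touching its weights by aggregating predictions over test-time augmentations. The snippet below is a generic sketch of that idea, not the actual KATANA implementation; the names tta_predict and toy_model are hypothetical, and the stand-in classifier exists only to make the sketch runnable.
```python
import numpy as np

def tta_predict(model, image, n_aug=16, rng=None):
    """Generic test-time-augmentation defence (illustrative, not KATANA's
    actual method): average the model's class probabilities over randomly
    augmented copies of the input; the model's weights are never modified."""
    rng = np.random.default_rng() if rng is None else rng
    probs = []
    for _ in range(n_aug):
        aug = image.copy()
        if rng.random() < 0.5:                   # random horizontal flip
            aug = aug[:, ::-1]
        dy, dx = rng.integers(-2, 3, size=2)     # small random translation
        aug = np.roll(aug, shift=(int(dy), int(dx)), axis=(0, 1))
        probs.append(model(aug))
    return np.mean(probs, axis=0)                # averaged class probabilities

# Hypothetical stand-in classifier: softmax over the mean intensity of each
# image quadrant, used here only so the example runs end to end.
def toy_model(img):
    h, w = img.shape
    feats = np.array([img[:h // 2, :w // 2].mean(), img[:h // 2, w // 2:].mean(),
                      img[h // 2:, :w // 2].mean(), img[h // 2:, w // 2:].mean()])
    e = np.exp(feats - feats.max())
    return e / e.sum()

print(tta_predict(toy_model, np.random.default_rng(0).random((28, 28))))
```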