Adversarial Attacks on Spiking Convolutional Networks for Event-based
Vision
- URL: http://arxiv.org/abs/2110.02929v1
- Date: Wed, 6 Oct 2021 17:20:05 GMT
- Title: Adversarial Attacks on Spiking Convolutional Networks for Event-based
Vision
- Authors: Julian Büchel, Gregor Lenz, Yalun Hu, Sadique Sheik, Martino Sorbaro
- Abstract summary: We show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data.
We also verify, for the first time, the effectiveness of these perturbations directly on neuromorphic hardware.
- Score: 0.6999740786886537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event-based sensing using dynamic vision sensors is gaining traction in
low-power vision applications. Spiking neural networks work well with the
sparse nature of event-based data and suit deployment on low-power neuromorphic
hardware. Being a nascent field, the sensitivity of spiking neural networks to
potentially malicious adversarial attacks has received very little attention so
far. In this work, we show how white-box adversarial attack algorithms can be
adapted to the discrete and sparse nature of event-based visual data, and to
the continuous-time setting of spiking neural networks. We test our methods on
the N-MNIST and IBM Gestures neuromorphic vision datasets and show that
adversarial perturbations achieve a high success rate by injecting a relatively
small number of appropriately placed events. We also verify, for the first time, the
effectiveness of these perturbations directly on neuromorphic hardware.
Finally, we discuss the properties of the resulting perturbations and possible
future directions.
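As a rough illustration of what such an attack can look like, the snippet below is a generic sketch (not the authors' algorithm) that guides event injection by the gradient of the classification loss with respect to a dense, time-binned event tensor. The `model` interface, the `(T, P, H, W)` tensor layout, and the `n_events` budget are assumptions, and backpropagating through a spiking network presupposes surrogate gradients such as those provided by common SNN libraries.

```python
import torch
import torch.nn.functional as F

def event_injection_attack(model, events, label, n_events=50):
    """Gradient-guided event injection (illustrative sketch only).

    events: binary event tensor of shape (T, P, H, W) obtained by binning a
            DVS event stream in time and polarity.
    model:  a spiking classifier that maps a batch of such tensors to class
            logits and exposes surrogate gradients (hypothetical interface).
    """
    x = events.clone().float().requires_grad_(True)
    logits = model(x.unsqueeze(0))                       # (1, num_classes)
    loss = F.cross_entropy(logits, torch.tensor([label]))
    loss.backward()

    # Rank currently empty (time, polarity, y, x) bins by how much adding an
    # event there would increase the loss, according to the local gradient.
    grad = x.grad.detach()
    scores = torch.where(events == 0, grad, torch.full_like(grad, float("-inf")))
    top = torch.topk(scores.view(-1), n_events).indices

    # Inject a small budget of events at the highest-scoring locations.
    x_adv = events.clone().float().view(-1)
    x_adv[top] = 1.0
    return x_adv.view(events.shape)
```

Deleting events could be handled analogously by ranking occupied bins with the negative gradient; either way, the perturbation stays discrete and sparse, which is the constraint the paper's attacks have to respect.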
Related papers
- A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation [16.696524554516294]
We develop a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator.
Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN.
Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
arXiv Detail & Related papers (2024-04-08T20:42:10Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks [28.081328051535618]
Adversarial attacks can fool a high-performance convolutional neural network (CNN) into making incorrect predictions.
Our work introduces a visual analytics approach to understanding adversarial attacks.
A visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks.
arXiv Detail & Related papers (2023-03-06T01:01:56Z)
- Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data [15.084703823643311]
Spiking neural networks (SNNs) offer enhanced energy efficiency and biologically plausible data processing capabilities.
This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers.
We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy.
arXiv Detail & Related papers (2023-02-13T11:34:17Z)
- Spiking Neural Networks for Frame-based and Event-based Single Object Localization [26.51843464087218]
Spiking neural networks have shown much promise as an energy-efficient alternative to artificial neural networks.
We propose a spiking neural network approach for single object localization trained using surrogate gradient descent.
We compare our method with similar artificial neural networks and show that our model achieves competitive or better accuracy, improved robustness to various corruptions, and lower energy consumption.
arXiv Detail & Related papers (2022-06-13T22:22:32Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons [0.6899744489931016]
We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer.
We correlate these neurons with the distribution of adversarial attacks on the network.
arXiv Detail & Related papers (2022-01-31T14:34:07Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)