NeuroAttack: Undermining Spiking Neural Networks Security through
Externally Triggered Bit-Flips
- URL: http://arxiv.org/abs/2005.08041v1
- Date: Sat, 16 May 2020 16:54:00 GMT
- Title: NeuroAttack: Undermining Spiking Neural Networks Security through
Externally Triggered Bit-Flips
- Authors: Valerio Venceslai, Alberto Marchisio, Ihsen Alouani, Maurizio Martina,
Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems.
While these systems are going mainstream, they have inherent security and reliability issues.
We propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues.
- Score: 11.872768663147776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their proven efficiency, machine-learning systems are deployed in a
wide range of complex real-life problems. More specifically, Spiking Neural
Networks (SNNs) emerged as a promising solution to the accuracy,
resource-utilization, and energy-efficiency challenges in machine-learning
systems. While these systems are going mainstream, they have inherent security
and reliability issues. In this paper, we propose NeuroAttack, a cross-layer
attack that threatens the integrity of SNNs by exploiting low-level reliability
issues through a high-level attack. Specifically, we trigger a fault-injection-based
stealthy hardware backdoor through carefully crafted adversarial input noise. Our
results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat
to state-of-the-art machine-learning techniques.
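To make the attack surface concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the underlying mechanics, assuming 8-bit signed quantized weights. Flipping a single high-order bit of one stored weight, the kind of corruption a Rowhammer-style fault injection produces, has little effect on benign inputs but swings a neuron's output once an adversarial input perturbation excites the corrupted weight. All names (`quantize`, `flip_bit`, the chosen indices and scale) are invented for illustration.

```python
import numpy as np

def quantize(w, scale=0.05):
    """Toy symmetric quantization of float weights to signed 8-bit integers."""
    return np.clip(np.round(w / scale), -128, 127).astype(np.int8)

def flip_bit(q, index, bit):
    """Flip one stored bit of one quantized weight (emulating a fault injection)."""
    q = q.copy()
    q.view(np.uint8)[index] ^= np.uint8(1 << bit)  # XOR the raw byte in place
    return q

def neuron_output(q, x, scale=0.05):
    """Dot product of an input with the dequantized weights of one neuron."""
    return float(x @ (q.astype(np.float32) * scale))

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=16)       # one neuron's float weights
q = quantize(w)
q_faulty = flip_bit(q, index=3, bit=6)  # corrupt a high-order bit of weight 3

x_clean = rng.normal(0.0, 1.0, size=16)
x_clean[3] = 0.0                        # benign input barely excites the corrupted weight
x_trigger = x_clean.copy()
x_trigger[3] = 4.0                      # adversarial noise concentrated on the flipped weight

for name, x in [("clean", x_clean), ("triggered", x_trigger)]:
    print(f"{name:9s}: healthy={neuron_output(q, x):+7.2f}  "
          f"faulty={neuron_output(q_faulty, x):+7.2f}")
```

On the clean input the healthy and faulty outputs coincide, while the triggered input exposes the corruption with a large output swing; this input-dependence is what makes such cross-layer backdoors hard to catch with functional testing.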
Related papers
- SpikingJET: Enhancing Fault Injection for Fully and Convolutional Spiking Neural Networks [37.89720165358964]
SpikingJET is a novel fault injector designed specifically for fully connected and convolutional Spiking Neural Networks (SNNs).
Our work underscores the critical need to evaluate the resilience of SNNs to hardware faults, considering their growing prominence in real-world applications.
arXiv Detail & Related papers (2024-03-30T14:51:01Z)

- Deep Reinforcement Learning with Spiking Q-learning [51.386945803485084]
Spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption.
Combining SNNs with deep reinforcement learning (RL) provides a promising, energy-efficient approach to realistic control tasks.
arXiv Detail & Related papers (2022-01-21T16:42:11Z)

- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable, and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)

- Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks [14.958919450708157]
We first discuss different vulnerabilities that can be exploited to mount security attacks on neural-network-based systems.
We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs.
arXiv Detail & Related papers (2021-05-05T08:11:03Z)

- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)

- DeepDyve: Dynamic Verification for Deep Neural Networks [16.20238078882485]
DeepDyve employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification.
We develop efficient and effective architecture and task exploration techniques to achieve an optimized risk/overhead trade-off in DeepDyve.
arXiv Detail & Related papers (2020-09-21T07:58:18Z)

- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can carry 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence, orders of magnitude faster than existing approaches (seconds versus …).
arXiv Detail & Related papers (2020-07-31T23:52:58Z)

- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)

- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)

- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep-learning systems (a generic sketch of this style of targeted bit-flip search appears after this list).
arXiv Detail & Related papers (2020-03-30T18:51:59Z)

- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are potential candidates for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
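As referenced in the DeepHammer entry above, here is a minimal, hypothetical sketch of the generic idea behind targeted bit-flip attacks: score every candidate single-bit flip in a small quantized model and keep the most damaging one. This is not DeepHammer's actual algorithm, which additionally models the physical constraints of Rowhammer-induced flips; the toy task, the 0.05 quantization scale, and all function names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable task and a quantized int8 "model" (one linear layer).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(int)
SCALE = 0.05
q = np.clip(np.round(true_w / SCALE), -128, 127).astype(np.int8)

def accuracy(q_weights):
    """Accuracy of the dequantized linear classifier on the toy task."""
    pred = (X @ (q_weights.astype(np.float32) * SCALE) > 0).astype(int)
    return (pred == y).mean()

def flip(q_weights, idx, bit):
    """Return a copy of the weights with one stored bit flipped."""
    out = q_weights.copy()
    out.view(np.uint8)[idx] ^= np.uint8(1 << bit)  # XOR the raw byte in place
    return out

# Greedy search: evaluate every (weight, bit) candidate, keep the most damaging flip.
candidates = ((idx, bit, accuracy(flip(q, idx, bit)))
              for idx in range(q.size) for bit in range(8))
idx, bit, acc = min(candidates, key=lambda t: t[2])
print(f"baseline accuracy {accuracy(q):.2f}; "
      f"worst single flip: weight {idx}, bit {bit} -> accuracy {acc:.2f}")
```

In practice, attacks of this kind chain several such flips and must target bits that the memory hardware can actually flip; the exhaustive single-flip scoring above only conveys why a handful of well-chosen bit-flips can deplete a quantized model's accuracy.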
This list is automatically generated from the titles and abstracts of the papers on this site.