Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and
Fault-Injection Attacks
- URL: http://arxiv.org/abs/2105.03251v1
- Date: Wed, 5 May 2021 08:11:03 GMT
- Title: Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and
Fault-Injection Attacks
- Authors: Faiq Khalid, Muhammad Abdullah Hanif, Muhammad Shafique
- Abstract summary: We first discuss different vulnerabilities that can be exploited for generating security attacks on neural network-based systems.
We then provide an overview of existing adversarial and fault-injection-based attacks on DNNs.
- Score: 14.958919450708157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: From tiny pacemaker chips to aircraft collision avoidance systems, the
state-of-the-art Cyber-Physical Systems (CPS) have increasingly started to rely
on Deep Neural Networks (DNNs). However, as concluded in various studies, DNNs
are highly susceptible to security threats, including adversarial attacks. In
this paper, we first discuss different vulnerabilities that can be exploited
for generating security attacks on neural network-based systems. We then
provide an overview of existing adversarial and fault-injection-based attacks
on DNNs. We also present a brief analysis to highlight different challenges in
the practical implementation of adversarial attacks. Finally, we discuss
various prospective ways to develop robust DNN-based systems that are resilient
to adversarial and fault-injection attacks.
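To make the surveyed attack surface concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the classic adversarial attacks in this space. It assumes a PyTorch classifier `model` and a labeled input pair `(x, y)`; this is an illustrative sketch, not the paper's own implementation.
```python
# Minimal FGSM sketch (illustrative; `model`, `x`, `y` are placeholders).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x crafted with one gradient-sign step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel in the direction that most increases the loss,
    # then keep the result in the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```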
Related papers
- Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness performance of SNNs trained by supervised learning rules under backdoor attacks.
arXiv Detail & Related papers (2024-09-24T02:15:19Z)
- Problem space structural adversarial attacks for Network Intrusion Detection Systems based on Graph Neural Networks [8.629862888374243]
We propose the first formalization of adversarial attacks specifically tailored for GNNs in network intrusion detection.
We outline and model the problem space constraints that attackers need to consider to carry out feasible structural attacks in real-world scenarios.
Our findings demonstrate the increased robustness of the models against classical feature-based adversarial attacks.
arXiv Detail & Related papers (2024-03-18T14:40:33Z)
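Structural attacks like the one above perturb the graph itself rather than node features. A hedged sketch of the underlying first-order edge-flip heuristic, assuming a dense adjacency matrix and a placeholder differentiable graph classifier `gnn(adj, feats)` returning logits of shape `[1, num_classes]`; the paper's problem-space formulation adds real-world feasibility constraints on top of this:
```python
# Gradient-guided structural (edge-flip) evasion sketch; illustrative only.
import torch
import torch.nn.functional as F

def greedy_edge_flip(gnn, adj, feats, label, budget=5):
    """Flip up to `budget` edges whose flips most increase the loss."""
    adj = adj.clone().float()
    for _ in range(budget):
        adj.requires_grad_(True)
        loss = F.cross_entropy(gnn(adj, feats), label)  # label: shape [1]
        grad = torch.autograd.grad(loss, adj)[0]
        # Benefit of flipping: +grad for adding an edge (0 -> 1),
        # -grad for removing one (1 -> 0).
        score = grad * (1.0 - 2.0 * adj)
        i, j = divmod(int(score.argmax()), adj.shape[1])
        with torch.no_grad():
            adj[i, j] = 1.0 - adj[i, j]
        adj = adj.detach()
    return adj
```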
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks exhibit an inherent vulnerability to small, non-random perturbations that emerge as adversarial attacks.
A mathematical congruence manifests between this mechanism and the uncertainty principle of quantum physics, revealing a previously unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
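For reference, the uncertainty relation the summary alludes to is, in its standard form (the paper's specific mapping of network quantities onto conjugate variables is not reproduced here):
```latex
% Heisenberg uncertainty relation for conjugate variables x (position)
% and p (momentum); the paper reports a congruence between this relation
% and the mechanism of adversarial perturbations.
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```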
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
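A minimal sketch of the timing mechanism described above: an LSTM fitted to inter-packet time deltas of benign traffic, then used autoregressively to assign benign-looking delays to attack packets. Class and parameter names are illustrative, not TANTRA's released code, and the training loop (e.g., MSE regression on benign delta sequences) is omitted.
```python
# Hedged sketch of LSTM-based traffic re-timing; illustrative only.
import torch
import torch.nn as nn

class DeltaLSTM(nn.Module):
    """Predict the next inter-packet delay from a window of past delays."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, deltas):            # deltas: [batch, seq_len, 1]
        out, _ = self.lstm(deltas)
        return self.head(out[:, -1])      # next delay: [batch, 1]

def reshape_timing(model, history, n_packets):
    """Emit benign-looking delays for n_packets attack packets."""
    delays = []
    window = history.clone()              # [1, seq_len, 1] benign deltas
    for _ in range(n_packets):
        with torch.no_grad():
            nxt = model(window)
        delays.append(float(nxt))
        # Slide the window forward so generation stays autoregressive.
        window = torch.cat([window[:, 1:], nxt.view(1, 1, 1)], dim=1)
    return delays
```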
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Artificial Neural Networks and Fault Injection Attacks [7.601937548486356]
This chapter is on the security assessment of artificial intelligence (AI) and neural network (NN) accelerators in the face of fault injection attacks.
It discusses the assets on these platforms and compares them with those known and well-studied in the field of cryptographic systems.
arXiv Detail & Related papers (2020-08-17T03:29:57Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
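A hedged sketch of the label-free idea in the entry above: craft the perturbation by maximizing the feature distortion of a fixed feature extractor rather than a supervised loss. `feat` is a placeholder network; this is one reading of the abstract, not the authors' released code.
```python
# Self-supervised (label-free) input-space perturbation sketch.
import torch

def self_supervised_perturb(feat, x, epsilon=0.03, steps=5):
    """Craft a perturbation without labels by distorting deep features."""
    clean_features = feat(x).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Push the representation of x_adv away from the clean one.
        distortion = (feat(x_adv) - clean_features).pow(2).mean()
        grad = torch.autograd.grad(distortion, x_adv)[0]
        x_adv = (x_adv + (epsilon / steps) * grad.sign()).detach()
        # Project back into the epsilon-ball and the valid image range.
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv
```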
- NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips [11.872768663147776]
Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems.
While these systems are going mainstream, they have inherent security and reliability issues.
We propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues.
arXiv Detail & Related papers (2020-05-16T16:54:00Z)
- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
arXiv Detail & Related papers (2020-03-30T18:51:59Z)
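Both NeuroAttack and DeepHammer ultimately rely on flipping individual bits of stored model parameters (e.g., via Rowhammer in DRAM). Below is a minimal software-level simulation of that fault primitive on int8-quantized weights; it is illustrative only, since the real attacks search for the most damaging target bits and trigger the flips in hardware.
```python
# Simulated single-bit fault injection into quantized (int8) weights.
import numpy as np

def flip_bit(weights_int8, index, bit):
    """Return a copy of the weights with one bit of one weight flipped."""
    w = weights_int8.copy()
    # Reinterpret the bytes as unsigned so XOR toggles the chosen bit cleanly.
    raw = w.view(np.uint8)
    raw[index] ^= np.uint8(1 << bit)
    return w

w = np.array([12, -3, 57, 101], dtype=np.int8)
w_faulty = flip_bit(w, index=2, bit=6)    # flip the 2^6 bit of weight 2
print(w, "->", w_faulty)                  # 57 ^ 64 = 121
```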
This list is automatically generated from the titles and abstracts of the papers on this site.